Make an Amount N With C Coins

This week we planned to discuss Dynamic Programming. The idea was to discuss 4-5 problems; however, the very first problem kept us busy for an entire hour. The problem is a well-known one: ‘Given an infinite supply of a set of coin denominations, $C = \{c_1, c_2, c_3, \ldots\}$, in how many ways can you make an amount N from those coins?’

The first question to ask is: do we allow permutations? That is, if $c_1 + c_2 = N$ is one way, do we count $c_2 + c_1 = N$ as another way? It makes sense to not allow permutations, and to count them all as one. For example, if $N = 5$ and $C = \{1, 2, 5\}$, you can make 5 in the following ways: {1, 1, 1, 1, 1}, {1, 1, 1, 2}, {1, 2, 2}, {5}.

We came up with a simple bottom-up DP. I have written it as a top-down DP here, since it will align better with the next iteration. The idea was $f(N) = \sum_i f(N - c_i)$ over all valid $c_i$; i.e., $f(5) = f(5-1) + f(5-2) + f(5-5) = f(4) + f(3) + f(0)$. $f(0)$ is 1, because you can make $0$ in exactly one way: by not using any coins (there was a debate as to why $f(0)$ is not 0). With memoisation, this algorithm is $O(NC)$ time, with $O(N)$ space.
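The original snippet isn't preserved here, but that first recursion can be sketched in Go as follows (a minimal sketch of the idea; the function and variable names are mine):

```go
package main

import "fmt"

// countWaysWrong is the faulty recursion f(n) = sum of f(n - c) over coins c.
// It actually counts ordered sequences (permutations) of coins, not sets.
func countWaysWrong(n int, coins []int, memo map[int]int) int {
	if n == 0 {
		return 1 // one way to make 0: use no coins
	}
	if v, ok := memo[n]; ok {
		return v
	}
	total := 0
	for _, c := range coins {
		if c <= n {
			total += countWaysWrong(n-c, coins, memo)
		}
	}
	memo[n] = total
	return total
}

func main() {
	// For N = 5 and C = {1, 2, 5} this prints 9, not the expected 4,
	// because e.g. {1, 2, 2}, {2, 1, 2} and {2, 2, 1} are all counted.
	fmt.Println(countWaysWrong(5, []int{1, 2, 5}, map[int]int{}))
}
```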

This looks intuitive and correct, but unfortunately it is wrong. Hat tip to Vishwas for pointing out that the answers were wrong, or we would have moved on to another problem. See if you can spot the problem before reading ahead.

The problem in the code is that we count permutations multiple times. For example, for $n = 3$ with coins {1, 2}, the result is 3 ({1, 1, 1}, {1, 2} and {2, 1}): {1, 2} and {2, 1} are being treated as distinct. This is not correct. A generic visualization follows.

Assume we start with an amount $n$, and we have only two types of coins, of worth $1$ and $2$ each. Now notice how the recursion tree forms. If we take the coin with denomination $1$ first and the one with denomination $2$ second, we get to a subtree with amount $n-3$; on the other side, if we take $2$ first and $1$ next, we get a subtree with the same amount. That subtree is counted twice by the above solution, even though the order of the coins does not matter.

After some discussion, we agreed on a top-down DP which keeps track of which coins may still be used, and thus avoids duplication. The idea is to always consume the coins in a fixed (say, lexicographic) order. It doesn’t matter whether the coins are sorted or not; what matters is that we always follow the same sequence. For example, say I have three coins {1, 2, 5}, and the rule is: once I have used coin $i$, I can only use coins with indices in $[i, |C|]$ from then on. So, if I have used the coin with value $2$, I can only use $2$ and $5$ in the next steps. The moment I use 5, I can’t use 2 any more.

If you follow this, the allowed sequences of coins are exactly those in which the coin indices are monotonically non-decreasing, i.e., we won’t encounter a scenario such as {1, 2, 1}. This was done in a top-down DP as follows:
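The original snippet isn't preserved here; a minimal Go sketch of that top-down DP, memoised on the pair (amount, first usable coin index), would look like this (all names are mine):

```go
package main

import "fmt"

// countWays returns the number of ways to form amount n using only
// coins[i:], so coin indices along any path never decrease.
func countWays(n, i int, coins []int, memo map[[2]int]int) int {
	if n == 0 {
		return 1
	}
	key := [2]int{n, i}
	if v, ok := memo[key]; ok {
		return v
	}
	total := 0
	for j := i; j < len(coins); j++ {
		if coins[j] <= n {
			// once coin j is used, only coins j, j+1, ... remain usable
			total += countWays(n-coins[j], j, coins, memo)
		}
	}
	memo[key] = total
	return total
}

func main() {
	fmt.Println(countWays(5, 0, []int{1, 2, 5}, map[[2]int]int{})) // 4
}
```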

Now, this is a fairly standard problem. I decided to check on the interwebs whether my DP skills have gotten rusty. I found the solution to the same problem on GeeksforGeeks, where they present it in bottom-up DP fashion. There is also an $O(N)$ space solution at the end, which is very similar to our first, faulty solution, with one key difference: the two loops are exchanged. That is, we loop over coins in the outer loop and over amounts in the inner loop.
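That swapped-loop, $O(N)$-space version can be sketched in Go as follows (a sketch of the idea, not GeeksforGeeks' code; names are mine):

```go
package main

import "fmt"

// countWays counts the ways to form amount n from coins, ignoring order.
// The coin loop is OUTSIDE the amount loop; swapping the two loops
// would count permutations instead.
func countWays(n int, coins []int) int {
	table := make([]int, n+1)
	table[0] = 1 // one way to make 0: use no coins
	for _, c := range coins {
		for amt := c; amt <= n; amt++ {
			table[amt] += table[amt-c]
		}
	}
	return table[n]
}

func main() {
	fmt.Println(countWays(5, []int{1, 2, 5}))  // 4
	fmt.Println(countWays(10, []int{1, 2, 5})) // 10
}
```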

This is almost magical: changing the order of the loops fixes the problem. I have worked out the table here, step by step. Please let me know if there is a mistake.

Step 1: Calculating with 3 coins, up to N = 10. Although we use a one-dimensional array, I have added multiple rows to show how the values change over the iterations.

Step 2: Initialize table[0] = 1.

Step 3: Now, we start with coin 1. Only cell 0 has a value. We start filling in values for $n = 1, 2, 3, \ldots$. Each of these amounts can be made by adding \$1 to the amount one less than it. The total number of ways at this point is 1 for every amount, since we are using only the first coin, and the only way to construct an amount is $1 + 1 + \ldots + 1 = n$.

Step 4: Now, we use coin 2, with denomination \$2. We start with $n = 2$, since we can’t construct any amount less than \$2 with this coin. The number of ways of making amounts \$2 and \$3 becomes $2$: one is the existing way, the other is removing the last two \$1s and adding a \$2. Similarly, verify mentally (or manually, on paper) what the remaining answers would be.

Step 5: We repeat the same for coin 3. The cells with a dark green bottom are the final values; all others would have been overwritten.

I was looking into where exactly this solution maintains the monotonically increasing order that we wanted in the top-down DP. It is very subtle, and can be understood if you verify step 4 on paper for $n = 4, 5, 6, \ldots$ and look at the chains that the values form. In the faulty solution, with amounts in the outer loop, by the time we reach amount $n$ we have computed all previous amounts over all possible coins. So if we compute the count for $n$ using coin $i$ and the result for $n - cval[i]$, it is possible that the result for $n - cval[i]$ includes ways that use coins $> i$. This is undesirable. When we compute the other way round, we handle one coin at a time, in that same lexicographic order. So when we use the result for $n - cval[i]$, we are sure that it does not include counts for coins $> i$, because those haven’t been computed yet; they are only added after we finish processing coin $i$.
As they say, sometimes being simple is the hardest thing to do. This was a simple problem, but it still taught me a lot.

Static to Dynamic Transformation - I

Instead of going into fractal trees directly, I am going to be posting a lot of assorted related material that I am going through. Most of it is joint work with Dhruv and Bhuwan. This post is about the Static-to-Dynamic Transformation lecture notes by Jeff Erickson.

The motivation behind this exercise is to learn how we can take a static data-structure (which uses preprocessing to construct itself and answer future queries) and build a dynamic data-structure that can keep taking inserts continuously. I won’t be formal here, so there will be a lot of chinks in the explanation. Either excuse me for that, or read the original notes.

A Decomposable Search Problem

A search problem $Q$ with input $x \in \mathcal{X}$ over a data-set $\mathcal{D}$ is said to be decomposable if, for any pair of disjoint data sets $D$ and $D’$, the answer over $D \cup D’$ can be computed from the answers over $D$ and $D’$ in constant time. Or:

$Q(x, D \cup D’) = Q(x, D) \diamond Q(x, D’)$

where $\diamond$ is an associative and commutative function with the same range as $Q$, computable in $O(1)$ time. Examples of such a function are $+$, $\times$, $\min$, $\max$, $\vee$, $\wedge$, etc. (but not $-$, $\div$, etc.).

An example of such a decomposable search problem is a simple existence query, where $Q(x, D)$ returns true if $x$ exists in $D$; the $\diamond$ function is the binary OR. Another example: the dataset is a collection of coordinates, and the query asks for the number of points which lie in a given rectangle; the $\diamond$ function here is $+$.

Making it Dynamic (Insertions Only)

Suppose we have a static structure that can store $n$ elements in $S(n)$ space after $P(n)$ preprocessing, and can answer a query in $Q(n)$ time.
What we mean by a static data-structure is that we can only make inserts into it exactly once. But we can iterate through the data-structure (this is what the notes have missed, but it is a requirement to get the bounds). Then, we can construct a dynamic data-structure with space $O(S(n))$, query time $O(\log n) \cdot Q(n)$, and amortized insert time $O(\log n) \cdot \frac{P(n)}{n}$. How do we do this?

Query: Our data-structure has $l = \lfloor \lg n \rfloor$ levels. Each level $i$ is either empty, or holds a static data-structure with $2^i$ elements. Since the search query is decomposable, the answer is simply $Q(x, D_0) \diamond Q(x, D_1) \diamond \ldots \diamond Q(x, D_l)$. It is easy to see why the total time taken for the query is $O(\log n) \cdot Q(n)$.

An interesting point: if $Q(n) > n^\epsilon$ for some $\epsilon > 0$ (which essentially means $Q(n)$ grows polynomially in $n$), then the total query time is just $O(Q(n))$. So, for example, if $Q(n) = n^2$ for the static data-structure, the query time for the dynamic data-structure is still $O(Q(n))$. Here is the proof; the total query time is:

$\sum_i Q\left(\frac{n}{2^i}\right) = \sum_i \left(\frac{n}{2^i}\right)^\epsilon = n^\epsilon \sum_i \left(\frac{1}{2^i}\right)^\epsilon = n^\epsilon \cdot c = O(n^\epsilon) = O(Q(n))$

Insert: For insertion, we find the smallest empty level $k$, build $L_k$ out of all the preceding levels ($L_0$, $L_1$, …, $L_{k-1}$) plus the new element, and discard the preceding levels. Since it costs $P(n)$ to build a level, and each element participates in the building process $O(\log n)$ times (it can jump levels at most that many times), we pay the $P(n)$ cost $O(\log n)$ times. Over $n$ elements, that is $O(\log n) \cdot \frac{P(n)}{n}$ per element, amortized. Again, if $P(n) > n^{1+\epsilon}$ for some $\epsilon > 0$, the amortized insertion time per element is $O(P(n)/n)$. The proof is similar to the one above for the query time.
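As an illustration, here is a minimal Go sketch of this construction for the membership query, using sorted arrays as the ‘static’ structure (I rebuild a level with a full sort for brevity; a linear-time merge of the already-sorted levels is what gives the stated bounds; all names are mine):

```go
package main

import (
	"fmt"
	"sort"
)

// dynamicSet keeps O(log n) levels; level i is either nil or a
// sorted slice (the "static structure") holding exactly 2^i elements.
type dynamicSet struct {
	levels [][]int
}

// insert finds the smallest empty level k, rebuilds it from the new
// element plus all preceding levels, and discards those levels.
func (d *dynamicSet) insert(x int) {
	merged := []int{x}
	k := 0
	for k < len(d.levels) && d.levels[k] != nil {
		merged = append(merged, d.levels[k]...)
		d.levels[k] = nil
		k++
	}
	if k == len(d.levels) {
		d.levels = append(d.levels, nil)
	}
	sort.Ints(merged) // rebuild; a linear merge is needed for the bounds
	d.levels[k] = merged
}

// contains is the decomposable query: the OR of per-level answers.
func (d *dynamicSet) contains(x int) bool {
	for _, lvl := range d.levels {
		if lvl == nil {
			continue
		}
		if i := sort.SearchInts(lvl, x); i < len(lvl) && lvl[i] == x {
			return true
		}
	}
	return false
}

func main() {
	s := &dynamicSet{}
	for _, x := range []int{5, 3, 8, 1, 9} {
		s.insert(x)
	}
	fmt.Println(s.contains(8), s.contains(7)) // true false
}
```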
Interesting Tidbit

Recollect what we mean when we say that a data-structure is static. A Bloom Filter is a static data-structure in a different way: you can keep inserting elements into it dynamically up to a certain threshold, but you can’t iterate over those elements. The strategy to make it dynamic is very similar: we start with a reasonably sized bloom-filter and keep inserting into it as long as we can. Once it is too full, we allocate another bloom-filter of twice the size and insert elements into that one from then on, and so on. A query is run on all the bloom-filters, and the answer is the union of their individual results. An implementation is here.

What Next: Deamortization of this data-structure, with insertions as well as deletions. Then we will move on to Cache-Oblivious Lookahead Arrays, Fractional Cascading, and eventually Fractal Trees. This is like a rabbit hole!

Not-Just-Sorted-Arrays

It is fascinating how simple data structures can be used to build web-scale systems (related: a funny video on MongoDB). If this doesn’t make sense to you yet, allow me to slowly build up to the story.

One of the simplest, and yet most powerful, algorithms in a programmer’s toolbox is Binary Search. There are far too many applications of it; consider reading this Quora answer for simple examples. I personally use it in git bisect to hunt down bad commits in a repository with tens of thousands of commits.

The humble sorted array is a beautiful thing. You can search over it in $O(\log n)$ time. There is one trouble though: you cannot modify it. I mean, you can, but then you spoil the nice property of it being sorted, unless you pay an $O(n)$ cost to copy the array to a new location and insert the new element. If you have reserved a large enough array beforehand, you don’t need to copy to a new array, but you still have to shift elements, and that is still an $O(n)$ cost.
Also, if we were to plot complexities on a graph, with insert complexity on the X-axis and search complexity on the Y-axis, then all the suitable data-structures would hopefully be bounded by the square with corners at <$O(1)$, $O(1)$> and <$O(n)$, $O(n)$>. The sorted array, at <$O(n)$, $O(\log n)$>, would lie somewhere in the bottom-right corner, whereas a simple unsorted array, at <$O(1)$, $O(n)$>, would be at the top-left. You can’t do insertions better than $O(1)$, and you can’t do searches better than $O(\log n)$ (although the bases and constants matter a lot in practice).

Now, how do we use a static structure so that we retain the goodness of a sorted array, but allow ourselves the ability to add elements in an online fashion? What we have here is a ‘static’ data-structure, and we are trying to use it for a ‘dynamic’ use case. Jeff Erickson’s notes on Static to Dynamic Transformation are of good use here. The notes present results on how to use static data-structures to build dynamic ones. In this case, you compromise a bit on the search complexity to get a much better insert complexity. The notes present inserts-only and inserts-with-deletions static-to-dynamic transformations. I haven’t read the deletions part, but the inserts-only transformation is easy to follow.

The first important result is: if the static structure has a space complexity of $S(n)$, query complexity of $Q(n)$, and preprocessing (build) complexity of $P(n)$, then the space complexity of the dynamic structure is $O(S(n))$, with query complexity $O(\log n) \cdot Q(n)$ and amortized insert complexity $O(\log n) \cdot \frac{P(n)}{n}$.

The notes then present the lazy-rebuilding method by Overmars and van Leeuwen, which improves on the first result by achieving the same insertion complexity in the worst case instead of amortized. (Fun fact: Overmars is the same great fellow who wrote Game Maker, a simple game-creation tool which I used when I was 12! Man, the nostalgia :) I digress..)
The inserts-only dynamic structure is pretty much how LSM trees work. The difference is that the $L_0$ array starts big (hundreds of MBs, or a GB in some cases) and resides in memory, so that inserts are fast. This $L_0$ structure is later flushed to disk, but does not need to be immediately merged with a bigger file; that is done by background compaction threads, which run in a staggered fashion so as to minimize disruption to the read workload. Read the BigTable paper to understand how simple sorted arrays sit at the core of some of the biggest databases in the world.

Next Up: Fractal Trees and others.

A Fortune Cookie Server in Go

I really like the concept of fortune cookies in *nix systems, and I absolutely love the Hindi movie Andaz Apna Apna. So I thought I would write up a simple fortune-cookie server which serves random quotes from the movie. A lot of my friends liked it, so I thought it would be even nicer if I could generalize it and add a bunch of other popular movies and TV serials. So I wrote up Elixir, which is a generic fortune-cookie server written in Go. This new fortune-cookie server was hosted on rand(quotes), and had quotes from Quentin Tarantino’s movies, Lord of the Rings, and the popular TV shows Breaking Bad and Game of Thrones.

Using Elixir, it is extremely simple to write a fortune-cookie server which serves quotes from multiple quote databases. All you need to do is create a file that contains the quotes you want to serve, one per line, and give it a name like foo.quotes. Place it in the directory where the server was started from, and those quotes will be served from the /foo endpoint. To make it more fun, /foo?f=cowsay returns the quote in the cowsay format! Something like this.

You can create many such quote databases, and also add/delete/modify them while the server is running; the server will pick up the changes.
A full-featured fortune-cookie server would look something like this:

Implementation Note: To implement the feature of keeping tabs on the quote databases without having to restart the server, one way was to use the Linux inotify subsystem from Go. But this didn’t work on OS X. So I wrote up a quick and dirty implementation which does ioutil.ReadDir(".") periodically and filters for files with a .quotes extension.

More Writing This Year

It has almost been a year since I wrote something. For the most part, I have been too busy to share anything. For the past year, I have been working on HBase at Facebook, which is the backbone of Facebook Messages and many other important services at Facebook. Also, I’ve moved to Mountain View, California, and I absolutely love the surroundings. I have been trying my hand at different things, like learning some ML, Go, trying to learn how to play an electric guitar, and so on. One thing I want to do this year is to continue trying new things, and keep sharing my experiences.

Finally, I have also ditched WordPress in favor of Octopress, a Markdown-style blogging framework built on top of Jekyll. What this means is that I just need to worry about the content, which I can write in simple Markdown. Octopress generates a completely static website for me, so I don’t have to set up the Apache-MySQL monstrosity to serve a simple blog. However, the transition wasn’t very smooth:

• I had to get my posts from WordPress into a Markdown format. For this, I used exitwp.
• I had to copy the images I had uploaded to my WP setup to my Octopress sources directory, and manually change the image tags to point to the right location in all the markdown files.
• For LaTeX, I am using MathJax with Octopress.

I have only lost two things in this transition:

• Obviously, I lost the ability to receive comments natively, and lost the older comments on my posts.
This is fine by me, since I don’t receive too many comments anyway. I will enable Disqus on the blog for future comments.
• Also, WP has a ridiculous URL scheme for posts, something like yourblog.com/?p=XYZ, where XYZ is a number, while Octopress has a more sensible :year/:month/:date/:title scheme. Google had indexed my blog according to the older scheme, so anybody who has linked to a specific blog post will now be redirected to the main page. In short, it’s not pleasant.

However, the big win is that it is super easy for me to write posts and host a blog. Earlier, this blog was put up on a free shared hosting site, and it was very clumsy to manage. As of the time of writing this post, I am hosting multiple blogs on a very lightweight VPS, and as long as I don’t get DDoS-ed, this machine is more than capable of hosting several such static-only blogs. Because, after all, how hard is it to serve static HTML :) I have a lot to share about what I learnt in the last year, so keep looking :)

Putting My Twitter Friends and Followers on the Map - II (Using D3.js)

I had done a visualisation using R, where I plotted the locations of my friends and followers on Twitter on a map. I re-did this visualisation using D3.js. It is a fantastic visualisation tool, with a lot of features. I could only explore a fraction of the API for my purpose, but you can see a lot of cool examples on their page. The final output was something like this (a better look here). I wanted to resize the bubbles when there are a lot of points in the vicinity, but I’ll leave that for later. You can have a look at the D3.js code, and the Python script which gets the lat-long pairs using the Twitter API and the Google Geocoding API (Yahoo! sadly decided to commercialize their Geocoding API). (The map data, and some inspiration for the code, is from here.)

Scalable Bloom Filters in Go

I have been trying to learn some Go in my free time.
While I was trying to code up a simple Bloom Filter, I realized that once a Bloom Filter gets too full (i.e., the false positive rate becomes too high), we cannot add more elements to it. We cannot simply resize the original Bloom Filter, since we would need to rehash all the elements that were inserted, and we obviously don’t maintain that list of elements.

A solution to this is to create a new Bloom Filter of twice the size (or, for that matter, any multiple >= 2), and add new elements to the new filter. When we need to check if an element exists, we check the old filter as well as the new filter. If the new filter gets too full, we create another filter which is a constant factor (>= 2) greater in size than the second filter. bit.ly uses a solution similar to this (via Aditya).

We can see that if we have $N$ elements to insert into our ‘Scalable’ Bloom Filter, we need about $\log N$ filters (base $r$, where $r$ is the multiple mentioned above). It is also easy to derive the cumulative false positive rate of this new Bloom Filter. If the false positive rate of each individual constituent filter is $f$, the probability that we do not get a false positive in one filter is $(1-f)$. Therefore, the probability that we do not get a false positive in any of the $q$ filters is $(1-f)^q$. Hence, the probability that we get a false positive in any of these $q$ filters is $1 - (1-f)^q$. Some rough estimates show that this cumulative false positive rate is around $q \cdot f$ (only if $f$ is small), where $q$ is about $\log N$, as we noted above. Therefore, if you have four filters, each with a false positive rate of 0.01, the cumulative false positive rate is about $4 \times 0.01 = 0.04$. This is exactly what we want. What is beautiful in this construction is that the false positive rate is independent of how fast the filter sizes grow.
If you maintain a good (small) false positive rate in each constituent filter, you can simply add up their false positive rates to get an estimate of the cumulative false positive rate (again, only if $f$ is small). You can also grow your filters fast (around 5x each time one becomes too full), so as to keep the number of filters small. I have implemented this ‘Scalable Bloom Filter’ (along with the vanilla Bloom Filter and the Counting Bloom Filter) in Go. Please have a look, and share any feedback.

Linking C++ Templates

Today I was trying to link together code which uses C++ templates. The usual accepted pattern is to put the declarations in a header, and the definitions in a separate .cpp file. However, I was unable to get them to link together. To my surprise, I discovered (or re-discovered, probably) that when dealing with C++ templates, you need to put the definitions in the header file itself. A good explanation of why this is so is here.

Five Awesome Shell Utilities You Might Not Know About

pgrep I always used to do “ps -ax | awk ‘/procname/ {print $1}’”, until I learnt that we can simply do “pgrep procname”, and it will list the PIDs of all the processes with ‘procname’ in their names.

pkill Similarly, I used to do “ps -ax | awk ‘/procname/ {print $1}’ | xargs kill”. As you must have guessed, this kills all the processes whose names contain ‘procname’. But a much simpler way is to just do “pkill procname”.

zcat (and other related utilities) A lot of times, we need to grep through an archive. For this, we usually copy the archive somewhere else, uncompress it, grep the resulting files, and then delete them. zcat is much simpler: it uncompresses an archive and writes the result to standard output, which you can then pipe to grep. Or, you can directly use zgrep! See some other related utilities here.

netcat netcat is a damn neat networking utility, which reads and writes data across the network using TCP. This is pretty nifty because we can pipe the output of a command to a process running on a different machine. This is extremely useful for monitoring. Thanks to Dhruv for introducing this one.

strace This utility can be used to print the list of system calls (along with their arguments) being made by a program while it is running. How cool is that! See the output of ‘strace ls’ here.

Latency Numbers Every Programmer Should Know

Here are some latency numbers that Peter Norvig thinks every engineer should know, and I wholeheartedly agree (I was recently asked questions which required knowledge of these numbers, and my guesstimate was quite off the mark).

Edit: Here is a version with latency figures in ms, where appropriate.

Update: Here is a version showing the change in the numbers over the years.