Coin Change
Algorithmically Speaking - #1: Let's unravel the Coin Change problem together.
Hello there!
Today we are going to be diving into one of the most common algorithmic problems we face in our daily lives: the Coin Change problem.
This problem is the perfect excuse for me to introduce you to Greedy Algorithms and Dynamic Programming.
At the end of the post, you will find some algorithmic challenges so you can try and apply some of the topics that I will explain today. Feel free to skip to that part if you think you already have the necessary skills to solve them.
Let’s begin!
Formulating the Problem
In its simplest terms, the problem can be formulated as follows:
We are given a set of coins and our task is to form a sum of money N using the coins. The values of the coins are c = {c1, c2, ..., ck}, and each coin can be used as many times as we want. What is the minimum number of coins needed?
As an example, let’s say our set of coins is c = {1, 3, 4} and we want to form an amount of money equal to 5. In this case, it is optimal to select 2 coins (one with a value of 1 and another with a value of 4).
Trust me on this: we cannot create that amount of money with this set of coins using fewer than 2 coins. But how can we solve this problem for any arbitrary set of coins and any amount of money?
Greedy Solution
A very intuitive and easy solution that might come to everyone’s mind is the following:
Always select the largest possible coin, until the required sum of money has been reached.
Let’s face it: we would probably do this if we had to solve this problem in real life. This algorithm works in our example case above; let’s see how:
1. We start with an amount of money equal to 0.
2. We select a coin with a value of 4. We now have an amount of money equal to 4.
3. We select a coin with a value of 1. We now have an amount of money equal to 5. We finish our algorithm.
But will this algorithm work in every case?
First, this is a Greedy algorithm. These algorithms construct solutions by always making the choice that seems like the best at the moment. You never take back any choice if you use a greedy algorithm. Instead, you directly construct the solution. This is why greedy algorithms are usually very efficient.
Nevertheless, the fact that greedy algorithms always make locally optimal choices makes it difficult to prove their correctness. To do it properly, you should be able to argue that all the locally optimal choices are also globally optimal.
Luckily for us, despite the difficulty of proving the correctness of a greedy algorithm, showing that one is not always correct is a lot simpler: we just have to find an example case where it produces the wrong output.
For example, what happens with this algorithm if we take the same set of coins as above, but now we want to form an amount of money equal to 6?
First, it will select a coin with a value of 4, then a coin with a value of 1, and then another one with a value of 1. This indeed adds up to the amount we want, and it uses 3 coins. Nevertheless, if we choose two coins with a value of 3, we end up with a solution that uses fewer coins than the one the greedy algorithm proposed.
We have shown that this algorithm doesn’t produce the optimal result in every case!
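To make the comparison concrete, the greedy strategy can be sketched in a few lines of Python (greedy_change is a hypothetical helper name for this post, not code from the original):

```python
def greedy_change(coins, amount):
    """Repeatedly take the largest coin that still fits until the amount is reached.

    Assumes the set contains a coin of value 1, so the loop always
    terminates having formed the exact amount.
    """
    chosen = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            chosen.append(coin)
    return chosen
```

For an amount of 5 it picks 4 and then 1, which is optimal; but for 6 it picks 4, 1, 1 — three coins where two coins of value 3 would do.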
Despite that, this algorithm is really good. And it does work for any amount of money if we have the appropriate set of coins. Most ATMs do this when you withdraw money, and it works for most global currency systems.
Now, instead of diving into when this greedy algorithm works, let’s focus on a more interesting question: how do we solve the Coin Change problem in the general case?
Dynamic Programming Solution
Dynamic Programming can be seen as a programming technique that combines the best of two other techniques:
Complete Search: A dynamic programming solution should be as correct as the complete search approach to solving the same problem.
Greedy Algorithms: A dynamic programming solution should be as efficient as greedy algorithms usually are.
This technique can be applied when the problem can be divided into overlapping subproblems that can be solved independently. Usually, the relation between the solution to a problem and the solutions to the subproblems into which it is divided is expressed as a recurrence.
If we were to solve the Coin Change problem recursively, we could define a function f(x) that answers the following question: what is the minimum number of coins needed to form an amount of x with the set of coins that we have?
For example, assuming our coins are c = {1, 3, 4}, the answer for the first few cases is:
f(0) = 0
f(1) = 1
f(2) = 2
f(3) = 1
f(4) = 1
f(5) = 2
The good thing about the function f is that its value can be calculated by recursively evaluating it at smaller input values. The key idea in formulating the recursive solution is to focus on the first coin we choose.
Let’s say we are calculating the answer for x = 9. In this case, we know that the first coin we choose has to be one of 1, 3, or 4, because those are the only coins we have available. If we decide to go for a coin with a value of 1, then we will face the problem of forming the amount of 8 using the minimum number of coins. This is the same problem we were originally trying to solve, but now on a smaller instance. Hence, a subproblem. The same goes for coins with values 3 and 4.
Generalizing, our recursive formula to solve the Coin Change problem with this set of coins is as follows:
f(x) = min(f(x - 1) + 1, f(x - 3) + 1, f(x - 4) + 1)
The base case for our recursion is x = 0. In that case, the answer is 0, because we don’t need any coins to form an empty amount of money. And, just to make the implementation easier, let’s say that the answer for x < 0 is an infinitely large number, because there is no way to form a negative amount of money.
That being said, we could implement the described algorithm like this:
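The original code listing is not included in this excerpt; a minimal Python sketch of the recursion just described, assuming the coins {1, 3, 4} from the example, could look like this:

```python
import math

COINS = [1, 3, 4]  # the set of coins used throughout the post

def f(x):
    """Minimum number of coins needed to form the amount x (naive recursion)."""
    if x < 0:
        return math.inf  # no way to form a negative amount of money
    if x == 0:
        return 0  # base case: no coins needed for an empty amount
    # Try every possible first coin and keep the best result.
    return min(f(x - c) + 1 for c in COINS)
```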
But this function is still not efficient. If you are sharp enough, you might have noticed that it calculates the value for the same input several times. For example, two possible recursion paths when solving f(9) are the following:
f(9) --> f(6) --> f(2)
f(9) --> f(5) --> f(2)
In both of these paths, we go through solving for x = 2, which is a waste of time and resources. If we had a way of recording the input values for which we already know the solution, the performance of our recursive algorithm would improve.
Fortunately, we can achieve this behavior using one of the following methods:
Memoization
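As a brief sketch of the idea (the concrete implementation from the post is not shown in this excerpt), memoization can be bolted onto the recursion with Python's functools.lru_cache:

```python
from functools import lru_cache

COINS = (1, 3, 4)  # a tuple, since lru_cache requires hashable state

@lru_cache(maxsize=None)
def f(x):
    """Minimum number of coins to form x; each distinct input is now solved once."""
    if x < 0:
        return float("inf")
    if x == 0:
        return 0
    return min(f(x - c) + 1 for c in COINS)
```

Because each value f(x) is computed at most once and then cached, the running time drops from exponential to linear in the target amount.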