Riding high from the success of your last interview, you decide it’s time to trade up. You apply to another company. When your coding interview rolls around, you’re cool, calm, and collected. Then they hit you with the following problem:

“Given a set of coins [1c, 2c] which can be used any number of times, how many unique ways are there to make an amount? (3c)”

The hair on the back of your neck stands up. You flash back to uncomfortable seats and long hours in sweltering lecture halls. You suspect that the optimal way to solve this problem uses dynamic programming, but you’re not certain. **How and when should you use Dynamic Programming in a functional language?**

Answering this question requires an understanding of when dynamic programming is appropriate and how the implementation of this technique differs in a functional language. Let’s take a look at the characteristics of dynamic programming problems so you can recognize them in your next interview.

## Recognizing Dynamic Programming Problems

Dynamic Programming is worth considering when the problem can be broken down into sub-problems and when those sub-problems are computed multiple times.

### Dividing into Sub-Problems

Finding a way to reduce a single problem into multiple overlapping sub-problems is *the* key characteristic of problems which benefit from a dynamic programming approach. The first question we need to answer when considering solving a problem using dynamic programming is how to divide it into sub-problems.

You can find this division by examining the choices available at each branch point of your problem.

For instance, in our sample problem involving making change for a given amount, one way we could divide our top-level problem into 2 sub-problems is by making the choice to either *include* or *exclude* a given coin. To visualize the problem, imagine two sacks representing the inclusion and exclusion of the *2c* coin:

In the left sack, we decide not to include the 2c coin, so we take it out of our list of coins and leave the goal amount unchanged. In the right sack, we decide we are going to include the 2c coin, so we subtract 2c from our goal amount.

Notice that when we include the 2c coin, we don’t remove it from our list of coins. This tracks the case where we use the 2c coin more than once.

### Overlapping Sub-Problems

If you repeat this process a few times, you’ll see that we eventually start finding states which are repeated throughout the solving of the problem. Try writing out a few iterations by hand to get a feel for the algorithm.
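To make this concrete, here’s a hand expansion of our sample problem, writing each state as (coins, amount). Notice the state that shows up twice:

```
([1,2], 3)
  exclude 2c: ([1], 3)
    ([1], 2) -> ([1], 1) -> ([1], 0)
  include 2c: ([1,2], 1)
    exclude 2c: ([1], 1)     -- the same state as above!
    include 2c: ([1,2], -1)
```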

This demonstrates that when we divide our problem by including or excluding a given coin, the result contains overlapping sub-problems. These overlapping sub-problems are a perfect fit for dynamic programming!

## Dynamic Programming

So now that we know when to use dynamic programming, how should we go about doing it?

The primary techniques involved in dynamic programming are **memoization** and **bottom-up** computation.

### Memoization

Memoization is a very fancy way of telling you to save your work! When we notice that we have overlapping sub-problems, memoization lets us save the result of those sub-problems.

The next time that we run into one of the overlapping sub-problems that we’ve already solved, we can use our saved result instead of continuing the computation. This can result in substantial runtime complexity savings!
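As a tiny standalone sketch (using Fibonacci rather than our coin problem, for brevity), we can thread a `Map` of saved results through the recursion and consult it before recomputing:

```haskell
import qualified Data.Map.Strict as M

-- A minimal memoization sketch: carry a map of already-computed
-- results through the recursion, checking it before recomputing.
fibMemo :: Int -> Integer
fibMemo n = fst (go n M.empty)
  where
    go k seen
      | k < 2 = (fromIntegral k, seen)
      | otherwise = case M.lookup k seen of
          Just v  -> (v, seen)              -- saved work: short-circuit!
          Nothing ->
            let (a, s1) = go (k - 1) seen
                (b, s2) = go (k - 2) s1
                v = a + b
            in (v, M.insert k v s2)
```

Without the lookup, `go` would recompute the same sub-problems exponentially many times; with it, each value is computed exactly once.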

### Bottom-Up Computation

The second technique, often used in conjunction with memoization, is bottom-up computation. If we’re going to save our work at each step, then starting at the bottom and working our way upwards makes sense.

This has two advantages. The first is that when we compute a new step of our algorithm, we’ve likely already computed the inputs for that step. The second is that working bottom-up means we don’t have to build a large call stack, which is a concern with top-down algorithms.

If that call stack gets too large, we place our program at risk of a stack overflow. Haskell can sometimes optimize top-down approaches when they’re written tail-recursively, but it’s best not to rely on the compiler until you’re very familiar with its customs.
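As an illustration (unrelated to the change-making problem), compare a naive recursion that grows the stack with a tail-recursive version using a strict accumulator:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Naive version: each call leaves a pending addition on the stack,
-- so a large n builds a deep chain of unfinished work.
sumToNaive :: Int -> Int
sumToNaive 0 = 0
sumToNaive n = n + sumToNaive (n - 1)

-- Tail-recursive version: the recursive call is the last thing the
-- function does, and the strict accumulator keeps thunks from piling
-- up, so GHC can run this in constant stack space.
sumTo :: Int -> Int
sumTo = go 0
  where
    go !acc 0 = acc
    go !acc n = go (acc + n) (n - 1)
```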

## Back to the Interview

When solving dynamic programming problems in an imperative language, the naive solution is often a set of `for` loops. Imperative programming encourages you to think about solving problems as nested iteration.

The naive solution in functional programming languages is often the recursive one. Having recalled all you could from your algorithm days, you spin the following yarn:

```
import Debug.Trace

type Denom = Int
type Amount = Int
type NumWays = Int

-- Top-down approach
makeChange :: [Denom] -> Amount -> NumWays
makeChange denoms amount
  | amount == 0 = 1
  | amount < 0  = 0
  | null denoms = 0
  | otherwise   =
      trace ("finding ways to get " ++ show amount ++ " with " ++ show denoms) $
        makeChange (init denoms) amount +
        makeChange denoms (amount - last denoms)
```

The interviewer cocks her head and you know you’ve blundered into a trap; you’re just not sure of its nature yet…

“Why don’t you show me the output for our test case? Show me how many ways there are to make 3c with [1c, 2c].”

You `cabal run` and are greeted with the following output:

```
finding ways to get 3 with [1,2]
finding ways to get 3 with [1]
finding ways to get 2 with [1]
finding ways to get 1 with [1]
finding ways to get 1 with [1,2]
finding ways to get 1 with [1]
2
```

You do some quick mental math and realize that you’ve come to the right solution! You’re pleased as a proud parent. Your pride is somewhat reduced as the debug output shows you the source of your interviewer’s consternation:

```
finding ways to get 1 with [1]
finding ways to get 1 with [1]
```

All children disappoint their parents eventually. Even in our simple test case, we can see that we had to calculate the number of ways to make 1c with `[1c]` twice.

This repetition occurs naturally when we divide our original problem into sub-problems. As we break each branch into smaller parts by reducing the amount or excluding a coin, it’s very likely that we will end up with multiple branches that are identical.

In a more complicated example this repeated calculation of known sub-problems could result in thousands of wasted cycles.

## Bottom-up Memoization

It’s time to put our newly acquired dynamic programming chops to work. We know we need to store the result of sub-problems and work from the bottom-up.

### Implementing Memoization

How can we characterize our sub-problems so we can store them in our memoization structure? The defining characteristic of each sub-problem is the arguments of our `makeChange` function! These arguments will form the key of our memoization map. The signature of this memoization data structure looks like this:
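Reusing the type aliases from our first attempt, the map is keyed on the (denominations, amount) pair (this is the same `MemoizedChange` type used in the full solution below):

```haskell
import qualified Data.Map.Strict as M

type Denom = Int
type Amount = Int
type NumWays = Int

-- Keyed on (denominations, amount); stores the number of ways.
type MemoizedChange = M.Map ([Denom], Amount) NumWays
```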

This structure represents the number of ways to make change for a given amount with a given set of denominations. Now when we see a sub-problem, we can check the `MemoizedChange` map and short-circuit if we’ve already solved it.

### Identify Base Cases

Imagining the algorithm unfurl upwards towards your solution, you realize that you’ve been intent on the result and cannot see the ground. Where does our algorithm begin?

The base cases are answers we always know:

- How many ways are there to make 0 cents? No matter how many coins you use, there’s always 1 way: by doing nothing at all!
- How many ways are there to make any amount with no coins? It can’t be done! (Unless the amount is 0, which is covered by the first case.)

Beginning from these values, we can build up our memoized structure. We’ll start with no coins and compute the number of ways to make change for every value in `[0..goal]`. Then we’ll add the next coin and recurse.
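For our test case (coins [1, 2], goal 3c), the table we’re building up looks like this, with each row computed from the row above it:

```
amount:        0  1  2  3
[]     row:    1  0  0  0    <- base cases
[1]    row:    1  1  1  1
[1,2]  row:    1  1  2  2    <- answer: 2 ways
```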

### Define the Solution Space

For clarity, let’s build up the full list of keys that we’ll need to fill for our memoized structure:

```
numWays :: ([Denom], Amount) -> [([Denom], Amount)]
numWays (denoms, goal) = do
  subDenom <- [take n denoms | n <- [0 .. length denoms]]
  amount <- [0 .. goal]
  pure (subDenom, amount)
```

The `numWays` function takes the arguments to our change-making function, a set of denominations and an amount, and expands them into all the keys we need to fill in our bottom-up memoized structure.

#### The List Monad

We use the List Monad to facilitate building our final type of `[([Denom], Amount)]`. Let’s break down each line so you can use the List Monad too.

`subDenom` ranges over the list of all sub-denominations. For our input `[1,2]`, that list is `[[],[1],[1,2]]`.

We use the List Monad’s do-notation to pull out each individual sub-denomination and amount. Then we return each unique pair.
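For our test case the expansion produces one key per table cell. Here’s a standalone check (repeating the `numWays` definition so the snippet runs on its own):

```haskell
type Denom = Int
type Amount = Int

-- Expand a request into every (denominations, amount) key to fill.
numWays :: ([Denom], Amount) -> [([Denom], Amount)]
numWays (denoms, goal) = do
  subDenom <- [take n denoms | n <- [0 .. length denoms]]
  amount <- [0 .. goal]
  pure (subDenom, amount)

-- ghci> numWays ([1,2], 3)
-- [([],0),([],1),([],2),([],3)
-- ,([1],0),([1],1),([1],2),([1],3)
-- ,([1,2],0),([1,2],1),([1,2],2),([1,2],3)]
```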

## A Second Solution

Now that we know what keys we need to compute inside our memoized structure, we need to iterate over those keys.

For each key, we add the number of ways to make change if we include the coin and the number of ways to make change if we exclude the coin. In our representation of keys, the coin is the last element of our `denoms` list:

```
import Data.List (foldl')
import Debug.Trace (trace)
import qualified Data.Map.Strict as M

type MemoizedChange = M.Map ([Denom], Amount) NumWays

makeChange :: MemoizedChange -> ([Denom], Amount) -> NumWays
makeChange prevSeen request =
  foldl' computeNumWays prevSeen (numWays request) M.! request

computeNumWays :: MemoizedChange -> ([Denom], Amount) -> MemoizedChange
computeNumWays seen (denoms, 0) = M.insert (denoms, 0) 1 seen
computeNumWays seen ([], amount) = M.insert ([], amount) 0 seen
computeNumWays seen key@(denoms, amount) =
  let waysWithout = memoLookup (init denoms, amount) seen
      waysWith = memoLookup (denoms, amount - last denoms) seen
  in trace ("finding ways to get " ++ show amount ++ " with " ++ show denoms) $
       M.insert key (waysWithout + waysWith) seen
```

The `computeNumWays` function handles our base cases and is responsible for inserting the value of `(waysWithout + waysWith)` into our memoized structure once we know the total number of ways.

Both `waysWithout` and `waysWith` are computed using the `memoLookup` function. This function checks one final edge case: the case where subtracting the current coin from our amount results in a negative number. The number of ways to get a negative amount is always 0.

```
memoLookup :: ([Denom], Amount) -> MemoizedChange -> NumWays
memoLookup key@(_, amount) seen
  | amount < 0 = 0
  | key `M.member` seen = seen M.! key
  | otherwise = error $ "Key not found: " ++ show key
```
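Putting the pieces together, we can sanity-check the map-based solution against the interview’s test case. This assembles the definitions above into one runnable module, with the `trace` call dropped for brevity:

```haskell
import Data.List (foldl')
import qualified Data.Map.Strict as M

type Denom = Int
type Amount = Int
type NumWays = Int
type MemoizedChange = M.Map ([Denom], Amount) NumWays

-- Expand a request into every (denominations, amount) key to fill.
numWays :: ([Denom], Amount) -> [([Denom], Amount)]
numWays (denoms, goal) = do
  subDenom <- [take n denoms | n <- [0 .. length denoms]]
  amount <- [0 .. goal]
  pure (subDenom, amount)

-- Fill the memoized map bottom-up, then read off the answer.
makeChange :: MemoizedChange -> ([Denom], Amount) -> NumWays
makeChange prevSeen request =
  foldl' computeNumWays prevSeen (numWays request) M.! request

computeNumWays :: MemoizedChange -> ([Denom], Amount) -> MemoizedChange
computeNumWays seen (denoms, 0) = M.insert (denoms, 0) 1 seen
computeNumWays seen ([], amount) = M.insert ([], amount) 0 seen
computeNumWays seen key@(denoms, amount) =
  let waysWithout = memoLookup (init denoms, amount) seen
      waysWith = memoLookup (denoms, amount - last denoms) seen
  in M.insert key (waysWithout + waysWith) seen

memoLookup :: ([Denom], Amount) -> MemoizedChange -> NumWays
memoLookup key@(_, amount) seen
  | amount < 0 = 0
  | otherwise = M.findWithDefault (error ("Key not found: " ++ show key)) key seen

-- ghci> makeChange M.empty ([1,2], 3)
-- 2
```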

## Not so Idiomatic

This implementation works, but if you’re being frank with yourself, it’s not very functional. Building the `MemoizedChange` map by iterating over the expanded list of `numWays` keys focuses on iterating over data rather than expressing a pattern of computation. It has the odor of imperative programming.

Your interviewer, satisfied with your familiar solution, has already moved to wrap up. They’re asking if you have questions about StartUpCorp #9, but the itch of a problem unsolved is hard to shake.

Your fingers are moving of their own accord as you strive to rework your implementation into something more palatable:

```
makeChangeFunctionally :: [Denom] -> Amount -> NumWays
makeChangeFunctionally _ 0 = 1
makeChangeFunctionally [] _ = 0
makeChangeFunctionally denoms amount =
  let denomsList = [take n denoms | n <- [1 .. length denoms]]
      amountsList = [0 .. amount]

      nextRow :: [Denom] -> [NumWays] -> [NumWays]
      nextRow denoms prevRow =
        let waysWithout = prevRow
            waysWith = replicate (last denoms) 0 ++ thisRow
            thisRow = zipWith3 (const (+)) amountsList waysWithout waysWith
        in thisRow

      firstRow = 1 : map (const 0) amountsList
      memoizedChange = firstRow : zipWith nextRow denomsList memoizedChange
  in last $ last memoizedChange
```

### Recursive Lists

You recall that Haskell is a *lazy* language. The strict rules of assignment order can be bent in this realm.

We conjure `memoizedChange` as a recursive list instead of that dreadfully explicit map. You envision `memoizedChange` as the functional programmer’s ideal data structure: a self-referential list of lists, a table whose columns are the amounts and whose rows are subsets of the denominations.

Choosing the recursive `zipWith` function takes advantage of Haskell’s built-in memoization feature: sharing. This lets us cache values for free as thunks without having to store and maintain them ourselves!

To build each value in the row we perform our familiar addition of `waysWith` and `waysWithout`:

```
nextRow :: [Denom] -> [NumWays] -> [NumWays]
nextRow denoms prevRow =
  let waysWithout = prevRow
      waysWith = replicate (last denoms) 0 ++ thisRow
      thisRow = zipWith3 (const (+)) amountsList waysWithout waysWith
  in thisRow
```

#### Finding Ways Without a Coin

To find the number of ways without using a given coin, we simply drop to the previous row in our table. This represents removing a coin from our list of choices while maintaining the same amount.

When we start our algorithm, what is `prevRow`? It comes from the last argument of the `nextRow` function call:

```
firstRow = 1 : map (const 0) amountsList
memoizedChange =
  firstRow : zipWith nextRow denomsList memoizedChange
```

We can see that `prevRow` at the start of our algorithm is a recursive reference to `memoizedChange`! We need to prepend `firstRow` using the `(:)` operator so there is a set of concrete values; otherwise we’d run into an infinite loop of self-reference.

#### Finding Ways With a Coin

To find the number of ways if we do use a given coin, we need to look backwards in the current row by a number of columns equal to the value of the coin. This operation represents keeping our list of coin choices the same while removing the value of the coin from the amount.

Here’s where the recursive magic lies. `waysWith` provides a set of phantom starting values which represent values to the left of our actual table. By padding the current row with these concrete zeroes, we are effectively looking backwards in the current row.
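Lining up the three lists for the `[1,2]` row of our test case makes the shift visible (t0 and t1 are `thisRow`’s own first two elements):

```
amountsList:  0   1   2   3
waysWithout:  1   1   1   1    <- previous row, for [1]
waysWith:     0   0   t0  t1   <- two phantom zeroes, then thisRow itself
thisRow:      1   1   2   2    <- waysWithout + waysWith, element-wise
```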

#### Alternatives to List Indexing

This allows us to lazily perform the computation of `waysWith` throughout the current row of the table without having to use list indexing. Instead we make use of the `zipWith3` function.

Recursion will spiral endlessly without a bound. Bind your algorithm to the finite plane by weaving in the explicitly defined `amountsList`.
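With `amountsList` bounding every row, the whole table is finite and we can confirm the rewrite still agrees with the interview’s test case (repeating the definition so the snippet stands alone):

```haskell
type Denom = Int
type Amount = Int
type NumWays = Int

-- Bottom-up solution as a self-referential list of lists:
-- each row is built by zipping against the previous row.
makeChangeFunctionally :: [Denom] -> Amount -> NumWays
makeChangeFunctionally _ 0 = 1
makeChangeFunctionally [] _ = 0
makeChangeFunctionally denoms amount =
  let denomsList = [take n denoms | n <- [1 .. length denoms]]
      amountsList = [0 .. amount]
      nextRow ds prevRow =
        let waysWithout = prevRow
            waysWith = replicate (last ds) 0 ++ thisRow
            thisRow = zipWith3 (const (+)) amountsList waysWithout waysWith
        in thisRow
      firstRow = 1 : map (const 0) amountsList
      memoizedChange = firstRow : zipWith nextRow denomsList memoizedChange
  in last (last memoizedChange)

-- ghci> makeChangeFunctionally [1,2] 3
-- 2
```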

The tension leaves your body with a sigh as you visualize a single list of keys and a beautiful memoized structure before your eyes. When you hit `cabal run` you don’t glance at the output to verify your solution.

## Afterword

Translating algorithmic techniques like Dynamic Programming to a new paradigm can be tricky! I hope that this article helps you develop your own intuition and lets you grok how functional languages solve these types of problems.