Recursive Algorithm Time Complexity: Coin Change - c++

I'm going through some algorithms, and came across the coin change problem.
When thinking about the problem I came up with this naive recursive solution:
int coinChange(const vector<int>& coins, int start, int n) {
    if (n == 0) return 1;
    if (n < 0) return 0;
    int total = 0;
    for (int i = start; i < coins.size(); ++i) {
        if (coins[i] <= n) total += coinChange(coins, i, n - coins[i]);
    }
    return total;
}
I then realized the "accepted" solution was as follows:
int count( int S[], int m, int n )
{
    // If n is 0 then there is 1 solution (do not include any coin)
    if (n == 0)
        return 1;
    // If n is less than 0 then no solution exists
    if (n < 0)
        return 0;
    // If there are no coins and n is greater than 0, then no solution exists
    if (m <= 0 && n >= 1)
        return 0;
    // count is the sum of solutions (i) including S[m-1] and (ii) excluding S[m-1]
    return count( S, m - 1, n ) + count( S, m, n - S[m-1] );
}
At first I thought the two were essentially the same. It was clear to me that my recursion tree was much wider, but it seemed that this was only because my algorithm was doing more work at each level, so it evened out. Both algorithms appear to consider the number of ways to make change with the current coin (given it is <= the current sum), and the number of ways to make change without the current coin (thus with all the elements in the coin array except the current coin). Therefore the parameter start in my algorithm was doing essentially the same thing as m is doing in the second algorithm.
The more I look at it, though, it seems that regardless of the above, my algorithm is O(n^n) and the second one is O(2^n). I've been looking at this for too long, but if someone could explain what extra work my algorithm is doing compared to the second one, that would be great.
EDIT
I understand the dynamic programming solution to this problem; this question is purely about complexity.

The two pieces of code are the same except that the second uses recursion instead of a for loop to iterate over the coins. That makes their runtime complexity the same (although the second piece of code probably has worse memory complexity because of the extra recursive calls, but that may get lost in the wash).
For example, here's a partial evaluation of the second count in the case where S = [1, 5, 10] and m = 3. On each line, I expand the left-most call to count.
count(S, 3, 100)
= count(S, 2, 100) + count(S, 3, 90)
= count(S, 1, 100) + count(S, 2, 95) + count(S, 3, 90)
= count(S, 0, 100) + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)
= 0 + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)
You can see that this is the same calculation as your for-loop that sums up total.
Both algorithms are terrible because they run in exponential time. Here is an answer (of mine) that uses a neat dynamic programming method that runs in O(nm) time, uses O(n) memory, and is extremely concise -- comparable in size to your naive recursive solution: https://stackoverflow.com/a/20743780/1400793 . It's in Python, but it's trivially convertible to C++.

It seems you didn't read the whole article.
The idea behind dynamic programming is that you store some values you have already computed, so that you don't need to calculate them again. At the end of the article you can see the actual correct solution.
As for why your solution looks like n^n while the original one looks like 2^n: both solutions are actually 2^(n+#coins). The original just calls the function with m-1 instead of having a loop that goes through every coin. While your solution tries every coin at the start and then fewer and fewer, theirs takes one coin of type m, then another, then another, until at some point it switches to type m-1 and does the same with it, and so on. Basically both solutions are the same.
Another way to prove that they have the same complexity is like this:
Both solutions are correct, so they will reach all possible solutions, and both stop growing a particular branch of the recursion the moment it reaches a negative n. Therefore, they have the same complexity.
And if you are not convinced, just try each solution with a counter added: increment it every time you enter the function. Do this for each solution and you will see that you get the same number.
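For what it's worth, here is a minimal sketch of that instrumentation (the counter names and test values are my own additions): both functions from above, each bumping a global counter on entry.

#include <cstddef>
#include <cstdio>
#include <vector>

long long calls1 = 0, calls2 = 0;

// The loop-based version from the question, instrumented.
int coinChange(const std::vector<int>& coins, std::size_t start, int n) {
    ++calls1;
    if (n == 0) return 1;
    if (n < 0) return 0;
    int total = 0;
    for (std::size_t i = start; i < coins.size(); ++i)
        if (coins[i] <= n) total += coinChange(coins, i, n - coins[i]);
    return total;
}

// The "accepted" recursive version, instrumented.
int count(const int S[], int m, int n) {
    ++calls2;
    if (n == 0) return 1;
    if (n < 0) return 0;
    if (m <= 0) return 0;
    return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
}

int main() {
    std::vector<int> v = {1, 5, 10, 15, 20, 25};
    int s[] = {1, 5, 10, 15, 20, 25};
    std::printf("results: %d %d\n", coinChange(v, 0, 100), count(s, 6, 100));
    std::printf("calls:   %lld %lld\n", calls1, calls2);
}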

Benchmark
On my computer, the benchmarks are as follows:
coinChange(v, 0, 500);// v=[1, 5, 10, 15, 20, 25]
took 1.84649s to complete.
But
count(s, 6, 500); //s = [1, 5, 10, 15, 20, 25]
took 0.853075s to execute.
EDIT
I interpret the result as meaning that the time complexities of the two algorithms are the same.

Related

Does this problem have overlapping subproblems?

I am trying to solve this question on LeetCode.com:
You are given an m x n integer matrix mat and an integer target. Choose one integer from each row in the matrix such that the absolute difference between target and the sum of the chosen elements is minimized. Return the minimum absolute difference. (The absolute difference between two numbers a and b is the absolute value of a - b.)
So for input mat = [[1,2,3],[4,5,6],[7,8,9]], target = 13, the output should be 0 (since 1+5+7=13).
The solution I am referring to is as below:
int dp[71][70 * 70 + 1] = {[0 ... 70][0 ... 70 * 70] = INT_MAX};

int dfs(vector<set<int>>& m, int i, int sum, int target) {
    if (i >= m.size())
        return abs(sum - target);
    if (dp[i][sum] == INT_MAX) {
        for (auto it = begin(m[i]); it != end(m[i]); ++it) {
            dp[i][sum] = min(dp[i][sum], dfs(m, i + 1, sum + *it, target));
            if (dp[i][sum] == 0 || sum + *it > target)
                break;
        }
    } else {
        // cout << "Encountered a previous value!\n";
    }
    return dp[i][sum];
}

int minimizeTheDifference(vector<vector<int>>& mat, int target) {
    vector<set<int>> m;
    for (auto& row : mat)
        m.push_back(set<int>(begin(row), end(row)));
    return dfs(m, 0, 0, target);
}
I don't follow how this problem is solvable by dynamic programming. The states apparently are the row i and the sum (from row 0 to row i-1). Given that the problem constraints are:
m == mat.length
n == mat[i].length
1 <= m, n <= 70
1 <= mat[i][j] <= 70
1 <= target <= 800
My understanding is that we would never encounter a sum that we have previously encountered (all values are positive). Even the debug cout statement that I added does not print anything on the sample inputs given in the problem.
How could dynamic programming be applicable here?
This problem is NP-hard, since the 0-1 knapsack problem reduces to it pretty easily.
This problem also has a dynamic programming solution that is similar to the one for 0-1 knapsack:
Find all the sums you can make with a number from the first row (that's just the numbers in the first row).
For each subsequent row, add all the numbers from the ith row to all the previously accessible sums to find the sums you can get after i rows.
If you need to be able to recreate a path through the matrix, then for each sum at each level, remember the preceding one from the previous level.
There are indeed overlapping subproblems, because there will usually be multiple ways to get a lot of the sums, and you only have to remember and continue from one of them.
Here is your example:
sums from row 1: 1, 2, 3
sums from rows 1-2: 5, 6, 7, 8, 9
sums from rows 1-3: 12, 13, 14, 15, 16, 17, 18
As you see, we can make the target sum. There are a few ways:
7+4+2, 7+5+1, 8+4+1
Some targets, like 15, have a lot more ways. As the size of the matrix increases, the amount of overlap tends to increase, and so this solution is reasonably efficient in many cases. The total complexity is in O(M * N * max_weight).
But, this is an NP-hard problem, so this is not always tractable -- max_weight can grow exponentially with the size of the problem.
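A minimal C++ sketch of this row-by-row idea (the function name is mine; it assumes all entries are positive, as the stated constraints guarantee, and omits the path reconstruction described above):

#include <algorithm>
#include <climits>
#include <cstdlib>
#include <iostream>
#include <set>
#include <vector>

// Row by row, extend every previously reachable sum by every value in the
// current row. The set stores each reachable sum only once, which is
// exactly where the overlapping subproblems collapse.
int minDiffBySums(const std::vector<std::vector<int>>& mat, int target) {
    std::set<int> sums = {0};
    for (const auto& row : mat) {
        std::set<int> next;
        for (int s : sums)
            for (int x : row)
                next.insert(s + x);
        sums = std::move(next);
    }
    int best = INT_MAX;
    for (int s : sums)
        best = std::min(best, std::abs(s - target));
    return best;
}

int main() {
    std::vector<std::vector<int>> mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    std::cout << minDiffBySums(mat, 13) << '\n'; // 0, via 1 + 5 + 7
}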

Is prefix sum included in dynamic programming?

I've been solving algorithm problems, and I'm a bit confused about the terms.
When we want to calculate prefix sum (or cumulative sum) like the code below, can we say that we are using dynamic programming?
def calc_prefix_sum(nums):
    N = len(nums)
    prefix_sum = [0] * (N + 1)
    for i in range(1, N + 1):
        prefix_sum[i] = prefix_sum[i - 1] + nums[i - 1]
    return prefix_sum

nums = [1, 3, 0, -2, 1]
print(calc_prefix_sum(nums))
# [0, 1, 4, 4, 2, 3]
According to the definition on this page,
Dynamic programming is used where we have problems, which can be
divided into similar sub-problems so that their results can be
re-used.
In my prefix_sum algorithm, the current calculation (prefix_sum[i]) is divided into similar sub-problems (prefix_sum[i - 1] + nums[i - 1]) so that the previous result (prefix_sum[i - 1]) can be re-used. So I am assuming that calculating prefix sum is one of the applications of dynamic programming.
Can I say it's dynamic programming, or should I use different terms? (Especially, I am thinking about the situation in coding interviews.)
No, the correct term is memoization, not dynamic programming. Dynamic programming requires the problem to have optimal substructure as well as overlapping subproblems. Prefix sum has optimal substructure but it does not have overlapping subproblems. Therefore, this optimization should be called memoization.
Yes, prefix sums can be considered as a form of Dynamic Programming. It is the simplest way to calculate the sum of a range given a static array by using a prefix array which stores data based on previous sums.
Prefix Sum Array Construction Runtime = O(n)
Prefix Sum Query Runtime = O(1)
People often say that Kadane's algorithm is DP, and Kadane's is only 1 if statement away from a prefix sum.
from typing import List

def maxSubArray(nums: List[int]) -> int:
    for i in range(1, len(nums)):
        if nums[i - 1] > 0:
            nums[i] = nums[i - 1] + nums[i]
    return max(nums)
If you tried to calculate a prefix sum recursively, you would end up with an O(n^2) algorithm without memoization but an O(n) algorithm with memoization. This is because of overlapping subproblems.
nums = [1, 3, 0, -2, 1]

def cumsum(i):
    if i < 0:
        return 0
    return nums[i] + cumsum(i - 1)

prefix_sum = [cumsum(i) for i in range(len(nums))]
We can see that cumsum(0) is called 5 times, since the recursion must hit the base case before returning, and we call the function 5 times. cumsum(1) is called 4 times, cumsum(2) is called 3 times, and so on.
This is why I would say that prefix sum has both optimal substructure and overlapping subproblems.

3-sum alternative approach

I tried an alternative approach to the 3sum problem: given an array find all triplets that sum up to a given number.
Basically the approach is this: Sort the array. Once a pair of elements (say A[i] and A[j]) is selected, a binary search is done for the third element [using the equal_range function]. The index one past the last of the matching elements is saved in a variable 'c'. Since A[j+1] > A[j], we need to search only up to and excluding index c (since numbers at index c and beyond would definitely sum to more than the target). For the case j = i+1, we save the end index as 'd' instead and make c = d. For the next value of i, when j = i+1, we need to search only up to and excluding index d.
C++ implementation:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int sum3(vector<int>& A, int sum)
{
    int count = 0, n = A.size();
    sort(A.begin(), A.end());
    int c = n, d = n; // initialize c and d to the array length
    pair<vector<int>::iterator, vector<int>::iterator> p;
    for (int i = 0; i < n - 2; i++)
    {
        for (int j = i + 1; j < n - 1; j++)
        {
            if (j == i + 1)
            {
                p = equal_range(A.begin() + j + 1, A.begin() + d, sum - A[i] - A[j]);
                d = p.second - A.begin();
                if (d == n + 1) d--;
                c = d;
            }
            else
            {
                p = equal_range(A.begin() + j + 1, A.begin() + c, sum - A[i] - A[j]);
                c = p.second - A.begin();
                if (c == n + 1) c--;
            }
            count += p.second - p.first;
            for (auto it = p.first; it != p.second; ++it)
                cout << A[i] << ' ' << A[j] << ' ' << *it << '\n';
        }
    }
    return count;
}

int main() // driver function for testing
{
    vector<int> A = {4,3,2,6,4,3,2,6,4,5,7,3,4,6,2,3,4,5};
    int sum = 17;
    cout << sum3(A, sum) << endl;
    return 0;
}
I am unable to work out the upper bound time needed for this algorithm. I understand that the worst case scenario will be when the target sum is unachievably large.
My calculations yield something like:
For i=0, no. of binary searches is lg(n-2) + lg(n-3) + ... +lg(1)
For i=1, lg(n-3) + lg(n-4) + ... + lg(1)
...
...
...
For i=n-3, lg(1)
So in total: lg((n-2)!) + lg((n-3)!) + ... + lg(1!)
= lg(1^n · 2^(n-1) · 3^(n-2) · ... · (n-1)^2 · n^1)
But how do I deduce a big-O bound from this expression?
In addition to James' good answer, I would like to point out that this can actually go up to O(n^3) in the worst case, because you are running three nested for loops. Consider the case
{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
and the demanded sum is 3.
When computing complexity, I'll start by referring to the Big-O Cheat sheet. I use this sheet to classify smaller sections of the code to get their runtime performance.
E.g. if I had a simple loop it would be O(n). BinSearch (according to the cheat sheet) is O(log(n)), etc..
Next, I use the Properties of Big-O notation to composite the smaller pieces together.
So for instance if I had two loops independent of each other it would be O(n) + O(n) or O(2n) => O(n). If one of my loops were inside the other, I would multiply them. So g( f(x) ) turns into O(n^2).
Now, I know you're saying: "hey, wait, I'm changing the upper and lower bounds of the inner loop" but I don't think that really matters...here's a university level example.
So my back-of-the-napkin calculation of your runtime is O(n^2) * O(Log(n)) or O(n^2 Log(n)).
But this need not be the case. I could've done something horribly wrong. So my next step would be to start graphing the runtimes of your worst possible case. Set sum to the impossibly large value and generate larger and larger arrays. You can avoid integer overflow by using lots and lots of repeated smaller numbers.
Also, compare it to the Quadratic 3Sum Solution. That's a known O(n^2) solution. Be sure to compare worst cases, or at least the same array on both. Do both timed tests at the same time so you can start getting a feel for which is faster while you are empirically testing the runtime.
Release builds, optimized for speed.
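As a sketch of such a timed test using std::chrono (the array size and the unreachable target are arbitrary choices of mine):

#include <chrono>
#include <iostream>
#include <vector>

int sum3(std::vector<int>& A, int sum); // the function from the question

int main() {
    // Worst case: many repeated small numbers (no overflow) and a target
    // that can never be reached, so no time is spent printing triplets.
    std::vector<int> A(300, 1);
    auto t0 = std::chrono::steady_clock::now();
    int found = sum3(A, 1000000);
    auto t1 = std::chrono::steady_clock::now();
    std::cout << found << " triplets, "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n";
}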
1. For your analysis, note that
log(1) + log(2) + ... + log(k) = Theta(k log(k)).
Indeed, the upper half of this sum is log(k/2) + log(k/2+1) + ... + log(k),
so it is at least log(k/2)*k/2, which is asymptotically the same as log(k)*k already.
Similarly, we can conclude that
log(n-1) + log(n-2) + log(n-3) + ... + log(1) + // Theta((n-1) log(n-1))
log(n-2) + log(n-3) + ... + log(1) + // Theta((n-2) log(n-2))
log(n-3) + ... + log(1) + // Theta((n-3) log(n-3))
... +
log(1) = Theta(n^2 log(n))
Indeed, if we consider the logarithms which are at least log(n/2), it's the half-triangle (thus ~1/2) of the upper left quadrant (thus ~n^2/4) of the above sum, so there are Theta(n^2/8) such terms.
2. As noted by satvik in another answer, your output loop can take up to Theta(n^3) steps when the number of outputs itself is Theta(n^3), which is when they are all equal.
3. There are O(n^2)-time solutions to the 3-sum problem, which are therefore asymptotically faster than this one.
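For reference, here is a minimal sketch of one such quadratic approach (the function name is mine): sort once, fix the smallest element, then walk two pointers inward. It prints each distinct value-triplet once, since printing every index-triplet can itself cost Theta(n^3), as noted in point 2.

#include <algorithm>
#include <iostream>
#include <vector>

void sum3Quadratic(std::vector<int> A, int sum) {
    std::sort(A.begin(), A.end());
    int n = A.size();
    for (int i = 0; i < n - 2; ++i) {
        if (i > 0 && A[i] == A[i - 1]) continue; // skip duplicate anchors
        int lo = i + 1, hi = n - 1;
        while (lo < hi) {
            int s = A[i] + A[lo] + A[hi];
            if (s < sum) ++lo;
            else if (s > sum) --hi;
            else {
                std::cout << A[i] << ' ' << A[lo] << ' ' << A[hi] << '\n';
                int lv = A[lo], hv = A[hi];
                while (lo < hi && A[lo] == lv) ++lo; // skip equal values
                while (lo < hi && A[hi] == hv) --hi;
            }
        }
    }
}

int main() {
    std::vector<int> A = {4,3,2,6,4,3,2,6,4,5,7,3,4,6,2,3,4,5};
    sum3Quadratic(A, 17);
}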

Finding all paths down stairs?

I was given the following problem in an interview:
Given a staircase with N steps, you can go up with 1 or 2 steps each time. Output all possible ways you can go from bottom to top.
For example:
N = 3
Output :
1 1 1
1 2
2 1
When interviewing, I just said to use dynamic programming.
S(n) = S(n-1) +1 or S(n) = S(n-1) +2
However, during the interview, I didn't write very good code for this. How would you code up a solution to this problem?
Thanks indeed!
I won't write the code for you (since it's a great exercise), but this is a classic dynamic programming problem. You're on the right track with the recurrence; it's true that
S(0) = 1
Since if you're at the bottom of the stairs there's exactly one way to do this. We also have that
S(1) = 1
Because if you're one step high, your only option is to take a single step down, at which point you're at the bottom.
From there, the recurrence for the number of solutions is easy to find. If you think about it, any sequence of steps you take either ends with taking one small step as your last step or one large step as your last step. In the first case, each of the S(n - 1) solutions for n - 1 stairs can be extended into a solution by taking one more step, while in the second case each of the S(n - 2) solutions to the n - 2 stairs case can be extended into a solution by taking two steps. This gives the recurrence
S(n) = S(n - 2) + S(n - 1)
Notice that to evaluate S(n), you only need access to S(n - 2) and S(n - 1). This means that you could solve this with dynamic programming using the following logic:
Create an array S with n + 1 elements in it, indexed by 0, 1, 2, ..., n.
Set S[0] = S[1] = 1
For i from 2 to n, inclusive, set S[i] = S[i - 1] + S[i - 2].
Return S[n].
The runtime for this algorithm is a beautiful O(n) with O(n) memory usage.
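A direct C++ rendering of those four steps might look like this (a long long holds S(n) up to roughly n = 90, since S(n) grows like the Fibonacci numbers):

#include <algorithm>
#include <iostream>
#include <vector>

long long countWays(int n) {
    std::vector<long long> S(std::max(n + 1, 2));
    S[0] = S[1] = 1;                 // base cases
    for (int i = 2; i <= n; ++i)
        S[i] = S[i - 1] + S[i - 2];  // the recurrence
    return S[n];
}

int main() {
    std::cout << countWays(3) << '\n'; // 3: (1,1,1), (1,2), (2,1)
}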
However, it's possible to do much better than this. In particular, let's take a look at the first few terms of the sequence, which are
S(0) = 1
S(1) = 1
S(2) = 2
S(3) = 3
S(4) = 5
This looks a lot like the Fibonacci sequence, and in fact you might be able to see that
S(0) = F(1)
S(1) = F(2)
S(2) = F(3)
S(3) = F(4)
S(4) = F(5)
This suggests that, in general, S(n) = F(n + 1). We can actually prove this by induction on n as follows.
As our base cases, we have that
S(0) = 1 = F(1) = F(0 + 1)
and
S(1) = 1 = F(2) = F(1 + 1)
For the inductive step, we get that
S(n) = S(n - 2) + S(n - 1) = F(n - 1) + F(n) = F(n + 1)
And voila! We've gotten this series written in terms of Fibonacci numbers. This is great, because it's possible to compute the Fibonacci numbers in O(1) space and O(lg n) time. There are many ways to do this. One uses the fact that
F(n) = (1 / √5) (Φ^n − φ^n)
Here, Φ is the golden ratio, (1 + √5) / 2 (about 1.6), and φ is 1 − Φ, about −0.6. Because this second term drops to zero very quickly, you can get the nth Fibonacci number by computing
(1 / √5) Φ^n
And rounding down. Moreover, you can compute Φ^n in O(lg n) time by repeated squaring. The idea is that we can use this cool recurrence:
x^0 = 1
x^(2n) = x^n · x^n
x^(2n+1) = x · x^n · x^n
You can show using a quick inductive argument that this terminates in O(lg n) time, which means that you can solve this problem using O(1) space and O(lg n) time, which is substantially better than the DP solution.
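Rounding (1 / √5) Φ^n only stays exact while the Fibonacci number fits in a double's mantissa, so here is a sketch of an exact integer variant, "fast doubling", which replaces the squaring recurrence above with the identities F(2k) = F(k)(2F(k+1) − F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 and still takes O(lg n) arithmetic steps (exact up to F(93) in 64 bits):

#include <cstdint>
#include <iostream>
#include <utility>

// Returns {F(n), F(n+1)} by fast doubling, O(lg n) steps.
std::pair<std::uint64_t, std::uint64_t> fib(std::uint64_t n) {
    if (n == 0) return {0, 1};
    auto [a, b] = fib(n / 2);           // a = F(k), b = F(k+1), k = n/2
    std::uint64_t c = a * (2 * b - a);  // F(2k)
    std::uint64_t d = a * a + b * b;    // F(2k+1)
    return n % 2 == 0 ? std::make_pair(c, d) : std::make_pair(d, c + d);
}

int main() {
    // S(n) = F(n + 1): the number of ways to climb n stairs.
    for (int n : {0, 1, 2, 3, 4, 10})
        std::cout << "S(" << n << ") = " << fib(n + 1).first << '\n';
}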
Hope this helps!
You can generalize your recursive function to also take already made moves.
void steps(n, alreadyTakenSteps) {
    if (n == 0) {
        print already taken steps
    }
    if (n >= 1) {
        steps(n - 1, alreadyTakenSteps.append(1));
    }
    if (n >= 2) {
        steps(n - 2, alreadyTakenSteps.append(2));
    }
}
It's not really code, more pseudocode, but it should give you an idea.
Your solution sounds right.
S(n):
    If n = 1, return {(1)}
    If n = 2, return {(2), (1,1)}
    Return S(n-1) x {1} U S(n-2) x {2}
(U is union, x is Cartesian product)
Memoizing this is trivial, and would make it O(Fib(n)).
Great answer by @templatetypedef -- I did this problem as an exercise and arrived at the Fibonacci numbers by a different route:
The problem can basically be reduced to an application of binomial coefficients, which are handy for combination problems: the number of combinations of n things taken k at a time (called "n choose k") can be found by the equation C(n, k) = n! / (k! (n - k)!).
Given that and the problem at hand, you can calculate a solution by brute force (just doing the combination count). Taking N = 100 as an example: the number of "take 2 steps" moves must be at least zero and at most 50, so the number of combinations is the sum of C(n, k) for 0 <= k <= 50 (n = number of decisions to be made, k = number of 2s taken out of those n):
BigInteger combinationCount = 0;
for (int k = 0; k <= 50; k++)
{
    int n = 100 - k; // k two-steps cover 2k stairs, leaving 100 - k decisions in total
    BigInteger result = Fact(n) / (Fact(k) * Fact(n - k)); // C(n, k); Fact = factorial
    combinationCount += result;
}
The sum of these binomial coefficients just happens to also have a closed form: summing C(100 - k, k) for 0 <= k <= 50 gives exactly the Fibonacci number F(101), which matches S(100) = F(101) from the answer above.
Actually, you can prove that the number of ways to climb is just the Fibonacci sequence. A good explanation is here: http://theory.cs.uvic.ca/amof/e_fiboI.htm
Solving the problem, and solving it using a dynamic programming solution are potentially two different things.
http://en.wikipedia.org/wiki/Dynamic_programming
In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations
This leads me to believe you want to look for a solution that is both Recursive, and uses the Memo Design Pattern. Recursion solves a problem by breaking it into sub-problems, and the Memo design pattern allows you to cache answers, thus avoiding re-calculation. (Note that there are probably cache implementations that aren't the Memo design pattern, and you could use one of those as well).
Solving:
The first step I would take would be to solve some set of problems by hand, with varying or increasing sizes of N. This will give you a pattern to help you figure out a solution. Start with N = 1 through N = 5 (as others have stated, it may be a form of the Fibonacci sequence, but I would determine this for myself before calling the problem solved and understood).
From there, I would try to make a generalized solution that used recursion. Recursion solves a problem by breaking it into sub-problems.
From there, I would try to make a cache of previous problem inputs to the corresponding output, hence memoizing it, and making a solution that involved "Dynamic Programming".
I.e., maybe the inputs to one of your functions are 2, 5, and the correct result was 7. Make some function that looks this up from an existing list or dictionary (based on the input). It will look for a call that was made with the inputs 2, 5. If it doesn't find it, call the function to calculate it, then store it and return the answer (7). If it does find it, don't bother calculating it, and return the previously calculated answer.
Here is a simple solution to this question in very simple CSharp (I believe you can port this with almost no change to Java/C++).
I have added a little more complexity to it (adding the possibility that you can also take 3 steps). You can even generalize this code to "from 1 to k steps" if desired, with a while loop in the addition of steps (the last if statement).
I have used a combination of both dynamic programming and recursion. The use of dynamic programming avoids the recalculation of each previous step, reducing the space and time costs related to the call stack. It does, however, add some space complexity (O(maxSteps)), which I think is negligible compared to the gain.
/// <summary>
/// Given a staircase with N steps, you can go up with 1 or 2 or 3 steps each time.
/// Output all possible ways you go from bottom to top.
/// </summary>
public class NStepsHop
{
    const int maxSteps = 500; // this is arbitrary
    static long[] HistorySumSteps = new long[maxSteps];

    public static long CountWays(int n)
    {
        if (n >= 0 && HistorySumSteps[n] != 0)
        {
            return HistorySumSteps[n];
        }

        long currentSteps = 0;
        if (n < 0)
        {
            return 0;
        }
        else if (n == 0)
        {
            currentSteps = 1;
        }
        else
        {
            currentSteps = CountWays(n - 1) +
                           CountWays(n - 2) +
                           CountWays(n - 3);
        }
        HistorySumSteps[n] = currentSteps;
        return currentSteps;
    }
}
You can call it in the following manner
long result;
result = NStepsHop.CountWays(0); // result = 1
result = NStepsHop.CountWays(1); // result = 1
result = NStepsHop.CountWays(5); // result = 13
result = NStepsHop.CountWays(10); // result = 274
result = NStepsHop.CountWays(25); // result = 2555757
You can argue that in the initial case, when n = 0, it could be 0 instead of 1. I decided to go with 1; however, modifying this assumption is trivial.
The problem can be solved quite nicely using recursion:
#include <cstdio>

void generatePath(int n, char* out, int recLvl); // forward declaration

void printSteps(int n)
{
    char* output = new char[n + 1];
    generatePath(n, output, 0);
    delete[] output; // don't leak the buffer
    printf("\n");
}

void generatePath(int n, char* out, int recLvl)
{
    if (n == 0)
    {
        out[recLvl] = '\0';
        printf("%s\n", out);
    }
    if (n >= 1)
    {
        out[recLvl] = '1';
        generatePath(n - 1, out, recLvl + 1);
    }
    if (n >= 2)
    {
        out[recLvl] = '2';
        generatePath(n - 2, out, recLvl + 1);
    }
}
and in main:
int main()
{
    printSteps(0);
    printSteps(3);
    printSteps(4);
    return 0;
}
It's a weighted graph problem.
From 0 you can get to 1 only 1 way (0-1).
You can get to 2 two ways, from 0 and from 1 (0-2, 1-1).
You can get to 3 three ways, from 1 and from 2 (2 has two ways).
You can get to 4 five ways, from 2 and from 3 (2 has two ways and 3 has three ways).
You can get to 5 eight ways, ...
A recursive function should be able to handle this, working backwards from N.
Complete C# code for this:
void PrintAllWays(int n, string str)
{
    string str1 = str;
    StringBuilder sb = new StringBuilder(str1);
    if (n == 0)
    {
        Console.WriteLine(str1);
        return;
    }
    if (n >= 1)
    {
        sb = new StringBuilder(str1);
        PrintAllWays(n - 1, sb.Append("1").ToString());
    }
    if (n >= 2)
    {
        sb = new StringBuilder(str1);
        PrintAllWays(n - 2, sb.Append("2").ToString());
    }
}
Late C-based answer
#include <stdio.h>
#include <stdlib.h>

#define steps 60

static long long unsigned int MAP[steps + 1] = {1, 1, 2, 0,};

static long long unsigned int countPossibilities(unsigned int n) {
    if (!MAP[n]) {
        MAP[n] = countPossibilities(n - 1) + countPossibilities(n - 2);
    }
    return MAP[n];
}

int main() {
    printf("%llu", countPossibilities(steps));
}
Here is a C++ solution. This prints all possible paths for a given number of stairs.
#include <iostream>
#include <vector>
using namespace std;

// Utility function to print a vector of vectors
void printVecOfVec(vector<vector<unsigned int>> vecOfVec)
{
    for (unsigned int i = 0; i < vecOfVec.size(); i++)
    {
        for (unsigned int j = 0; j < vecOfVec[i].size(); j++)
        {
            cout << vecOfVec[i][j] << " ";
        }
        cout << endl;
    }
    cout << endl;
}

// Given a source vector and a number, appends the number to each source vector
// and puts the final values in the destination vector
void appendElementToVector(vector<vector<unsigned int>> src,
                           unsigned int num,
                           vector<vector<unsigned int>>& dest)
{
    for (unsigned int i = 0; i < src.size(); i++)
    {
        src[i].push_back(num);
        dest.push_back(src[i]);
    }
}

// Ladder problem
void ladderDynamic(int number)
{
    vector<vector<unsigned int>> vecNminusTwo = {{}};
    vector<vector<unsigned int>> vecNminusOne = {{1}};
    vector<vector<unsigned int>> vecResult;

    for (int i = 2; i <= number; i++)
    {
        // Empty the result vector to hold a fresh set
        vecResult.clear();
        // Append '2' to all N-2 ladder positions
        appendElementToVector(vecNminusTwo, 2, vecResult);
        // Append '1' to all N-1 ladder positions
        appendElementToVector(vecNminusOne, 1, vecResult);

        vecNminusTwo = vecNminusOne;
        vecNminusOne = vecResult;
    }
    printVecOfVec(vecResult);
}

int main()
{
    ladderDynamic(6);
    return 0;
}
Maybe I am wrong, but it should be:
S(1) = 0
S(2) = 1
Here we are considering permutations, so in that way:
S(3) = 3
S(4) = 7

Dynamic programming algorithm N, K problem

An algorithm which will take two positive numbers N and K and calculate the biggest possible number we can get by transforming N into another number via removing K digits from N.
For ex, let say we have N=12345 and K=3 so the biggest possible number we can get by removing 3 digits from N is 45 (other transformations would be 12, 15, 35 but 45 is the biggest). Also you cannot change the order of the digits in N (so 54 is NOT a solution). Another example would be N=66621542 and K=3 so the solution will be 66654.
I know this is a dynamic-programming-related problem, but I can't come up with an idea for solving it. I need to solve this in 2 days, so any help is appreciated. If you don't want to solve it for me, you don't have to, but please point me to the trick or at least some materials where I can read up on similar problems.
Thank you in advance.
This can be solved in O(L), where L = the number of digits. Why use complicated DP formulas when we can use a stack to do this?
For: 66621542
Push digits onto the stack while there are fewer than L - K digits on it:
66621. Now, pop digits off the stack while they are less than the currently read digit, and push the current digit:
read 5: 5 > 1, pop the 1 off the stack. 5 > 2, pop the 2 as well. Push 5: 6665
read 4: the stack isn't full, push 4: 66654
read 2: 2 < 4, do nothing.
You need one more condition: be sure not to pop off more items from the stack than there are digits left in your number, otherwise your solution will be incomplete!
Another example: 12345
L = 5, K = 3
put L - K = 2 digits on the stack: 12
read 3, 3 > 2, pop 2, 3 > 1, pop 1, put 3. stack: 3
read 4, 4 > 3, pop 3, put 4: 4
read 5: 5 > 4, but we can't pop the 4, otherwise we won't have enough digits left. So push 5: 45.
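A minimal C++ sketch of this stack idea, treating N as a string of digits (the function name is mine):

#include <iostream>
#include <string>

std::string removeKDigitsMax(const std::string& n, int k) {
    int keep = n.size() - k; // number of digits in the answer
    std::string stack;       // kept digits, top at the back
    for (int i = 0; i < (int)n.size(); ++i) {
        // Pop while the top is smaller than the current digit AND enough
        // digits remain to fill the stack back up to 'keep'.
        while (!stack.empty() && stack.back() < n[i] &&
               (int)stack.size() + (int)n.size() - i > keep)
            stack.pop_back();
        if ((int)stack.size() < keep)
            stack.push_back(n[i]);
    }
    return stack;
}

int main() {
    std::cout << removeKDigitsMax("12345", 3) << '\n';    // 45
    std::cout << removeKDigitsMax("66621542", 3) << '\n'; // 66654
}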
Well, to solve any dynamic programming problem, you need to break it down into recurring subsolutions.
Say we define your problem as A(n, k), which returns the largest number possible by removing k digits from n.
We can define a simple recursive algorithm from this.
Using your example, A(12345, 3) = max { A(2345, 2), A(1345, 2), A(1245, 2), A(1235, 2), A(1234, 2) }
More generally, A(n, k) = max { A(n with 1 digit removed, k - 1) }
And your base case is A(n, 0) = n.
Using this approach, you can create a table that caches the values of n and k.
int A(int n, int k)
{
    typedef std::pair<int, int> input;
    static std::map<input, int> cache;

    if (k == 0) return n;

    input i(n, k);
    if (cache.find(i) != cache.end())
        return cache[i];

    cache[i] = /* ... as above ... */;
    return cache[i];
}
Now, that's the straightforward solution, but there is a better solution that works with a very small one-dimensional cache. Consider rephrasing the question like this: "Given a string n and an integer k, find the lexicographically greatest subsequence of n with k digits removed". This is essentially what your problem is, and the solution is much simpler.
We can now define a different function B(i, j), which gives the largest lexicographical sequence of length (i - j), using only the first i digits of n (in other words, having removed j digits from the first i digits of n).
Using your example again, we would have:
B(1, 0) = 1
B(2, 0) = 12
B(3, 0) = 123
B(3, 1) = 23
B(3, 2) = 3
etc.
With a little bit of thinking, we can find the recurrence relation:
B(i, j) = max( 10·B(i-1, j) + n_i , B(i-1, j-1) )
or, if j = i then B(i, j) = B(i-1, j-1)
and B(0, 0) = 0
And you can code that up in a very similar way to the above.
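For instance, a sketch of that bottom-up table (names are mine; it assumes the result fits in a 64-bit integer, so for longer inputs you would store digit strings instead of numbers):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

long long largestAfterRemoving(const std::string& n, int k) {
    int len = n.size(); // assumes 0 <= k < len
    // B[i][j] = largest value formed from the first i digits with j removed.
    std::vector<std::vector<long long>> B(len + 1, std::vector<long long>(k + 1, 0));
    for (int i = 1; i <= len; ++i) {
        int digit = n[i - 1] - '0'; // n_i in the recurrence
        for (int j = 0; j <= std::min(i, k); ++j) {
            if (j == i) { B[i][j] = 0; continue; }     // everything removed
            long long kept = 10 * B[i - 1][j] + digit; // keep digit i
            B[i][j] = j > 0 ? std::max(kept, B[i - 1][j - 1]) // or drop it
                            : kept;
        }
    }
    return B[len][k];
}

int main() {
    std::cout << largestAfterRemoving("12345", 3) << '\n';    // 45
    std::cout << largestAfterRemoving("66621542", 3) << '\n'; // 66654
}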
The trick to solving a dynamic programming problem is usually figuring out what the structure of a solution looks like, and more specifically whether it exhibits optimal substructure.
In this case, it seems to me that the optimal solution with N=12345 and K=3 would have an optimal solution to N=12345 and K=2 as part of the solution. If you can convince yourself that this holds, then you should be able to express a solution to the problem recursively. Then either implement this with memoisation or bottom-up.
The most important elements of any dynamic programming solution are:
Defining the right subproblems
Defining a recurrence relation between the answer to a sub-problem and the answer to smaller sub-problems
Finding base cases, the smallest sub-problems whose answer does not depend on any other answers
Figuring out the scan order in which you must solve the sub-problems (so that you never use the recurrence relation based on uninitialized data)
You'll know that you have the right subproblems defined when
The problem you need the answer to is one of them
The base cases really are trivial
The recurrence is easy to evaluate
The scan order is straightforward
In your case, it is straightforward to specify the subproblems. Since this is probably homework, I will just give you the hint that you might wish that N had fewer digits to start off with.
Here's what I think:
Consider the first k + 1 digits from the left. Find the biggest one and remove the digits to its left. If the biggest digit occurs more than once, take the leftmost occurrence and remove the digits to the left of that. Store the number of removed digits (call it j).
Do the same thing with the new number as N and k + 1 - j as K. Repeat until k + 1 - j equals 1 (hopefully it will, if I'm not mistaken).
The number you end up with will be the number you're looking for.