Dynamic Programming Algos With Coding
Dynamic programming reduces computation time and the number of comparisons by storing the results of past computations. The basic idea of dynamic programming is to store the results of previous calculations and reuse them in the future instead of recalculating them.
We can also see Dynamic Programming as dividing a particular problem into subproblems and then
storing the result of these subproblems to calculate the result of the actual problem.
For example, the nth Fibonacci number is defined by the recurrence
fib(n) = fib(n-1) + fib(n-2)
with the base cases
fib(0) = 0
fib(1) = 1
We can see that the above function fib() to find the nth fibonacci number is divided into two
subproblems fib(n-1) and fib(n-2) each one of which will be further divided into subproblems and so
on.
CPP
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}
Below is the recursion tree for the recursive solution to find the N-th Fibonacci number:
                           fib(5)
                        /         \
                  fib(4)           fib(3)
                 /      \         /      \
            fib(3)    fib(2)   fib(2)   fib(1)
           /     \    /    \   /    \
       fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
       /    \
   fib(1) fib(0)
We can see that the function fib(3) is being called 2 times. If we would have stored the value of
fib(3), then instead of computing it again, we could have reused the old stored value.
The time complexity of the recursive solution is exponential. However, we can improve the time
complexity by using Dynamic Programming approach and storing the results of the subproblems as
shown below:
CPP
int fib(int n)
{
    // Array to store already computed Fibonacci numbers
    int f[n + 1];
    int i;
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        // Add the previous two numbers in the series
        // and store it
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}
We have already discussed the basics of the Overlapping Subproblems property of a problem that can be solved using a Dynamic Programming algorithm. Let us extend our previous example of the Fibonacci Number to discuss the overlapping subproblems property in detail.
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}
                           fib(5)
                        /         \
                  fib(4)           fib(3)
                 /      \         /      \
            fib(3)    fib(2)   fib(2)   fib(1)
           /     \    /    \   /    \
       fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
       /    \
   fib(1) fib(0)
We already discussed how storing results of the subproblems can be effective in reducing the
number of calculations or operations to obtain the final result. As in the above recursion tree, we can
see that different values like fib(1), fib(0), fib(2) are being calculated more than once. There are two
different ways to store the values so that these values can be reused:
1. Memoization (Top Down)
2. Tabulation (Bottom Up)
1. Memoization (Top Down): The memoized program for a problem is similar to the recursive
version with a small modification that it looks into a lookup table before computing
solutions. We initialize a lookup array with all initial values as NIL. Whenever we need the
solution to a subproblem, we first look into the lookup table. If the precomputed value is
there then we return that value, otherwise, we calculate the value and put the result in the
lookup table so that it can be reused later. Following is the memoized version for nth
Fibonacci Number.
C++
#include <bits/stdc++.h>
using namespace std;
#define NIL -1
#define MAX 100
int lookup[MAX];
// Initialize all lookup table entries to NIL
void _initialize()
{
    int i;
    for (i = 0; i < MAX; i++)
        lookup[i] = NIL;
}
// Returns the nth Fibonacci number using memoization
int fib(int n)
{
    if (lookup[n] == NIL) {
        if (n <= 1)
            lookup[n] = n;
        else
            lookup[n] = fib(n - 1) + fib(n - 2);
    }
    return lookup[n];
}
// Driver code
int main()
{
    int n = 40;
    _initialize();
    cout << "Fibonacci number is " << fib(n);
    return 0;
}
Output:
Fibonacci number is 102334155
2. Tabulation (Bottom Up): The tabulated program for a given problem builds a table in bottom
up fashion and returns the last entry from table. For example, for the same Fibonacci
number, we first calculate fib(0) then fib(1) then fib(2) then fib(3) and so on. So literally, we
are building the solutions of subproblems bottom-up. Following is the tabulated version for
nth Fibonacci Number.
C/C++
#include <bits/stdc++.h>
using namespace std;
int fib(int n)
{
    int f[n + 1];
    int i;
    f[0] = 0; f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}
// Driver Code
int main()
{
    int n = 9;
    cout << "Fibonacci number is " << fib(n);
    return 0;
}
Output:
Fibonacci number is 34
Both Tabulated and Memoized approaches store the solutions of subproblems. In the Memoized version, the table is filled on demand, while in the Tabulated version, starting from the first entry, all entries are filled one by one. Unlike the Tabulated version, not all entries of the lookup table are necessarily filled in the Memoized version.
A given problem has Optimal Substructure Property if the optimal solution of the given problem can
be obtained by using optimal solutions of its subproblems.
That is, if a problem x is divided into subproblems A and B, then the optimal solution of x can be obtained by combining the optimal solutions to the subproblems A and B.
For example, the Shortest Path problem has following optimal substructure property:
If a node x lies on the shortest path from a source node u to a destination node v, then the shortest path from u to v is the combination of the shortest path from u to x and the shortest path from x to v. The standard all-pairs shortest path algorithm Floyd–Warshall and the single-source shortest path algorithm Bellman–Ford are typical examples of Dynamic Programming.
Let us consider a simple example of the 0-1 Knapsack Problem. Given values and weights associated with N items, the task is to put these items into a Knapsack of capacity W such that the total value of the items in the Knapsack is the maximum possible. You can either include an item completely or not include it at all; it is not allowed to add a fraction of an item.
For Example:
Input: N = 3, W = 50, values[] = {60, 100, 120}, weights[] = {10, 20, 30}
The answer will be 220. We will pick the 2nd and 3rd elements
and add them to the Knapsack for maximum value.
Optimal Substructure: To consider all subsets of items, there can be two cases for every item: (1) the
item is included in the optimal subset, (2) not included in the optimal set.
Therefore, the maximum value that can be obtained from N items is the max of the following two
values.
1. Maximum value obtained by n-1 items and W weight (excluding nth item).
2. Value of nth item plus maximum value obtained by n-1 items and W minus the weight of the
nth item (including nth item).
If the weight of the nth item is greater than W, then the nth item cannot be included and case 1 is
the only possibility.
Overlapping Subproblems: Let us first look at the recursive solution to the above problem:
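The recursive function itself is not shown above; a minimal sketch consistent with the two cases just described (the name knapSackRec is my own choice) might look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Plain recursive 0-1 knapsack: maximum value using the
// first n items within capacity W (no memoization)
int knapSackRec(int W, int wt[], int val[], int n)
{
    // Base case: no items left or no capacity left
    if (n == 0 || W == 0)
        return 0;

    // If the nth item is heavier than the remaining
    // capacity, it cannot be included (case 1 only)
    if (wt[n - 1] > W)
        return knapSackRec(W, wt, val, n - 1);

    // Otherwise take the max of including or excluding it
    return max(val[n - 1] + knapSackRec(W - wt[n - 1], wt, val, n - 1),
               knapSackRec(W, wt, val, n - 1));
}
```

For the example above (values {60, 100, 120}, weights {10, 20, 30}, W = 50) this returns 220.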
It should be noted that the above function computes the same subproblems again and again. See the
following recursion tree when the above recursive function is evaluated with the sample examples.
Since sub-problems are evaluated again, this problem has Overlapping Subproblems property. So the
0-1 Knapsack problem has both properties of a dynamic programming problem. Like other typical
Dynamic Programming(DP) problems, recomputations of same subproblems can be avoided by
constructing a temporary array K[][] in a bottom-up manner. Following is Dynamic Programming
based implementation.
C++
#include <bits/stdc++.h>
using namespace std;
// Returns the maximum value that can be put
// in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    int i, w;
    int K[n + 1][W + 1];
    // Build table K[][] in bottom-up manner
    for (i = 0; i <= n; i++)
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;
            else if (wt[i - 1] <= w)
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w]);
            else
                K[i][w] = K[i - 1][w];
        }
    return K[n][W];
}
// Driver Code
int main()
{
    int val[] = { 60, 100, 120 }, wt[] = { 10, 20, 30 };
    int W = 50;
    cout << knapSack(W, wt, val, 3);
    return 0;
}
Output
220
Dynamic Programming (DP) is a technique that solves certain types of problems in polynomial time. Dynamic Programming solutions are faster than the exponential brute-force method, and their correctness can be easily proved. Before we study how to think dynamically about a problem, we need to learn:
1. Overlapping Subproblems
2. Optimal Substructure
Steps to solve a DP problem:
1) Identify if it is a DP problem
2) Decide a state expression with the least parameters
3) Formulate the state relationship
4) Add memoization or tabulation for the state
Step 1 : Identifying if it is a DP problem
• Typically, all problems that require maximizing or minimizing a certain quantity, counting problems that ask to count arrangements under certain conditions, or certain probability problems can be solved using Dynamic Programming.
• All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic dynamic programming problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be sure that it can be solved using DP.
Step 2 : Deciding the state

DP problems are all about states and their transitions. This is the most basic step, and it must be done very carefully because the state transition depends on the choice of state definition you make. So, let's see what we mean by the term "state".

State: A state can be defined as the set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce state space.

For example, in our famous Knapsack problem, we define our state by two parameters, index and weight, i.e. DP[index][weight]. Here DP[index][weight] tells us the maximum profit we can make by taking items from the range 0 to index with the capacity of the sack being weight. Therefore, the parameters index and weight together can uniquely identify a subproblem for the knapsack problem.

So, our first step will be deciding a state for the problem after identifying that the problem is a DP problem. As we know, DP is all about using calculated results to formulate the final result, so our next step will be to find a relation between previous states to reach the current state.
Step 3 : Formulating a relation among the states

This is the hardest part of solving a DP problem and requires a lot of intuition, observation, and practice. Let's understand it by considering a sample problem: given the numbers {1, 3, 5}, count the total number of ways we can form a number n using their sum (repetitions and different orderings allowed). For example, n = 6 can be formed in the following 8 ways:
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
Let's think dynamically about this problem. First of all, we decide a state for the given problem. We will take a parameter n to decide the state, as it can uniquely identify any subproblem. So, our state dp will look like state(n). Here, state(n) means the total number of arrangements to form n by using {1, 3, 5} as elements.

Now, we need to compute state(n). How to do it? Here the intuition comes into action. As we can only use 1, 3 or 5 to form a given number, let us assume that we know the results for n = 1, 2, 3, 4, 5, 6; that is, we know state(n = 1), state(n = 2), ..., state(n = 6). Now, we wish to know the result of state(n = 7). We can only add 1, 3 or 5, so we can get a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state(n = 6)
Eg : [ (1+1+1+1+1+1) + 1] [ (1+1+1+3) + 1] [ (1+1+3+1) + 1] [ (1+3+1+1) + 1] [ (3+1+1+1) + 1] [ (3+3) + 1] [ (1+5) + 1] [ (5+1) + 1]

2) Adding 3 to all possible combinations of state(n = 4)
Eg : [(1+1+1+1) + 3] [(1+3) + 3] [(3+1) + 3]

3) Adding 5 to all possible combinations of state(n = 2)
Eg : [ (1+1) + 5]

Think carefully and satisfy yourself that the above three cases cover all possible ways to form a sum total of 7. Therefore, we can say that
state(7) = state(6) + state(4) + state(2)
or
state(7) = state(7-1) + state(7-3) + state(7-5)
In general, state(n) = state(n-1) + state(n-3) + state(n-5). So, our code will look like:
CPP
// Returns the number of arrangements to
// form 'n'
int solve(int n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    return solve(n - 1) + solve(n - 3) + solve(n - 5);
}
The above code is exponential, as it calculates the same states again and again. So, we just need to add memoization.
Step 4 : Adding memoization or tabulation for the state

This is the easiest part of a dynamic programming solution. We just need to store the state answer so that the next time that state is required, we can directly use it from our memory. Adding memoization to the above code:
CPP
// cache of state answers; all entries of dp[] are
// initialized to -1 (e.g. memset(dp, -1, sizeof(dp)))
int dp[MAXN];

// Returns the number of arrangements to form 'n'
int solve(int n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    // checking if already calculated
    if (dp[n] != -1)
        return dp[n];
    // storing the result and returning
    return dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5);
}
Another way is to add tabulation and make solution iterative.
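The iterative (tabulated) variant is not shown in the text; a minimal sketch of it, filling the same recurrence from the base state upward (the name solveTab is my own), could be:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Tabulated (bottom-up) version of the same recurrence:
// state(n) = state(n-1) + state(n-3) + state(n-5)
int solveTab(int n)
{
    vector<int> dp(n + 1, 0);
    dp[0] = 1; // one way to form 0: the empty sum
    for (int i = 1; i <= n; i++) {
        if (i - 1 >= 0) dp[i] += dp[i - 1];
        if (i - 3 >= 0) dp[i] += dp[i - 3];
        if (i - 5 >= 0) dp[i] += dp[i - 5];
    }
    return dp[n];
}
```

For n = 6 this gives 8, matching the list of arrangements above.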
There are two different ways to store the values so that the values of a sub-problem can be reused. Here, we will discuss two patterns of solving dynamic programming (DP) problems:
1. Tabulation: Bottom Up
2. Memoization: Top Down
Before getting to the definitions of the above two terms consider the following statements:
• Version 1: I will study the theory of DP from GeeksforGeeks, then I will practice some
problems on classic DP and hence I will master DP.
• Version 2: To master DP, I would have to practice dynamic programming problems; to practice those problems, firstly, I would have to study some theory of DP from GeeksforGeeks.
Both versions say the same thing; the difference simply lies in the way of conveying the message, and that's exactly what Bottom-Up and Top-Down DP do. Version 1 can be related to Bottom-Up DP and Version 2 can be related to Top-Down DP.
Tabulation Method - Bottom-Up Dynamic Programming
As the name itself suggests, this means starting from the bottom and accumulating answers up to the top. Let's discuss it in terms of state transition.
Let's describe a state for our DP problem to be dp[x] with dp[0] as base state and dp[n] as our
destination state. So, we need to find the value of destination state i.e dp[n].
If we start our transition from our base state i.e dp[0] and follow our state transition relation to reach
our destination state dp[n], we call it the Bottom-Up approach as it is quite clear that we started our
transition from the bottom base state and reached the topmost desired state.
To understand this, let's first write some code to calculate the factorial of a number using the bottom-up approach. Once again, as per our general procedure for solving a DP problem, we first define a state. In this case, we define the state as dp[x], where dp[x] is the factorial of x.
int dp[MAXN];

// base case
dp[0] = 1;
for (int i = 1; i <= n; i++)
    dp[i] = dp[i - 1] * i;
The above code clearly follows the bottom-up approach as it starts its transition from the bottom-
most base case dp[0] and reaches its destination state dp[n]. Here, we may notice that the DP table
is being populated sequentially and we are directly accessing the calculated states from the table
itself and hence, we call it the tabulation method.
Memoization Method - Top-Down Dynamic Programming
Once again, let's describe it in terms of state transition. If we need to find the value for some state, say dp[n], then instead of starting from the base state, i.e. dp[0], we ask for our answer from the states that can reach the destination state dp[n] following the state transition relation. This is the top-down fashion of DP.
Here, we start our journey from the top most destination state and compute its answer by taking in
count the values of states that can reach the destination state, till we reach the bottom-most base
state.
Once again, let's write the code for the factorial problem in the top-down fashion
// Memoized cache of calculated states,
// initialized to -1
int dp[MAXN];

// return fact x!
int solve(int x)
{
    if (x == 0)
        return 1;
    if (dp[x] != -1)
        return dp[x];
    return dp[x] = x * solve(x - 1);
}
As we can see, we are caching the computed states up to a limit so that if we get a call from the same state next time, we simply return it from memory. This is why we call it memoization: we are storing the most recent state values.
In this case, the memory layout is linear, which is why it may seem that the memory is being filled in a sequential manner like in the tabulation method. But consider any other top-down DP with a 2D memory layout, like Min Cost Path; there the memory is not filled in a sequential manner.
Description -
Following are common definitions of Binomial Coefficients -
1. A binomial coefficient C(n, k) can be defined as the coefficient of X^k in the expansion of (1 +
X)^n.
2. A binomial coefficient C(n, k) also gives the number of ways, disregarding order, that k
objects can be chosen from among n objects; more formally, the number of k-element
subsets (or k-combinations) of an n-element set.
Write a function that takes two parameters n and k and returns the value of Binomial Coefficient C(n,
k). For example, your function should return 6 for n = 4 and k = 2, and it should return 10 for n = 5
and k = 2.
Optimal Substructure
The value of C(n, k) can be recursively calculated using the following standard formula for Binomial Coefficients:
C(n, k) = C(n-1, k-1) + C(n-1, k)
C(n, 0) = C(n, n) = 1
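As a minimal sketch of this recurrence (a plain recursive version, which the text does not show; the name binomialCoeffRec is my own):

```cpp
// Plain recursive Binomial Coefficient, straight from the
// recurrence C(n, k) = C(n-1, k-1) + C(n-1, k)
int binomialCoeffRec(int n, int k)
{
    if (k < 0 || k > n)    // out of range: no such subsets
        return 0;
    if (k == 0 || k == n)  // base cases C(n, 0) = C(n, n) = 1
        return 1;
    return binomialCoeffRec(n - 1, k - 1) + binomialCoeffRec(n - 1, k);
}
```

For example, binomialCoeffRec(4, 2) returns 6 and binomialCoeffRec(5, 2) returns 10, matching the problem statement above.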
Overlapping Subproblems
It should be noted that the above function computes the same subproblems again and again. See the following recursion tree for n = 5 and k = 2. The function C(3, 1) is called two times. For large values of n, there will be many common subproblems.
C(5, 2)
/ \
C(4, 1) C(4, 2)
/ \ / \
C(3, 0) C(3, 1) C(3, 1) C(3, 2)
/ \ / \ / \
C(2, 0) C(2, 1) C(2, 0) C(2, 1) C(2, 1) C(2, 2)
/ \ / \ / \
C(1, 0) C(1, 1) C(1, 0) C(1, 1) C(1, 0) C(1, 1)
Since the same subproblems are called again, this problem has the Overlapping Subproblems property.
Pseudo Code
// Returns value of Binomial Coefficient C(n, k)
int binomialCoeff(int n, int k)
{
    int C[n+1][k+1]
    // Build table C[][] in bottom up manner
    for i = 0 to n
        for j = 0 to min(i, k)
            C[i][j] = (j == 0 or j == i) ? 1 : C[i-1][j-1] + C[i-1][j]
    return C[n][k]
}
Given an array of integers where each element represents the max number of steps that can be
made forward from that element. Write a function to return the minimum number of jumps to reach
the end of the array (starting from the first element).
Example
Input: arr[] = {2, 3, 1, 1, 4}
Output: 2 (jump from index 0 to index 1, then to the last index)
Solution -
We build a jumps[ ] array from left to right such that jumps[ i ] indicates the minimum number of jumps needed to reach arr[ i ] from arr[ 0 ]. Finally, we return jumps[ n-1 ].
Pseudo Code
jumps[0] = 0
for i = 1 to n-1
    jumps[i] = INF
    for j = 0 to i-1
        // index i is reachable from index j
        if (j + arr[j] >= i and jumps[j] != INF)
            jumps[i] = min(jumps[i], jumps[j] + 1)
return jumps[n-1]
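A runnable version of this sketch (the function name minJumps is my own choice) might look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of jumps to reach the end of arr[],
// building jumps[i] left to right as described above
int minJumps(int arr[], int n)
{
    vector<int> jumps(n, INT_MAX);
    jumps[0] = 0;
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            // index i is reachable from index j
            if (j + arr[j] >= i && jumps[j] != INT_MAX)
                jumps[i] = min(jumps[i], jumps[j] + 1);
    return jumps[n - 1];
}
```

For arr[] = {2, 3, 1, 1, 4} this returns 2. The two nested loops make the running time O(n^2).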
Description-
The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of
a given sequence such that all elements of the subsequence are sorted in increasing order. For
example, the length of LIS for {10, 22, 9, 33, 21, 50, 41, 60, 80} is 6 and LIS is {10, 22, 33, 50, 60, 80}.
Optimal Substructure
Let arr[0..n-1] be the input array and L(i) be the length of the LIS ending at index i such that arr[i] is the last element of the LIS. Then L(i) can be written recursively as:
L(i) = 1 + max( L(j) ) where 0 <= j < i and arr[j] < arr[i]; or
L(i) = 1, if no such j exists.
To find the LIS for a given array, we need to return max( L(i) ) where 0 <= i < n.
Thus, we see the LIS problem satisfies the optimal substructure property as the main problem can be
solved using solutions to subproblems.
Overlapping Subproblems
Considering a recursive implementation of this recurrence, following is the recursion tree for an array of size 4. lis(n) gives us the length of the LIS for arr[ ].
lis(4)
/ | \
lis(3) lis(2) lis(1)
/ \ /
lis(2) lis(1) lis(1)
/
lis(1)
We can see that there are many subproblems which are solved again and again. So this problem has the Overlapping Subproblems property, and recomputation of the same subproblems can be avoided by either using Memoization or Tabulation.
Pseudo Code
// Returns the length of the LIS of arr[0..n-1] (tabulation)
int lis(int arr[], int n)
{
    int L[n]
    for i = 0 to n-1
        L[i] = 1
        for j = 0 to i-1
            if (arr[j] < arr[i] and L[i] < L[j] + 1)
                L[i] = L[j] + 1
    return maximum of all L[i]
}
Description -
Given weights and values of n items, put these items in a knapsack of capacity W to get the
maximum total value in the knapsack.
In other words, given two integer arrays val[0..n-1] and wt[0..n-1] which represent values and
weights associated with n items respectively. Also given an integer W which represents knapsack
capacity, find out the maximum value subset of val[] such that sum of the weights of this subset is
smaller than or equal to W. You cannot break an item, either pick the complete item, or don't pick it
(0-1 property).
Optimal Substructure
To consider all subsets of items, there can be two cases for every item: (1) the item is included in the
optimal subset, (2) not included in the optimal set.
Therefore, the maximum value that can be obtained from n items is a max of the following two
values.
1. Maximum value obtained by n-1 items and W weight (excluding nth item).
2. Value of nth item plus maximum value obtained by n-1 items and W minus weight of the nth
item (including nth item).
If weight of nth item is greater than W, then the nth item cannot be included and case 1 is the only
possibility.
Overlapping Subproblems
Since subproblems are evaluated again, this problem has the Overlapping Subproblems property. So the 0-1 Knapsack problem has both properties of a dynamic programming problem -
Pseudo Code
// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
int K[n+1][W+1]
// Build table K[][] in bottom up manner
for (i = 0; i <= n; i++)
{
for (w = 0; w <= W; w++)
{
if (i==0 || w==0)
K[i][w] = 0
else if (wt[i-1] <= w)
K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
else
K[i][w] = K[i-1][w]
}
}
return K[n][W]
}
LCS Problem Statement: Given two sequences, find the length of longest subsequence present in
both of them. A subsequence is a sequence that appears in the same relative order, but not
necessarily contiguous. For example, "abc", "abg", "bdf", "aeg", "acefg", etc. are subsequences of
"abcdefg".
In order to find the complexity of the brute force approach, we first need to know the number of possible different subsequences of a string of length n, i.e., the number of subsequences with lengths ranging over 1, 2, .., n. Recall from the theory of permutations and combinations that the number of combinations with 1 element is C(n, 1), the number of combinations with 2 elements is C(n, 2), and so forth. We know that C(n, 0) + C(n, 1) + C(n, 2) + ... + C(n, n) = 2^n. So a string of length n has 2^n - 1 different possible subsequences, since we do not consider the subsequence of length 0. This implies that the time complexity of the brute force approach will be O(n * 2^n). Note that it takes O(n) time to check if a subsequence is common to both strings. This time complexity can be improved using dynamic programming.
It is a classic computer science problem, the basis of diff (a file comparison program that outputs the
differences between two files), and has applications in bioinformatics.
Examples:
LCS for input Sequences "ABCDGH" and "AEDFHR" is "ADH" of length 3.
LCS for input Sequences "AGGTAB" and "GXTXAYB" is "GTAB" of length 4.
The naive solution for this problem is to generate all subsequences of both given sequences and find
the longest matching subsequence. This solution is exponential in term of time complexity. Let us see
how this problem possesses both important properties of a Dynamic Programming (DP) Problem.
1) Optimal Substructure:
Let the input sequences be X[0..m-1] and Y[0..n-1] of lengths m and n respectively. And let L(X[0..m-
1], Y[0..n-1]) be the length of LCS of the two sequences X and Y. Following is the recursive definition
of L(X[0..m-1], Y[0..n-1]).
• If last characters of both sequences match (or X[m-1] == Y[n-1]) then
L(X[0..m-1], Y[0..n-1]) = 1 + L(X[0..m-2], Y[0..n-2])
• If last characters of both sequences do not match (or X[m-1] != Y[n-1]) then
L(X[0..m-1], Y[0..n-1]) = MAX ( L(X[0..m-2], Y[0..n-1]), L(X[0..m-1], Y[0..n-2]) )
Examples:
1) Consider the input strings "AGGTAB" and "GXTXAYB". Last characters match for the strings. So
length of LCS can be written as:
L("AGGTAB", "GXTXAYB") = 1 + L("AGGTA", "GXTXAY")
2) Consider the input strings "ABCDGH" and "AEDFHR". Last characters do not match for the strings.
So length of LCS can be written as:
L(“ABCDGH”, “AEDFHR”) = MAX ( L(“ABCDG”, “AEDFHR”), L(“ABCDGH”, “AEDFH”) )
So the LCS problem has optimal substructure property as the main problem can be solved using
solutions to subproblems.
2) Overlapping Subproblems:
Following is simple recursive implementation of the LCS problem. The implementation simply follows
the recursive structure mentioned above.
C++
#include <bits/stdc++.h>
using namespace std;
/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m - 1] == Y[n - 1])
        return 1 + lcs(X, Y, m - 1, n - 1);
    else
        return max(lcs(X, Y, m, n - 1), lcs(X, Y, m - 1, n));
}
/* Driver code */
int main()
{
    char X[] = "AGGTAB", Y[] = "GXTXAYB";
    int m = strlen(X);
    int n = strlen(Y);
    cout << "Length of LCS is " << lcs(X, Y, m, n);
    return 0;
}
Output
Length of LCS is 4
The time complexity of the above naive recursive approach is O(2^n) in the worst case, which happens when all characters of X and Y mismatch, i.e., the length of the LCS is 0.
Considering the above implementation, following is a partial recursion tree for input strings "AXYT" and "AYZX":

                          lcs("AXYT", "AYZX")
                         /                   \
          lcs("AXY", "AYZX")               lcs("AXYT", "AYZ")
          /               \                /               \
lcs("AX", "AYZX")  lcs("AXY", "AYZ")  lcs("AXY", "AYZ")  lcs("AXYT", "AY")
In the above partial recursion tree, lcs("AXY", "AYZ") is being solved twice. If we draw the complete recursion tree, then we can see that there are many subproblems which are solved again and again. So this problem has the Overlapping Subproblems property, and recomputation of the same subproblems can be avoided by either using Memoization or Tabulation.
C++
#include <bits/stdc++.h>
using namespace std;
/* Memoized LCS; dp[][] entries are initialized to -1 */
int lcs(char *X, char *Y, int m, int n, vector<vector<int>>& dp)
{
    if (m == 0 || n == 0)
        return 0;
    if (dp[m][n] != -1) {
        return dp[m][n];
    }
    if (X[m - 1] == Y[n - 1])
        return dp[m][n] = 1 + lcs(X, Y, m - 1, n - 1, dp);
    return dp[m][n] = max(lcs(X, Y, m, n - 1, dp),
                          lcs(X, Y, m - 1, n, dp));
}
/* Driver code */
int main()
{
    char X[] = "AGGTAB", Y[] = "GXTXAYB";
    int m = strlen(X);
    int n = strlen(Y);
    vector<vector<int>> dp(m + 1, vector<int>(n + 1, -1));
    cout << "Length of LCS is " << lcs(X, Y, m, n, dp);
    return 0;
}
Output
Length of LCS is 4
C++
#include <bits/stdc++.h>
using namespace std;
/* Tabulated LCS: builds L[m+1][n+1] bottom-up */
int lcs(char *X, char *Y, int m, int n)
{
    int L[m + 1][n + 1];
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
            else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
        }
    return L[m][n];
}
int main()
{
    char X[] = "AGGTAB", Y[] = "GXTXAYB";
    cout << "Length of LCS is " << lcs(X, Y, strlen(X), strlen(Y));
    return 0;
}
Output
Length of LCS is 4
Time Complexity of the above implementation is O(mn) which is much better than the worst-case
time complexity of Naive Recursive implementation.
Auxiliary Space: O(m * n)
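Since filling row i of L[][] only reads row i-1, the auxiliary space can be reduced from O(m * n) to O(n) by keeping just two rolling rows. A minimal sketch of this (not part of the original text; the name lcsSpaceOptimized is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LCS length using two rolling rows instead of the full
// (m+1) x (n+1) table, reducing auxiliary space to O(n)
int lcsSpaceOptimized(const string &X, const string &Y)
{
    int m = X.size(), n = Y.size();
    vector<vector<int>> L(2, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++) {
        int cur = i % 2, prev = 1 - cur; // alternate the two rows
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1])
                L[cur][j] = L[prev][j - 1] + 1;
            else
                L[cur][j] = max(L[prev][j], L[cur][j - 1]);
        }
    }
    return L[m % 2][n];
}
```

For "AGGTAB" and "GXTXAYB" this returns 4, the same as the full-table version.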
Coin Change
Given a value sum, if we want to make change for sum cents, and we have an infinite supply of each
of coins[] = { coins1, coins2, .. , coinsn} valued coins, how many ways can we make the change? The
order of coins doesn't matter.
Examples:
Input: sum = 4, coins[] = {1, 2, 3}
Output: 4
Explanation: There are four solutions: {1, 1, 1, 1}, {1, 1, 2}, {2, 2}, {1, 3}.
1) Optimal Substructure
To count the total number of solutions, we can divide all solutions into two sets: solutions that do not contain the nth coin, and solutions that contain at least one instance of the nth coin.
Let count(coins[], n, sum) be the function to count the number of solutions; then it can be written as the sum of count(coins[], n-1, sum) and count(coins[], n, sum-coins[n-1]).
Therefore, the problem has optimal substructure property as the problem can be solved using
solutions to subproblems.
2) Overlapping Subproblems
Following is a simple recursive implementation of the Coin Change problem. The implementation
simply follows the recursive structure mentioned above.
3) Approach (Algorithm)
Here, each coin of a given denomination can be used an infinite number of times (repetition allowed); this is what we call UNBOUNDED KNAPSACK. We have 2 choices for a coin of a particular denomination: either i) include it, or ii) exclude it. But here, inclusion is not limited to just once; we can include any denomination any number of times until sum < 0.
Basically, If we are at coins[n-1], we can take as many instances of that coin ( unbounded inclusion )
i.e count(coins, n, sum - coins[n-1] ); then we move to coins[n-2]. After moving to coins[n-2], we
can't move back and can't make choices for coins[n-1] i.e count(coins, n-1, sum ).
Finally, as we have to find the total number of ways, so we will add these 2 possible choices,
i.e count(coins, n, sum - coins[n-1] ) + count(coins, n-1, sum ); which will be our required answer.
C++
#include <bits/stdc++.h>
using namespace std;
// Returns the count of ways we can sum
// coins[0...n-1] to obtain sum
int count(int coins[], int n, int sum)
{
    // If sum is 0 then there is 1 solution
    // (do not include any coin)
    if (sum == 0)
        return 1;
    // If sum is less than 0 then no
    // solution exists
    if (sum < 0)
        return 0;
    // If there are no coins and sum is
    // greater than 0, then no
    // solution exists
    if (n <= 0)
        return 0;
    // count is sum of solutions (i) excluding coins[n-1]
    // (ii) including at least one coins[n-1]
    return count(coins, n - 1, sum) + count(coins, n, sum - coins[n - 1]);
}
// Driver code
int main()
{
    int coins[] = { 1, 2, 3 };
    int sum = 4;
    cout << count(coins, 3, sum);
    return 0;
}
Output
4
It should be noted that the above function computes the same subproblems again and again. See the following recursion tree for coins[] = {1, 2, 3} and sum = 5.
The function C({1}, 3) is called two times. If we draw the complete tree, then we can see that there
are many subproblems being called more than once.
                         C({1,2,3}, 5)
                        /             \
            C({1,2,3}, 2)              C({1,2}, 5)
            /          \               /         \
  C({1,2,3}, -1)  C({1,2}, 2)    C({1,2}, 3)     C({1}, 5)
                  /      \       /       \        /      \
                 .        . C({1,2}, 1) C({1}, 3) C({1}, 4) C({}, 5)
                                                 /      \
                                            C({1}, 3)  C({}, 4)
Since the same subproblems are called again, this problem has the Overlapping Subproblems property. So the Coin Change problem has both properties of a dynamic programming problem. Like other typical Dynamic Programming (DP) problems, recomputations of the same subproblems can be avoided by constructing a temporary array table[][] in a bottom-up manner.
C++
#include <bits/stdc++.h>
using namespace std;
int count(int coins[], int n, int sum)
{
    int i, j, x, y;
    // table[i][j] stores the count for value i using the
    // first j+1 coins; it is constructed in bottom up manner
    int table[sum + 1][n];
    // Base case: sum 0 can be made in exactly one way
    for (i = 0; i < n; i++)
        table[0][i] = 1;
    // Fill the rest of the table entries
    // in bottom up manner
    for (i = 1; i < sum + 1; i++) {
        for (j = 0; j < n; j++) {
            // Count of solutions including coins[j]
            x = (i - coins[j] >= 0) ? table[i - coins[j]][j] : 0;
            // Count of solutions excluding coins[j]
            y = (j >= 1) ? table[i][j - 1] : 0;
            // total count
            table[i][j] = x + y;
        }
    }
    return table[sum][n - 1];
}
// Driver Code
int main()
{
    int coins[] = { 1, 2, 3 };
    int sum = 4;
    cout << count(coins, 3, sum);
    return 0;
}
// by Akanksha Rai(Abby_akku)
Output
4
Time Complexity: O(n * sum)
Following is a simplified version of the above tabulation method. The auxiliary space required here is O(sum) only.
C++
#include <bits/stdc++.h>
using namespace std;
int count(int coins[], int n, int sum)
{
    // table[j] stores the number of solutions for value j
    int table[sum + 1];
    memset(table, 0, sizeof(table));
    // Base case (given value 0)
    table[0] = 1;
    // Pick all coins one by one and update table[]
    for (int i = 0; i < n; i++)
        for (int j = coins[i]; j <= sum; j++)
            table[j] += table[j - coins[i]];
    return table[sum];
}
Output:
4
C++
#include <bits/stdc++.h>
using namespace std;
// Memoized count: dp[n][v] caches the result for the
// first n coins and value v
int coinchange(vector<int>& a, int v, int n,
               vector<vector<int>>& dp)
{
    if (v == 0)
        return dp[n][v] = 1;
    if (n == 0)
        return 0;
    if (dp[n][v] != -1)
        return dp[n][v];
    if (a[n - 1] <= v) {
        // Include at least one coin a[n-1], or exclude it
        return dp[n][v] = coinchange(a, v - a[n - 1], n, dp)
                          + coinchange(a, v, n - 1, dp);
    }
    return dp[n][v] = coinchange(a, v, n - 1, dp);
}
int32_t main()
{
    int tc = 1;
    while (tc--) {
        int n, v;
        n = 3, v = 4;
        vector<int> a = { 1, 2, 3 };
        vector<vector<int>> dp(n + 1,
                               vector<int>(v + 1, -1));
        cout << coinchange(a, v, n, dp) << "\n";
    }
    return 0;
}
Output
4
Given two strings str1 and str2 and below operations that can be performed on str1. Find minimum
number of edits (operations) required to convert 'str1' into 'str2'.
1. Insert
2. Remove
3. Replace
Examples:
Input: str1 = "sunday", str2 = "saturday"
Output: 3
Explanation: The last three and first characters are the same; we need to convert "un" to "atur", which takes 3 operations (replace 'n' with 'r', insert 't', insert 'a').
1. If last characters of two strings are same, nothing much to do. Ignore last characters and get
count for remaining strings. So we recur for lengths m-1 and n-1.
2. Else (if last characters are not same), we consider all three operations on the last character of the first string, recursively compute the minimum cost for each of the three operations, and take the minimum of the three values.
C++
#include <bits/stdc++.h>
using namespace std;
int editDist(string str1, string str2, int m, int n)
{
    // If first string is empty, insert all of second
    if (m == 0)
        return n;
    // If second string is empty, remove all of first
    if (n == 0)
        return m;
    // If last characters are same, ignore them and
    // recur for the remaining strings.
    if (str1[m - 1] == str2[n - 1])
        return editDist(str1, str2, m - 1, n - 1);
    // Otherwise consider all three operations
    return 1
           + min({ editDist(str1, str2, m, n - 1),      // Insert
                   editDist(str1, str2, m - 1, n),      // Remove
                   editDist(str1, str2, m - 1,
                            n - 1) });                  // Replace
}
// Driver code
int main()
{
    // example input (original strings not shown)
    string str1 = "sunday", str2 = "saturday";
    cout << editDist(str1, str2, str1.length(),
                     str2.length());
    return 0;
}
Output
3
The time complexity of the above solution is exponential. In the worst case, we may end up doing O(3^m)
operations. The worst case happens when none of the characters of the two strings match. Below is a
recursive call diagram for the worst case.
We can see that many subproblems are solved again and again; for example, eD(2, 2) is called three
times. Since the same subproblems are called again, this problem has the Overlapping Subproblems
property. So the Edit Distance problem has both properties (optimal substructure and overlapping subproblems) of a dynamic programming
problem. Like other typical Dynamic Programming (DP) problems, recomputations of the same
subproblems can be avoided by constructing a temporary array that stores the results of subproblems.
C++
#include <bits/stdc++.h>
using namespace std;

int editDistDP(string str1, string str2, int m, int n)
{
    // dp[i][j] = edit distance between the first i characters
    // of str1 and the first j characters of str2
    vector<vector<int> > dp(m + 1, vector<int>(n + 1));
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0)
                dp[i][j] = j; // insert all of str2
            else if (j == 0)
                dp[i][j] = i; // remove all of str1
            else if (str1[i - 1] == str2[j - 1])
                dp[i][j] = dp[i - 1][j - 1];
            else
                dp[i][j]
                    = 1 + min({ dp[i][j - 1],        // Insert
                                dp[i - 1][j],        // Remove
                                dp[i - 1][j - 1] }); // Replace
        }
    return dp[m][n];
}

// Driver code
int main()
{
    string str1 = "sunday", str2 = "saturday";
    cout << editDistDP(str1, str2, str1.length(),
                       str2.length());
    return 0;
}
Output
3
Space-Optimized Solution: The above method requires O(m x n) space, which is not suitable when the
strings are longer than about 2000 characters, as a 2000 x 2000 2D array is already at the practical limit. To
fill a row in the DP array we require only the previous row. For example, while filling row i = 10
of the DP array we require only the values of row 9. So we simply create a DP array of 2 x (str1 length).
This approach reduces the space complexity. Here is the C++ implementation of the above-
mentioned approach.
C++
#include <bits/stdc++.h>
using namespace std;

void EditDistDP(string str1, string str2)
{
    int len1 = str1.length(), len2 = str2.length();

    // Keep only two rows
    // of previous computations
    int DP[2][len1 + 1];
    memset(DP, 0, sizeof(DP));

    // Base case: deleting the first i
    // characters
    for (int i = 0; i <= len1; i++)
        DP[0][i] = i;

    for (int i = 1; i <= len2; i++)
        for (int j = 0; j <= len1; j++) {
            if (j == 0)
                DP[i % 2][j] = i;
            else if (str1[j - 1] == str2[i - 1])
                DP[i % 2][j] = DP[(i - 1) % 2][j - 1];
            else // use (i - 1) % 2
                 // to get the previous row
                DP[i % 2][j] = 1 + min({ DP[(i - 1) % 2][j],
                                         DP[i % 2][j - 1],
                                         DP[(i - 1) % 2][j - 1] });
        }
    cout << DP[len2 % 2][len1] << endl;
}

// Driver program
int main()
{
    string str1 = "food", str2 = "money";
    EditDistDP(str1, str2);
    return 0;
}
Output
4
C++
#include <bits/stdc++.h>
using namespace std;

int minDis(string s1, string s2, int n, int m,
           vector<vector<int> >& dp)
{
    if (n == 0)
        return m;
    if (m == 0)
        return n;
    if (dp[n][m] != -1)
        return dp[n][m];
    if (s1[n - 1] == s2[m - 1])
        return dp[n][m] = minDis(s1, s2, n - 1, m - 1, dp);
    else {
        return dp[n][m]
               = 1 + min({ minDis(s1, s2, n, m - 1, dp),
                           minDis(s1, s2, n - 1, m, dp),
                           minDis(s1, s2, n - 1, m - 1, dp) });
    }
}

// Driver program
int main()
{
    string str1 = "sunday", str2 = "saturday";
    int n = str1.length(), m = str2.length();
    vector<vector<int> > dp(n + 1, vector<int>(m + 1, -1));
    cout << minDis(str1, str2, n, m, dp);
    return 0;
}
Output
3
Applications: There are many practical applications of the edit distance algorithm; refer to the Lucene API for a
sample. Another example: display all the words in a dictionary that are in near proximity to a given
(incorrectly spelled) word.
The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of
a given sequence such that all elements of the subsequence are sorted in increasing order. For
example, the length of LIS for {10, 22, 9, 33, 21, 50, 41, 60, 80} is 6 and LIS is {10, 22, 33, 50, 60, 80}.
Examples:
Method 1: Recursion.
Optimal Substructure: Let arr[0..n-1] be the input array and L(i) be the length of the LIS ending at
index i such that arr[i] is the last element of the LIS. Then L(i) can be written as:
L(i) = 1 + max( L(j) ) where 0 <= j < i and arr[j] < arr[i]; or
L(i) = 1, if no such j exists.
To find the LIS for a given array, we need to return max(L(i)) where 0 <= i < n.
Formally, the length of the longest increasing subsequence ending at index i, will be 1 greater than
the maximum of lengths of all longest increasing subsequences ending at indices before i, where
arr[j] < arr[i] (j < i).
Thus, we see the LIS problem satisfies the optimal substructure property as the main problem can be
solved using solutions to subproblems.
The recursive tree given below will make the approach clearer:
          f(4)
        /  |  \
    f(1) f(2) f(3)  {f(3) = 1, since the arr values at f(2) and f(1) are > the arr value at f(3)}
          |
        f(1)        {f(1) = 1}
C++
/* A Naive C++ recursive implementation
   of LIS problem */
#include <iostream>
using namespace std;

/* Returns the length of the LIS ending with arr[n-1];
   the overall maximum is tracked via max_ref */
int _lis(int arr[], int n, int* max_ref)
{
    /* Base case */
    if (n == 1)
        return 1;
    int res, max_ending_here = 1;
    /* Recursively get all LIS ending with
       arr[0], arr[1], ..., arr[n-2] */
    for (int i = 1; i < n; i++) {
        res = _lis(arr, i, max_ref);
        if (arr[i - 1] < arr[n - 1]
            && res + 1 > max_ending_here)
            max_ending_here = res + 1;
    }
    // Update the overall maximum if needed
    if (*max_ref < max_ending_here)
        *max_ref = max_ending_here;
    return max_ending_here;
}

// Wrapper for _lis()
int lis(int arr[], int n)
{
    int max = 1;
    _lis(arr, n, &max);
    // returns max
    return max;
}

int main()
{
    int arr[] = { 10, 22, 9, 33, 21, 50, 41, 60 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Length of lis is " << lis(arr, n) << "\n";
    return 0;
}
Output
Length of lis is 5
Complexity Analysis:
• Time Complexity: The time complexity of this recursive approach is exponential as there is a
case of overlapping subproblems as explained in the recursive tree diagram above.
• Auxiliary Space: O(1). No external space used for storing values apart from the internal stack
space.
Iteration-wise simulation (for arr[] = {3, 10, 2, 11}, 1-based indexing):
1. arr[2] > arr[1] {LIS[2] = max(LIS[2], LIS[1]+1) = 2}
2. arr[3] < arr[1] {No change}
3. arr[3] < arr[2] {No change}
4. arr[4] > arr[1] {LIS[4] = max(LIS[4], LIS[1]+1) = 2}
5. arr[4] > arr[2] {LIS[4] = max(LIS[4], LIS[2]+1) = 3}
6. arr[4] > arr[3] {LIS[4] = max(LIS[4], LIS[3]+1) = 3}
We can avoid recomputation of subproblems by using tabulation as shown in the below code:
C++
/* Dynamic Programming C++ implementation
   of LIS problem */
#include <bits/stdc++.h>
using namespace std;

int lis(int arr[], int n)
{
    int lis[n];
    lis[0] = 1;
    /* Compute LIS values in
       bottom up manner */
    for (int i = 1; i < n; i++) {
        lis[i] = 1;
        for (int j = 0; j < i; j++)
            if (arr[i] > arr[j] && lis[i] < lis[j] + 1)
                lis[i] = lis[j] + 1;
    }
    // Return maximum value in lis[]
    return *max_element(lis, lis + n);
}

int main()
{
    int arr[] = { 10, 22, 9, 33, 21, 50, 41, 60 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Length of lis is " << lis(arr, n) << "\n";
    return 0;
}
Output
Length of lis is 5
Complexity Analysis:
Method 3 : Memoization DP
We can see that there are many subproblems in the above recursive solution which are solved again
and again. So this problem has the Overlapping Subproblems property, and recomputation of the same
subproblems can be avoided by using Memoization.
C++
/* Memoization C++ implementation
   of LIS problem */
#include <bits/stdc++.h>
using namespace std;

int f(int idx, int prev_idx, int n, int a[],
      vector<vector<int> >& dp)
{
    if (idx == n) {
        return 0;
    }
    if (dp[idx][prev_idx + 1] != -1) {
        return dp[idx][prev_idx + 1];
    }
    int notTake = f(idx + 1, prev_idx, n, a, dp);
    int take = 0;
    if (prev_idx == -1 || a[idx] > a[prev_idx])
        take = 1 + f(idx + 1, idx, n, a, dp);
    return dp[idx][prev_idx + 1] = max(take, notTake);
}

// Function to find length of longest increasing
// subsequence.
int longestSubsequence(int n, int a[])
{
    vector<vector<int> > dp(n + 1, vector<int>(n + 1, -1));
    return f(0, -1, n, a, dp);
}

int main()
{
    int a[] = { 3, 10, 2, 11 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Length of lis is " << longestSubsequence(n, a);
    return 0;
}
Output
Length of lis is 3
Complexity Analysis:
Given an array of random numbers. Find longest increasing subsequence (LIS) in the array. I know
many of you might have read recursive and dynamic programming (DP) solutions. There are few
requests for O(N log N) algo in the forum posts.
For the time being, forget about recursive and DP solutions. Let us take small samples and extend the
solution to large instances. Even though it may look complex at first time, once if we understood the
logic, coding is simple.
Consider an input array A = {2, 5, 3}. I will extend the array during explanation.
By observation we know that the LIS is either {2, 3} or {2, 5}. Note that I am considering only strictly
increasing sequences.
Let us add two more elements, say 7, 11 to the array. These elements will extend the existing
sequences. Now the increasing sequences are {2, 3, 7, 11} and {2, 5, 7, 11} for the input array {2, 5, 3,
7, 11}.
Further, we add one more element, say 8 to the array i.e. input array becomes {2, 5, 3, 7, 11, 8}. Note
that the latest element 8 is greater than smallest element of any active sequence (will discuss shortly
about active sequences). How can we extend the existing sequences with 8? First of all, can 8 be part
of LIS? If yes, how? If we want to add 8, it should come after 7 (by replacing 11).
Since the approach is offline (what we mean by offline?), we are not sure whether adding 8 will
extend the series or not. Assume there is 9 in the input array, say {2, 5, 3, 7, 11, 8, 7, 9 ...}. We can
replace 11 with 8, as there is potentially best candidate (9) that can extend the new series {2, 3, 7, 8}
or {2, 5, 7, 8}.
Our observation is, assume that the end element of largest sequence is E. We can add (replace)
current element A[i] to the existing sequence if there is an element A[j] (j > i) such that E < A[i] < A[j]
or (E > A[i] < A[j] - for replace). In the above example, E = 11, A[i] = 8 and A[j] = 9.
In case of our original array {2, 5, 3}, note that we face same situation when we are adding 3 to
increasing sequence {2, 5}. I just created two increasing sequences to make explanation simple.
Instead of two sequences, 3 can replace 5 in the sequence {2, 5}.
I know it will be confusing, I will clear it shortly!
The question is, when will it be safe to add or replace an element in the existing sequence?
Let us consider another sample A = {2, 5, 3}. Say, the next element is 1. How can it extend the current
sequences {2, 3} or {2, 5}? Obviously, it can't extend either. Yet, there is a potential that the new
smallest element can be the start of an LIS. To make it clear, consider the array {2, 5, 3, 1, 2, 3, 4, 5, 6}.
Making 1 the start of a new sequence will eventually create the longest sequence.
The observation is, when we encounter a new smallest element in the array, it can be a potential
candidate to start a new sequence.
From the observations, we need to maintain lists of increasing sequences.
In general, we have set of active lists of varying length. We are adding an element A[i] to these lists.
We scan the lists (for end elements) in decreasing order of their length. We will verify the end
elements of all the lists to find a list whose end element is smaller than A[i] (floor value).
Our strategy is determined by the following conditions:
1. If A[i] is the smallest among all end candidates of active lists, start a new active list of length 1.
2. If A[i] is the largest among all end candidates of active lists, clone the largest active list and extend it by A[i].
3. If A[i] is in between, find the list with the largest end element that is smaller than A[i]. Clone and extend this list by A[i], and discard all other lists of the same length as the modified list.
Note that at any instance during our construction of active lists, the following condition is
maintained:
"end element of smaller list is smaller than end elements of larger lists".
It will be clear with an example, let us take example from wiki {0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3,
11, 7, 15}.
0.
-----------------------------------------------------------------------------
0.
0, 8.
-----------------------------------------------------------------------------
0.
0, 4.
0, 8. Discarded
-----------------------------------------------------------------------------
0.
0, 4.
0, 4, 12.
-----------------------------------------------------------------------------
0.
0, 2.
0, 4. Discarded.
0, 4, 12.
-----------------------------------------------------------------------------
0.
0, 2.
0, 2, 10.
0, 4, 12. Discarded.
-----------------------------------------------------------------------------
0.
0, 2.
0, 2, 6.
0, 2, 10. Discarded.
-----------------------------------------------------------------------------
0.
0, 2.
0, 2, 6.
0, 2, 6, 14.
-----------------------------------------------------------------------------
0.
0, 1.
0, 2. Discarded.
0, 2, 6.
0, 2, 6, 14.
-----------------------------------------------------------------------------
0.
0, 1.
0, 2, 6.
0, 2, 6, 9.
0, 2, 6, 14. Discarded.
-----------------------------------------------------------------------------
0.
0, 1.
0, 1, 5.
0, 2, 6. Discarded.
0, 2, 6, 9.
-----------------------------------------------------------------------------
0.
0, 1.
0, 1, 5.
0, 2, 6, 9.
0, 2, 6, 9, 13.
-----------------------------------------------------------------------------
0.
0, 1.
0, 1, 3.
0, 1, 5. Discarded.
0, 2, 6, 9.
0, 2, 6, 9, 13.
-----------------------------------------------------------------------------
0.
0, 1.
0, 1, 3.
0, 2, 6, 9.
0, 2, 6, 9, 11.
0, 2, 6, 9, 13. Discarded.
-----------------------------------------------------------------------------
0.
0, 1.
0, 1, 3.
0, 1, 3, 7.
0, 2, 6, 9. Discarded.
0, 2, 6, 9, 11.
----------------------------------------------------------------------------
0.
0, 1.
0, 1, 3.
0, 1, 3, 7.
0, 2, 6, 9, 11.
----------------------------------------------------------------------------
0.
0, 1.
0, 1, 3.
0, 1, 3, 7.
0, 2, 6, 9, 11.
0, 2, 6, 9, 11, 15.
----------------------------------------------------------------------------
It is required to understand the above strategy to devise an algorithm. Also, ensure we have maintained
the condition, "end element of smaller list is smaller than end elements of larger lists". Try with a few
other examples before reading further. It is important to understand what is happening to the end
elements.
Algorithm:
Querying the length of the longest list is fairly easy. Note that we are dealing with end elements only. We
need not maintain all the lists; we can store the end elements in an array. Discarding a list can be
simulated with replacement, and extending a list is analogous to appending an element to the array.
We will use an auxiliary array to keep the end elements. The maximum length of this array is that of the
input. In the worst case the array is divided into N lists of size one (note that this doesn't lead to worst
case complexity). To discard an element, we will trace the ceil value of A[i] in the auxiliary array (again,
observe the end elements in your rough work), and replace that ceil value with A[i]. We extend a list by
appending the element to the auxiliary array. We also maintain a counter to keep track of the auxiliary array's length.
Bonus: You have learnt Patience Sorting technique partially :)
Here is a proverb, "Tell me and I will forget. Show me and I will remember. Involve me and I will
understand." So, pick a suit from deck of cards. Find the longest increasing sub-sequence of cards
from the shuffled suit. You will never forget the approach. :)
Update - 17 July, 2016: Quite impressive responses from the readers and a few sites referring to the post;
feeling happy as my hard work is helping others. It looks like readers are not doing any homework prior
to posting comments. Requesting to run through some examples after reading the article, and please
do your work on paper (don't use an editor/compiler). The request is to help yourself. Professing to 'know'
is different from real understanding (no disrespect). Given below was my personal experience.
Initial content preparation took me roughly 6 hours. But it was a good lesson. I finished the initial code
in an hour. When I started writing content to explain to the reader, I realized I didn't understand the cases.
I took my notebook (I have a habit of maintaining a bound notebook to keep track of my rough work),
and after a few hours I had filled nearly 15 pages of rough work. Whatever content you are seeing in
the gray colored example is from these pages. All the thought process for the solution was triggered by a
note in the book 'Introduction to Algorithms' by Udi Manber; I strongly recommend practicing from the
book.
I suspect many readers might not get the logic behind CeilIndex (binary search). I leave it as an
exercise to the reader to understand how it works. Run through a few examples on paper. I realized I
have already covered the algorithm in another post.
Update - 5th August, 2016:
The following link worth referring after you do your work. I got to know the link via my recently
created Disqus profile. The link has explanation of approach mentioned in the Wiki.
http://stackoverflow.com/questions/2631726/how-to-determine-the-longest-increasing-
subsequence-using-dynamic-programming
Given below is code to find length of LIS (updated to C++11 code, no C-style arrays),
C++
#include <iostream>
#include <vector>

// Binary search: returns the index of the ceil of key in v[l..r]
static int CeilIndex(std::vector<int>& v, int l, int r, int key)
{
    while (r - l > 1) {
        int m = l + (r - l) / 2;
        if (v[m] >= key)
            r = m;
        else
            l = m;
    }
    return r;
}

int LongestIncreasingSubsequenceLength(std::vector<int>& v)
{
    if (v.size() == 0)
        return 0;

    std::vector<int> tail(v.size(), 0);
    int length = 1; // always points to an empty slot in tail

    tail[0] = v[0];
    for (size_t i = 1; i < v.size(); i++) {
        if (v[i] < tail[0])
            // new smallest value
            tail[0] = v[i];
        else if (v[i] > tail[length - 1])
            // v[i] extends the largest subsequence
            tail[length++] = v[i];
        else
            // v[i] replaces the ceil value in tail
            tail[CeilIndex(tail, -1, length - 1, v[i])] = v[i];
    }
    return length;
}

int main()
{
    std::vector<int> v{ 2, 5, 3, 7, 11, 8, 10, 13, 6 };
    std::cout << "Length of Longest Increasing Subsequence is "
              << LongestIncreasingSubsequenceLength(v);
    return 0;
}
Output:
Length of Longest Increasing Subsequence is 6
Complexity:
The loop runs for N elements. In the worst case (what is the worst case input?), we may end up querying the
ceil value using binary search (log i) for many A[i].
Therefore, T(n) < O(log N!) = O(N log N). Analyse to ensure that the upper and lower bounds are
also O(N log N). The complexity is Θ(N log N).
Exercises:
1. Design an algorithm to construct the longest increasing list. Also, model your solution using DAGs.
2. Design an algorithm to construct all increasing lists of equal longest size.
3. Is the above algorithm an online algorithm?
4. Design an algorithm to construct the longest decreasing list.
Alternate implementation in various languages using their built in binary search functions are
given below:
CPP
#include <bits/stdc++.h>

int LongestIncreasingSubsequenceLength(std::vector<int>& v)
{
    if (v.size() == 0)
        return 0;
    std::vector<int> tail(v.size(), 0);
    int length = 1; // always points to an empty slot in tail
    tail[0] = v[0];
    for (size_t i = 1; i < v.size(); i++) {
        // first element in tail[0..length-1] that is >= v[i]
        auto it = std::lower_bound(tail.begin(),
                                   tail.begin() + length, v[i]);
        if (it == tail.begin() + length)
            tail[length++] = v[i]; // v[i] extends the largest list
        else
            *it = v[i]; // v[i] replaces the ceil value
    }
    return length;
}

int main()
{
    std::vector<int> v{ 2, 5, 3, 7, 11, 8, 10, 13, 6 };
    std::cout
        << LongestIncreasingSubsequenceLength(v);
    return 0;
}
Output:
6
Given a rod of length L, the task is to cut the rod in such a way that the total number of segments of
length p, q and r is maximized. The segments can only be of length p, q, and r.
Examples:
Input: l = 11, p = 2, q = 3, r = 5
Output: 5
Segments of 2, 2, 2, 2 and 3
Input: l = 7, p = 2, q = 5, r = 5
Output: 2
Segments of 2 and 5
Approach 1:
This can be visualized as a classical recursion problem, which further narrows down
to the memoization (top-down) method of Dynamic Programming. Initially, we have length l present
with us, and we have three size choices to cut from it: either we can make a cut of length p, or q,
or r. Say we made a cut of length p, so the remaining length would be l-p, and similarly cuts of
q and r leave remaining lengths l-q and l-r respectively. We will call the recursive function for the
remaining lengths, and at any subsequent instance we'll have the same three choices. We will store the
answers from all these recursive calls and take the maximum of them, +1, since at any instance we
make 1 cut in this particular call as well. Also, note that a recursive call is made if and
only if the available length is greater than the length we want to cut; i.e. suppose p=3, and after certain
recursive calls the available length is 2 only, so we can't cut this line in lengths of p anymore.
int func(int l, int p, int q, int r)
{
    if (l == 0)
        return 0;
    int a = INT_MIN, b = INT_MIN, c = INT_MIN;
    if (p <= l)
        a = func(l - p, p, q, r);
    if (q <= l)
        b = func(l - q, p, q, r);
    if (r <= l)
        c = func(l - r, p, q, r);
    return 1 + max({ a, b, c });
}
One can clearly observe that at each call, the given length (4 initially) is divided into 3 different
subparts. Also, we can see that the recursion is being repeated for certain entries (the red arrow
represents a repetitive call for l=2, yellow for l=3 and blue for l=1). Therefore, we can memoize the
results in any container or array, so that repetition of the same recursive calls is avoided.
int dp[10005]; // initialized to -1 before the first call

int func(int l, int p, int q, int r)
{
    if (l == 0)
        return 0;
    if (dp[l] != -1)
        return dp[l];
    int a = INT_MIN, b = INT_MIN, c = INT_MIN;
    if (p <= l)
        a = func(l - p, p, q, r);
    if (q <= l)
        b = func(l - q, p, q, r);
    if (r <= l)
        c = func(l - r, p, q, r);
    return dp[l] = 1 + max({ a, b, c });
}
Let's now look at the full implementation of the above approach:
C++
// Memoization DP
#include <bits/stdc++.h>
using namespace std;

int dp[10005];

int func(int l, int p, int q, int r)
{
    if (l == 0)
        return 0;
    if (dp[l] != -1)
        return dp[l];
    int a = INT_MIN, b = INT_MIN, c = INT_MIN;
    if (p <= l)
        a = func(l - p, p, q, r);
    if (q <= l)
        b = func(l - q, p, q, r);
    if (r <= l)
        c = func(l - r, p, q, r);
    return dp[l] = 1 + max({ a, b, c });
}

int maximizeTheCuts(int l, int p, int q, int r)
{
    memset(dp, -1, sizeof(dp));
    int ans = func(l, p, q, r);
    if (ans < 0)
        return 0; // If returned answer is negative, that means cuts are not possible
    return ans;
}

int main()
{
    int l, p, q, r;
    cin >> l;
    cin >> p >> q >> r;
    cout << "THE MAXIMUM NUMBER OF SEGMENTS THAT CAN BE CUT OF LENGTH p,q & r FROM A ROD OF LENGTH l are " << maximizeTheCuts(l, p, q, r) << endl;
    return 0;
}
Time Complexity : O(n) where n is the length of rod or line segment that has to be cut.
Space Complexity : O(n) where n is the length of rod or line segment that has to be cut.
Approach 2:
As the solution for the maximum number of cuts that can be made in a given length depends on the
maximum number of cuts previously made in shorter lengths, this question can be solved by the
bottom-up approach of Dynamic Programming. Suppose we are given a length 'l'. For finding the maximum
number of cuts that can be made in length 'l', find the number of cuts made in the shorter
previous lengths 'l-p', 'l-q', 'l-r' respectively. The required answer is max(DP[l-p], DP[l-q], DP[l-r]) + 1,
as one more cut is needed after this to reach length 'l'. So, for solving this problem for a
given length, find the maximum number of cuts that can be made in lengths ranging from '1' to 'l'.
Example:
l = 11, p = 2, q = 3, r = 5
Analysing lengths from 1 to 11:
Algorithm:
1. Initialise an array DP[0..l] with -1 and set DP[0]=0
2. Iterate i from 0 to l
3. If DP[i]=-1, it means length i cannot be composed of the given segments p,q,r, so continue;
4. DP[i+p]=max(DP[i+p],DP[i]+1)
5. DP[i+q]=max(DP[i+q],DP[i]+1)
6. DP[i+r]=max(DP[i+r],DP[i]+1)
7. print DP[l]
Pseudo Code:
DP[l+1]={-1}
DP[0]=0
for(i from 0 to l)
if(DP[i]==-1)
continue
DP[i+p]=max(DP[i+p],DP[i]+1)
DP[i+q]=max(DP[i+q],DP[i]+1)
DP[i+r]=max(DP[i+r],DP[i]+1)
print(DP[l])
Implementation:
C++
#include <bits/stdc++.h>
using namespace std;

// Function to find the maximum number
// of segments possible
int findMaximum(int l, int p, int q, int r)
{
    // dp[i] stores the maximum number of segments
    // a rod of length i can be cut into
    vector<int> dp(l + 1, -1);
    dp[0] = 0;
    for (int i = 0; i <= l; i++) {
        if (dp[i] == -1)
            continue;
        // if a segment of p is possible
        if (i + p <= l)
            dp[i + p] = max(dp[i + p], dp[i] + 1);
        // if a segment of q is possible
        if (i + q <= l)
            dp[i + q] = max(dp[i + q], dp[i] + 1);
        // if a segment of r is possible
        if (i + r <= l)
            dp[i + r] = max(dp[i + r], dp[i] + 1);
    }
    if (dp[l] == -1) {
        dp[l] = 0;
    }
    return dp[l];
}

// Driver Code
int main()
{
    int l = 11, p = 2, q = 3, r = 5;
    // Calling Function
    cout << findMaximum(l, p, q, r) << endl;
    return 0;
}
Output
5
Complexity Analysis:
Note: This problem can also be thought of as a minimum coin change problem, because we are given
a certain length to acquire, which plays the same role as the amount whose minimum change is
needed. Here p, q, r play the same role as the coin denominations. So the length is the same as
the amount and p, q, r are the same as the denominations; we need to change only one condition:
instead of finding the minimum, we need to find the maximum, and we will get the answer. As the
minimum coin change problem is a basic dynamic programming question, it will help in solving
this question too.
for (int i = 1; i <= l; i++)
    for (int j = 0; j < 3; j++)
        if (i >= a[j] && dp[i - a[j]] != -1)
            dp[i] = max(dp[i], 1 + dp[i - a[j]]);
Given a value V, if we want to make a change for V cents, and we have an infinite supply of each of C
= { C1, C2, .., Cm} valued coins, what is the minimum number of coins to make the change? If it's not
possible to make a change, print -1.
Examples:
This problem is a variation of the problem discussed Coin Change Problem. Here instead of finding
the total number of possible solutions, we need to find the solution with the minimum number of
coins.
The minimum number of coins for a value V can be computed using the below recursive formula:
If V == 0, then 0 coins are required.
If V > 0, then minCoins(V) = min{ 1 + minCoins(V - coin[i]) } for all i where coin[i] <= V.
C++
#include<bits/stdc++.h>
using namespace std;

// m is the size of the coins array
int minCoins(int coins[], int m, int V)
{
    // base case
    if (V == 0) return 0;

    // Initialize result
    int res = INT_MAX;

    // Try every coin that has a smaller value than V
    for (int i = 0; i < m; i++) {
        if (coins[i] <= V) {
            int sub_res = minCoins(coins, m, V - coins[i]);
            // Check for INT_MAX to avoid overflow, and see
            // if the result can be minimized
            if (sub_res != INT_MAX && sub_res + 1 < res)
                res = sub_res + 1;
        }
    }
    return res;
}

int main()
{
    int coins[] = { 9, 6, 5, 1 };
    int m = sizeof(coins)/sizeof(coins[0]);
    int V = 11;
    cout << "Minimum coins required is " << minCoins(coins, m, V);
    return 0;
}
Output
Minimum coins required is 2
The time complexity of the above solution is exponential and space complexity is way greater than
O(n). If we draw the complete recursion tree, we can observe that many subproblems are solved
again and again. For example, when we start from V = 11, we can reach 6 by subtracting one 5 times
and by subtracting 5 one time. So the subproblem for 6 is called twice.
Since the same subproblems are called again, this problem has the Overlapping Subproblems
property. So the min coins problem has both properties (optimal substructure and overlapping subproblems) of a dynamic programming
problem. Like other typical Dynamic Programming (DP) problems, recomputations of the same
subproblems can be avoided by constructing a temporary array table[] in a bottom-up manner.
Below is Dynamic Programming based solution.
C++
#include<bits/stdc++.h>
using namespace std;

int minCoins(int coins[], int m, int V)
{
    // table[i] stores the minimum coins required for value i
    int table[V+1];
    table[0] = 0;
    for (int i = 1; i <= V; i++)
        table[i] = INT_MAX;

    // Compute minimum coins required for all
    // values from 1 to V
    for (int i = 1; i <= V; i++)
        for (int j = 0; j < m; j++)
            if (coins[j] <= i) {
                int sub_res = table[i - coins[j]];
                if (sub_res != INT_MAX && sub_res + 1 < table[i])
                    table[i] = sub_res + 1;
            }

    if(table[V]==INT_MAX)
        return -1;
    return table[V];
}

int main()
{
    int coins[] = { 9, 6, 5, 1 };
    int m = sizeof(coins)/sizeof(coins[0]);
    int V = 11;
    cout << "Minimum coins required is " << minCoins(coins, m, V);
    return 0;
}
Output
Minimum coins required is 2
Given an array arr[] where each element represents the max number of steps that can be made
forward from that index. The task is to find the minimum number of jumps to reach the end of the
array starting from index 0. If the end isn’t reachable, return -1.
Examples:
Minimum number of jumps to reach end using Dynamic Programming from left to right:
For example, in the array arr[] = {1, 3, 5, 8, 9, 2, 6, 7, 6, 8, 9}, minJumps(3, 9) will be called two times, as
arr[3] is reachable from arr[1] and arr[2]. So this problem has both properties (optimal
substructure and overlapping subproblems) of Dynamic Programming
• Create a jumps[] array from left to right such that jumps[i] indicates the minimum number of
jumps needed to reach arr[i] from arr[0]. Initially set jumps[i] to INT_MAX.
• To fill the jumps[] array, run a nested loop: the outer loop counter is i and the inner loop counter is j.
o For each j < i, if i <= j + arr[j] and jumps[j] is not INT_MAX, then set jumps[i] to the minimum
of jumps[i] and jumps[j] + 1.
• Return jumps[n-1].
C++
#include <bits/stdc++.h>
using namespace std;

int minJumps(int arr[], int n)
{
    // jumps[i] = minimum jumps needed to reach arr[i] from arr[0]
    int jumps[n];
    int i, j;

    if (n == 0 || arr[0] == 0)
        return INT_MAX;

    jumps[0] = 0;
    for (i = 1; i < n; i++) {
        jumps[i] = INT_MAX;
        for (j = 0; j < i; j++) {
            if (i <= j + arr[j] && jumps[j] != INT_MAX) {
                jumps[i] = min(jumps[i], jumps[j] + 1);
                break;
            }
        }
    }
    return jumps[n - 1];
}

// Driver code
int main()
{
    int arr[] = { 1, 3, 5, 8, 9, 2, 6, 7, 6, 8, 9 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Minimum number of jumps to reach end is "
         << minJumps(arr, n);
    return 0;
}
Output
Minimum number of jumps to reach end is 3
Build the jumps[] array from right to left such that jumps[i] indicates the minimum number of jumps
needed to reach arr[n-1] from arr[i]. Finally, we return jumps[0]. Use Dynamic Programming in a
similar way to the above method.
C++
#include <bits/stdc++.h>
using namespace std;

int minJumps(int arr[], int n)
{
    // jumps[i] = minimum jumps needed to reach arr[n-1] from arr[i]
    int jumps[n];
    int min;

    // 0 jumps needed to reach the last element from itself
    jumps[n - 1] = 0;

    for (int i = n - 2; i >= 0; i--) {
        // If arr[i] is 0 then arr[n-1] can't be reached
        // from here
        if (arr[i] == 0)
            jumps[i] = INT_MAX;

        // If we can directly reach the end from here
        else if (arr[i] >= n - i - 1)
            jumps[i] = 1;

        // Otherwise check all reachable points and the jumps
        // needed from those points
        else {
            min = INT_MAX;
            for (int j = i + 1; j < n && j <= arr[i] + i;
                 j++) {
                if (min > jumps[j])
                    min = jumps[j];
            }
            // Handle overflow
            if (min != INT_MAX)
                jumps[i] = min + 1;
            else
                jumps[i] = min; // end not reachable from i
        }
    }
    return jumps[0];
}

int main()
{
    int arr[] = { 1, 3, 5, 8, 9, 2, 6, 7, 6, 8, 9 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Minimum number of jumps to reach end is "
         << minJumps(arr, n);
    return 0;
}
Output
Minimum number of jumps to reach end is 3
Given weights and values of n items, put these items in a knapsack of capacity W to get the
maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1] and
wt[0..n-1] which represent values and weights associated with n items respectively. Also given an
integer W which represents knapsack capacity, find out the maximum value subset of val[] such that
sum of the weights of this subset is smaller than or equal to W. You cannot break an item, either pick
the complete item or don't pick it (0-1 property).
Therefore, the maximum value that can be obtained from 'n' items is the max of the following two
values.
1. Maximum value obtained by n-1 items and W weight (excluding nth item).
2. Value of nth item plus maximum value obtained by n-1 items and W minus the weight of the
nth item (including nth item).
If the weight of 'nth' item is greater than 'W', then the nth item cannot be included and Case 1 is the
only possibility.
C++
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that can be
// put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    // Base Case
    if (n == 0 || W == 0)
        return 0;

    // If the nth item's weight exceeds W, it cannot be included
    if (wt[n - 1] > W)
        return knapSack(W, wt, val, n - 1);

    // Return the maximum of the two cases:
    // (1) nth item included, (2) nth item not included
    else
        return max(
            val[n - 1]
                + knapSack(W - wt[n - 1], wt, val, n - 1),
            knapSack(W, wt, val, n - 1));
}

// Driver code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output
220
It should be noted that the above function computes the same sub-problems again and again. See
the following recursion tree, K(1, 1) is being evaluated twice. The time complexity of this naive
recursive solution is exponential (2^n).
In the tree below, K() refers to the knapsack function with parameters (n, W),
shown for capacity W = 2 and 3 items of weight 1 each:
                    K(3, 2)
                  /         \
                /             \
           K(2, 2)           K(2, 1)
          /       \          /      \
         /         \        /        \
    K(1, 2)    K(1, 1)  K(1, 1)   K(1, 0)
    /    \     /    \    /    \
K(0, 2) K(0, 1) K(0, 1) K(0, 0) K(0, 1) K(0, 0)
Complexity Analysis:
Since subproblems are evaluated again, this problem has the Overlapping Subproblems property. So the
0-1 Knapsack problem has both properties (optimal substructure and overlapping subproblems) of a dynamic programming problem.
Approach: In the Dynamic Programming approach we will consider the same cases as mentioned in
the recursive approach. In a DP[][] table, let's consider all the possible capacities from '1' to 'W' as the
columns and the items that can be kept as the rows.
The state DP[i][j] will denote the maximum value achievable with capacity 'j' considering all items from '1 to i'. So if
we consider 'wi' (the weight of the 'ith' item), it can be filled in all columns which have 'weight values > wi'. Now
two possibilities can take place:
• Fill 'wi' in the given column.
• Do not fill 'wi' in the given column.
Now we have to take a maximum of these two possibilities, formally if we do not fill 'ith' weight in
'jth' column then DP[i][j] state will be same as DP[i-1][j] but if we fill the weight, DP[i][j] will be equal
to the value of 'wi'+ value of the column weighing 'j-wi' in the previous row. So we take the
maximum of these two possibilities to fill the current state. This visualisation will make the concept
clear:
Capacity=6
0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
1 0 10 10 10 10 10 10
2 0 10 15 25 25 25 25
3 0
Explanation:
For DP[2][3] we take the maximum of
(10, 15 + DP[1][3-2]) = 25
where 10 is the value when item 2 is not filled in,
and 15 + DP[1][3-2] is the value when it is.
0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
1 0 10 10 10 10 10 10
2 0 10 15 25 25 25 25
3 0 10 15 40 50 55 65
Explanation:
DP[3][4] = max(DP[2][4], 40 + DP[2][4-3]) = max(25, 40 + 10) = 50
DP[3][5] = max(DP[2][5], 40 + DP[2][5-3]) = max(25, 40 + 15) = 55
DP[3][6] = max(DP[2][6], 40 + DP[2][6-3]) = max(25, 40 + 25) = 65
C++
#include <bits/stdc++.h>
using namespace std;

int max(int a, int b)
{
    return (a > b) ? a : b;
}

// Returns the maximum value that can be
// put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    int i, w;
    vector<vector<int>> K(n + 1, vector<int>(W + 1));

    // Build table K[][] in bottom up manner
    for (i = 0; i <= n; i++) {
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;
            else if (wt[i - 1] <= w)
                K[i][w] = max(val[i - 1] +
                              K[i - 1][w - wt[i - 1]],
                              K[i - 1][w]);
            else
                K[i][w] = K[i - 1][w];
        }
    }
    return K[n][W];
}

// Driver Code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output
220
Complexity Analysis:
Scope for Improvement: We can use the same approach with optimized space complexity, keeping only two rows of the table.
C++
#include <bits/stdc++.h>
using namespace std;

// Same approach with optimized space
// complexity
int knapSack(int W, int wt[], int val[], int n)
{
    int i, w;
    // Use only 2 rows instead of the n+1 rows
    // of 2d array K
    int K[2][W + 1];
    for (i = 0; i <= n; i++) {
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i % 2][w] = 0;
            else if (wt[i - 1] <= w)
                K[i % 2][w] = max(
                    val[i - 1]
                        + K[(i - 1) % 2][w - wt[i - 1]],
                    K[(i - 1) % 2][w]);
            else
                K[i % 2][w] = K[(i - 1) % 2][w];
        }
    }
    return K[n % 2][W];
}

// Driver Code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Complexity Analysis:
C++
// 0-1 Knapsack using memoization (top-down
// dynamic programming)
#include <bits/stdc++.h>
using namespace std;

int knapSackRec(int W, int wt[], int val[], int i,
                int** dp)
{
    // base condition
    if (i < 0)
        return 0;
    if (dp[i][W] != -1)
        return dp[i][W];

    if (wt[i] > W) {
        // Store the value of the function call
        // stack in the table before returning
        dp[i][W] = knapSackRec(W, wt,
                               val, i - 1,
                               dp);
        return dp[i][W];
    }
    else {
        // Store the value in the table before returning
        dp[i][W] = max(val[i]
                           + knapSackRec(W - wt[i],
                                         wt, val,
                                         i - 1, dp),
                       knapSackRec(W, wt, val,
                                   i - 1, dp));
        return dp[i][W];
    }
}

int knapSack(int W, int wt[], int val[], int n)
{
    // double pointer to declare the
    // table dynamically
    int** dp;
    dp = new int*[n];
    for (int i = 0; i < n; i++)
        dp[i] = new int[W + 1];

    // initially fill the
    // table with -1
    for (int i = 0; i < n; i++)
        for (int j = 0; j < W + 1; j++)
            dp[i][j] = -1;
    return knapSackRec(W, wt, val, n - 1, dp);
}

// Driver Code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output
220
Complexity Analysis: