MODULE 2
FUNDAMENTAL ALGORITHMIC
STRATEGIES
ALGORITHMIC STRATEGIES
Algorithmic Strategies
Brute-Force
Greedy Technique
Dynamic Programming
Branch and Bound
Backtracking
Brute-Force
Straightforward methods of solving a problem that rely
on sheer computing power, trying every possibility
rather than using advanced techniques to improve efficiency.
Examples:
Padlock with 4 digits
Linear Search
Selection and Bubble sort
String Matching Problem
Travelling Salesman Problem (TSP): Given a set of cities and
distance between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and
returns to the starting point.
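The brute-force idea for TSP can be sketched by scoring every possible tour. This is a minimal Python sketch; the 4-city distance matrix below is an assumed example, not from the slides.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Brute force: score every tour that starts and ends at city 0."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):   # all (n-1)! orderings of the other cities
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best

# assumed example: 4 cities with symmetric distances
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # 80, via the tour 0 -> 1 -> 3 -> 2 -> 0
```

Note the factorial growth: with n cities there are (n-1)! tours to try, which is exactly the cost that the later strategies (dynamic programming, branch and bound) attack.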
Greedy Approach
Always makes the choice that seems to be the best at
that moment.
At each step, the choice that appears closest to an
optimum solution is taken.
Greedy algorithms try to find a locally optimal
solution, which may eventually lead to a globally
optimal solution.
So the problems where choosing the locally optimal option
also leads to the global solution are the best fit for Greedy.
Example: Counting Coins
Counting Coins
Problem: Reach a desired value using the fewest possible
coins; the greedy approach always picks the largest coin that
still fits. If we are provided coins of ₹ 1, 2, 5
and 10 and we are asked to count ₹ 18, then the greedy
procedure will be −
1 − Select one ₹ 10 coin, the remaining count is 8
2 − Then select one ₹ 5 coin, the remaining count is 3
3 − Then select one ₹ 2 coin, the remaining count is 1
4 − And finally, selecting one ₹ 1 coin solves the
problem
This seems to work fine: for this count we need to
pick only 4 coins. But if we slightly change the problem, the
same approach may not be able to produce the optimum
result.
Counting Coins Continue…
For a currency system with coins of value 1, 7 and 10,
counting coins for the value 18 is still optimum (10 + 7 + 1),
but for a count like 15 the greedy choice uses more coins than
necessary.
For example, the greedy approach will use 10 + 1 + 1 + 1 +
1 + 1, total 6 coins. Whereas the same problem could be
solved by using only 3 coins (7 + 7 + 1)
Hence, we may conclude that the greedy approach picks an
immediate optimized solution and may fail where global
optimization is a major concern.
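Both scenarios above can be checked with a short Python sketch of the greedy rule (always pick the largest coin that still fits):

```python
def greedy_coins(denominations, amount):
    """Greedy change-making: repeatedly pick the largest coin that fits."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:          # take as many of this coin as possible
            amount -= d
            coins.append(d)
    return coins

print(greedy_coins([1, 2, 5, 10], 18))  # [10, 5, 2, 1] -- 4 coins, optimal
print(greedy_coins([1, 7, 10], 15))     # [10, 1, 1, 1, 1, 1] -- 6 coins, but 7+7+1 needs only 3
```

The second call shows the failure mode from the slide: the greedy choice of 10 locks the algorithm out of the better 7 + 7 + 1 solution.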
Other Examples of Greedy Approach
Job Sequencing with Deadline
Knapsack Problem
Huffman Coding
Optimal Merge Pattern
Single Source Shortest Path
Minimum Cost Spanning Tree
Dynamic Programming
An algorithmic technique for solving an optimization
problem by breaking it down into simpler sub-problems
and utilizing the fact that the optimal solution to the
overall problem depends upon the optimal solution to its
sub-problems.
Wherever we see a recursive solution that has repeated
calls for the same inputs, we can optimize it using Dynamic
Programming.
The idea is to simply store the results of sub-problems,
so that we do not have to re-compute them when
needed later. This simple optimization reduces time
complexities from exponential to polynomial.
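For example, memoizing the naive recursive Fibonacci collapses its exponential call tree into one computation per sub-problem (a minimal Python sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each fib(k) is computed once and cached, so the O(2^n)
    recursion tree collapses to O(n) distinct sub-problems."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
print(fib(50))  # 12586269025 -- returns instantly; naive recursion would not
```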
Examples of Dynamic Programming
Fibonacci Series
Longest Common Sub-sequence (LCS)
Multistage Graph
Travelling Salesman Problem
All pairs shortest path
0/1 knapsack
Branch and Bound Approach
An algorithm design paradigm which is generally used
for solving combinatorial optimization problems.
These problems are typically exponential in terms of
time complexity and may require exploring all possible
permutations in worst case.
The Branch and Bound Algorithm technique solves these
problems relatively quickly.
The general idea of B&B is a BFS-like search for the
optimal solution, but not all nodes get expanded
(i.e., their children generated). Rather, a carefully
selected criterion determines which node to expand and
when, and another criterion tells the algorithm when an
optimal solution has been found.
Examples of B&B Approach
0-1 Integer Programming
Network Flow Problem.
Boolean Satisfiability Problem
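As a concrete sketch of "expand only promising nodes", here is best-first branch and bound for the 0/1 knapsack, a standard B&B illustration (it is the 0-1 integer program above with a single constraint). Each node's bound is the fractional-knapsack relaxation of the remaining items; the item data in the demo is an assumed example.

```python
import heapq

def knapsack_branch_and_bound(weights, values, capacity):
    """Best-first branch and bound for 0/1 knapsack. Nodes whose
    optimistic bound cannot beat the best value found are pruned."""
    n = len(weights)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]

    def bound(level, cur_w, cur_v):
        # optimistic estimate: fill the rest of the capacity greedily,
        # allowing a fraction of the first item that no longer fits
        remaining, est = capacity - cur_w, cur_v
        for i in range(level, n):
            if w[i] <= remaining:
                remaining -= w[i]
                est += v[i]
            else:
                est += v[i] * remaining / w[i]
                break
        return est

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, weight, value)
    while heap:
        neg_b, level, cur_w, cur_v = heapq.heappop(heap)
        if -neg_b <= best or level == n:
            continue                     # prune, or nothing left to branch on
        # branch 1: take item `level` if it fits
        if cur_w + w[level] <= capacity:
            nw, nv = cur_w + w[level], cur_v + v[level]
            best = max(best, nv)
            heapq.heappush(heap, (-bound(level + 1, nw, nv), level + 1, nw, nv))
        # branch 2: skip item `level`
        b = bound(level + 1, cur_w, cur_v)
        if b > best:
            heapq.heappush(heap, (-b, level + 1, cur_w, cur_v))
    return best

# assumed example: the optimum 220 takes the items of weight 20 and 30
print(knapsack_branch_and_bound([10, 20, 30], [60, 100, 120], 50))  # 220
```

The bound function is the "carefully selected criterion" from the slide: any node whose relaxed estimate is no better than the incumbent is discarded without generating its children.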
Backtracking Approach
An algorithmic technique for solving problems
recursively by trying to build a solution incrementally,
one piece at a time, removing those candidates that fail to
satisfy the constraints of the problem at any point.
Abandons a candidate (“backtracks”) as soon as it
determines that the candidate cannot possibly be
completed to a valid solution
Examples:
N-Queen Problem
M-Coloring problem
Knight’s tour problem
GREEDY TECHNIQUE
FRACTIONAL KNAPSACK
Fractional Knapsack Problem
Given weights and values of n items, we need to put these
items in a knapsack of capacity W to get the maximum total
value in the knapsack.
Problem Scenario:
A thief is robbing a store and can carry a maximal weight
of W into his knapsack. There are n items available in the store
and weight of ith item is wi and its profit is pi. What items should
the thief take?
In this context, the items should be selected in such a way that
the thief will carry those items for which he will gain
maximum profit. Hence, the objective of the thief is to
maximize the profit.
A brute-force solution would be to try all possible subsets
with all different fractions, but that would take too much
time.
An efficient solution is to use Greedy approach.
Fractional Knapsack Problem
Steps Involved:
i. Calculate the ratio value/weight (Pi/wi) for each item.
ii. Sort the items on basis of this ratio (in decreasing order).
iii. Take items one by one (a fraction of the last item if
needed) until the capacity W of the knapsack becomes zero.
Algorithm
Fractional-Knapsack (Array v, Array w, int M)
1. for i ← 1 to size(v)
2.     do p[i] = v[i] / w[i]
3. Sort-Descending (p)   // keep v and w in the same order
4. i ← 1, profit = 0
5. while (M != 0)
6.     if (M >= w[i])
7.         then M = M - w[i]
8.              profit = profit + v[i]
9.              i = i + 1
10.    else
11.        profit = profit + ((M / w[i]) * v[i])
12.        M = 0
13. return profit
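The pseudocode above can be written in Python as follows; sorting the (value, weight) pairs together avoids the parallel-array bookkeeping, and the demo uses problem P1 from the next slide (capacity 60, answer 230).

```python
def fractional_knapsack(values, weights, capacity):
    """Greedy: take items in decreasing value/weight order,
    splitting the last item if only part of it fits."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    profit = 0.0
    for v, w in items:
        if capacity >= w:
            capacity -= w
            profit += v
        else:
            profit += v * capacity / w   # take the fraction that fits
            break
    return profit

# P1: capacity 60 kg
print(fractional_knapsack([30, 40, 45, 77, 90], [5, 10, 15, 22, 25], 60))  # 230.0
```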
Questions on Fractional Knapsack
P1: For the given set of items and knapsack capacity =
60 kg, find the optimal solution for the fractional knapsack
problem making use of greedy approach.
Ans. 230 units
Item Weight Profit
1 5 30
2 10 40
3 15 45
4 22 77
5 25 90
Questions on Fractional Knapsack
P2: For the given set of items and knapsack capacity =
30 kg, find the optimal solution for the fractional knapsack
problem making use of greedy approach.
Ans. 360 units
Item Weight Profit
1 15 150
2 7 100
3 12 80
4 5 90
Questions on Fractional Knapsack
P3: For the given set of items and knapsack capacity =
20 kg, find the optimal solution for the fractional knapsack
problem making use of greedy approach.
Item Weight Profit
1 2 5
2 3 7
3 5 15
4 1 12
5 10 2
6 3 16
7 4 8
Ans. 63.4 units
JOB SEQUENCING WITH DEADLINE
Job Sequencing with Deadline
Given an array of jobs where every job has a deadline and an
associated profit if the job is finished before the deadline. It is
also given that every job takes a single unit of time, so the
minimum possible deadline for any job is 1. How can the total
profit be maximized if only one job can be scheduled at a time?
Assume the deadline of the ith job Ji is di and the profit received
from this job is pi; thus di > 0 for 1 ≤ i ≤ n. The optimal solution
of this algorithm is a feasible solution with maximum profit.
Constraints
Only one processor is available for processing all jobs.
Processor takes one unit of time to complete a job.
All the jobs have to be completed within their respective
deadlines to obtain the profits associated with them.
Job Sequencing with Deadline
Steps Involved:
i. Sort all the given jobs in decreasing order of their
profit.
ii. Check the value of maximum deadline and draw a
Gantt chart where maximum time on Gantt chart is
the value of maximum deadline.
iii. Pick up the jobs one by one and put the job on
Gantt chart as far as possible from 0 ensuring that
the job gets completed before its deadline.
Algorithm
1. Sort all the jobs based on profit pi so that p1 > p2 > … > pn.
2. d = maximum deadline among the jobs in array A.
3. for i ← 1 to d
4.     do S[i] = -1              // all slots initially free
5. for each job x in decreasing order of profit
6.     do for (j = x.deadline; j >= 1; j--)
7.         do if (j <= d and S[j] = -1)
8.             then S[j] = x.jobid
9.                  profit += x.profit
10.                break
11.    end for loop
12. end for loop
13. return profit
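These steps can be sketched in Python; each job is an (id, deadline, profit) triple, and the demo uses problem P2 from a later slide (maximum profit 990).

```python
def job_sequencing(jobs):
    """jobs: list of (job_id, deadline, profit). Consider jobs in decreasing
    profit and put each in the latest free slot on or before its deadline."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_d = max(deadline for _, deadline, _ in jobs)
    slots = [None] * (max_d + 1)          # slots[1..max_d]; slots[0] unused
    profit = 0
    for job_id, deadline, p in jobs:
        for t in range(min(deadline, max_d), 0, -1):
            if slots[t] is None:          # latest free slot <= deadline
                slots[t] = job_id
                profit += p
                break                     # job placed; otherwise rejected
    return profit, [s for s in slots[1:] if s is not None]

# P2: six jobs, optimal profit 990 (J6 is rejected)
jobs = [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
        ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)]
print(job_sequencing(jobs))  # (990, ['J2', 'J4', 'J3', 'J5', 'J1'])
```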
Questions on Job Seq. with Deadline
P1. Let us consider a set of given jobs as shown in the
following table. We have to find a sequence of jobs, which
will be completed within their deadlines and will give
maximum profit. Each job is associated with a deadline
and profit.
Job J1 J2 J3 J4 J5
Deadline 2 1 3 2 1
Profit 60 100 20 40 20
Questions on Job Seq. with Deadline
P2. Given the jobs, their deadlines and associated profits as
shown-
Jobs J1 J2 J3 J4 J5 J6
Deadlines 5 3 3 2 4 2
Profits 200 180 190 300 120 100
Answer the following questions-
Write the optimal schedule that gives maximum profit.
Ans. J2 , J4 , J3 , J5 , J1
Are all the jobs completed in the optimal schedule?
Ans. No, not all jobs are completed in the optimal
schedule (J6 is rejected).
What is the maximum earned profit?
Ans. 990 units
Questions on Job Seq. with Deadline
P3. Input: Four Jobs with following deadlines and profits
JobID Deadline Profit
a 4 20
b 1 10
c 1 40
d 1 30
Answer the following questions-
Write the optimal schedule that gives maximum profit.
Ans. c, a
Are all the jobs completed in the optimal schedule?
Ans. No, not all jobs are completed in the optimal schedule (b and d are rejected).
What is the maximum earned profit?
Ans. 60 units
DYNAMIC PROGRAMMING
APPROACH
OPTIMAL BINARY SEARCH
TREE
Optimal Binary Search Tree
Given a sorted array keys[0..n-1] of search keys and an
array freq[0..n-1] of frequency counts, where freq[i] is
the number of searches for keys[i]. Construct a binary
search tree of all keys such that the total cost of all the
searches is as small as possible.
Cost of a BST: The cost of a BST node is the level of
that node multiplied by its frequency. The level of the
root is 1.
Examples:
Input: keys[] = {10, 12}, freq[] = {34, 50}
Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
Continue…
Input: keys[] = {10, 12}, freq[] = {34, 50}
There are the following two possible BSTs:

    I:  10        II:  12
          \            /
           12        10

Frequency of searches of 10 and 12 are 34 and 50
respectively.
The cost of tree I  is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118
Continue…
Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There are the following five possible BSTs:

    I:  10     II:   12     III:     20    IV:  10       V:    20
          \         /  \           /            \             /
           12     10    20       12              20         10
             \                  /               /             \
              20              10              12               12

Among all possible BSTs, the cost of the fifth BST is minimum.
Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142
Continue…
If the number of nodes is less than or equal to 3, we
can find the optimal BST by checking all possible
arrangements.
But if there are more than 3 nodes, say 4, 5, 6, …, then
14, 42, 132, … different BSTs are possible (the Catalan
number (2n)! / ((n+1)! n!)), so checking all arrangements
to find the optimal cost leads to excessive overhead.
So we will use another approach to solve the problem of
Optimal BST, i.e., the Dynamic Programming
Approach.
Solution by using Dynamic
Programming Approach
Steps involved:
Take a cost matrix of (n+1) by (n+1), where n is the given
number of nodes.
Compute the cost for each key range freq[i..j] using the
following formula:
    c[i, j] = min over i < r <= j of { c[i, r-1] + c[r, j] } + sum of freq[i+1..j]
(the root r splits the range, and every key in the range sinks
one level, which adds the range's total frequency once).
Now draw the optimal binary search tree using the
computed matrix.
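The recurrence over the cost matrix can be sketched in Python, using prefix sums for the frequency totals; the demo reproduces the example on the next slides (freq 4, 2, 6, 3 gives cost 26).

```python
def optimal_bst_cost(freq):
    """DP over c[i][j] = minimum search cost of a BST on keys i+1..j,
    i.e. the slides' (n+1) x (n+1) cost matrix with c[i][i] = 0."""
    n = len(freq)
    ps = [0] * (n + 1)                      # ps[j] - ps[i] = weight of keys i+1..j
    for k in range(n):
        ps[k + 1] = ps[k] + freq[k]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):          # length = j - i
        for i in range(n - length + 1):
            j = i + length
            w = ps[j] - ps[i]
            # try every key r in the range as the root
            c[i][j] = w + min(c[i][r - 1] + c[r][j] for r in range(i + 1, j + 1))
    return c[0][n]

print(optimal_bst_cost([4, 2, 6, 3]))   # 26
print(optimal_bst_cost([34, 50]))       # 118, matching the two-key example
```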
Example
Consider the below table, which contains the keys and
frequencies.
1 2 3 4
Keys 10 20 30 40
freq 4 2 6 3
Find out the cost of optimal binary search tree.
Solution
Take a cost matrix of 5 by 5.
Continue…
First, we will calculate the values where j-i is equal to zero.
c[0, 0] = 0, c[1 , 1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0
Now we will calculate the values where j-i equal to 1.
The cost of c[0,1] is 4 (The key is 10, and the cost corresponding to key 10 is 4).
The cost of c[1,2] is 2 (The key is 20, and the cost corresponding to key 20 is 2).
The cost of c[2,3] is 6 (The key is 30, and the cost corresponding to key 30 is 6)
The cost of c[3,4] is 3 (The key is 40, and the cost corresponding to key 40 is 3)
Continue…
Now we will calculate the values where j-i = 2
When i=0 and j=2, then keys 10 and 20. There are two possible
trees that can be made out from these two keys and the minimum
cost is 8; therefore, c[0,2] = 8
When i=1 and j=3, then keys 20 and 30. There are two possible
trees that can be made out from these two keys and the minimum
cost is 10; therefore, c[1,3] = 10
When i=2 and j=4, we will consider the keys at 3 and 4, i.e., 30
and 40. There are two possible trees that can be made out from
these two keys and the minimum cost is 12, therefore, c[2,4] =
12.
Now we will calculate the values when j-i = 3
When i=0, j=3 then we will consider three keys, i.e., 10, 20, and
30. There are five possible trees that can be made out from these
three keys and the minimum cost is 20; therefore, c[0,3] = 20.
When i=1 and j=4 then we will consider the keys 20, 30, 40. There
are five possible trees that can be made out from these three keys
and the minimum cost is 16; therefore, c[1,4] = 16.
Continue…
Now we will calculate the values when j-i = 4
In this case, we will consider all four keys, i.e., 10, 20, 30 and 40. There are
fourteen possible trees that can be made out from these four keys and
the minimum cost is 26; therefore, c[0,4] = 26.
Continue…
The optimal binary search tree can now be created from the matrix:
root 30, with left child 10 (whose right child is 20) and right
child 40, for a total cost of 1*6 + 2*4 + 2*3 + 3*2 = 26.
TRAVELLING SALESMAN
PROBLEM
Travelling Salesman Problem (TSP)
Given a set of cities and distance between every pair of
cities, the problem is to find the shortest possible route
that visits every city exactly once and returns to the
starting point.
Difference between the Hamiltonian cycle problem and TSP:
The Hamiltonian cycle problem asks whether there exists a
tour that visits every city exactly once. In TSP we know that
a Hamiltonian tour exists (because the graph is complete),
and in fact many such tours exist; the problem is to find a
minimum-weight Hamiltonian cycle.
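As listed under the dynamic programming examples, TSP also has a DP solution: the Held-Karp recurrence, which cuts the cost from O(n!) to O(n^2 * 2^n). A sketch follows; the 4-city distance matrix is an assumed example.

```python
from functools import lru_cache

def tsp_held_karp(dist):
    """Held-Karp DP: g(mask, i) = length of the shortest path that starts
    at city 0, visits exactly the cities in bitmask `mask`, and ends at i."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def g(mask, i):
        if mask == (1 << i) | 1:            # only city 0 and city i visited
            return dist[0][i]
        prev_mask = mask & ~(1 << i)        # drop i; who was visited before it?
        return min(g(prev_mask, j) + dist[j][i]
                   for j in range(1, n)
                   if j != i and prev_mask & (1 << j))

    full = (1 << n) - 1                     # every city visited
    return min(g(full, i) + dist[i][0] for i in range(1, n))

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_held_karp(dist))  # 80
```

Each (mask, end-city) sub-problem is solved once and cached, which is exactly the "store the results of sub-problems" idea from the dynamic programming slides.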
BACKTRACKING APPROACH
N QUEEN PROBLEM
N Queen Problem
The problem of placing N chess queens on an N×N
chessboard so that no two queens attack each other.
This can be solved by Backtracking Approach.
IDEA:
The idea is to place queens one by one in different
columns, starting from the leftmost column.
When we place a queen in a column, we check for clashes
with already placed queens.
In the current column, if we find a row for which there is no
clash, we mark this row and column as part of the solution.
If we do not find such a row due to clashes then we
backtrack and return false.
Continue…
For example, following is a solution for 4 Queen problem.
The expected output is a binary matrix which has 1s for the blocks where
queens are placed. For example, following is the output matrix for above
4 queen solution.
{ 0, 1, 0, 0}
{ 0, 0, 0, 1}
{ 1, 0, 0, 0}
{ 0, 0, 1, 0}
Algorithm
Backtracking can be used to solve this problem.
1. Begin with the left-most column.
2. For every row in the column:
i. Try placing a queen such that it cannot attack the queens in
the previous columns.
ii. If such a placement is possible, add this cell to the solution set
and recursively check if this leads to a solution by calling the
function on the subsequent column. If it does, return one.
iii. Else, remove this cell from the solution set.
3. Backtrack to the previous column by returning zero if
no solution exists after the completion of step 2.
4. Stop the recursion when all the queens are placed.
Pseudocode
function solveNQueens(row, n):
    for col from 1 to n:
        if isSafe(row, col):
            sol[row] = col
            if row == n:
                for i from 1 to n:
                    print sol[i]
                return true
            else:
                solveNQueens(row + 1, n)

function isSafe(row, col):
    for i from 1 to row - 1:
        if sol[i] == col:
            return false
        else if abs(sol[i] - col) == abs(i - row):
            return false
    return true
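A runnable Python version of this pseudocode (0-indexed; it returns the first solution found, where sol[r] is the column of the queen in row r):

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row, undoing any placement
    that cannot be extended to a full solution."""
    sol = [-1] * n

    def is_safe(row, col):
        for i in range(row):
            # clash if same column, or same diagonal
            if sol[i] == col or abs(sol[i] - col) == abs(i - row):
                return False
        return True

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if is_safe(row, col):
                sol[row] = col
                if place(row + 1):
                    return True
                sol[row] = -1   # backtrack
        return False

    return sol if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2], matching the 4-queen matrix above
```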
Complexity Analysis
The first queen has N possible placements; the second queen
must not share the first queen's column or diagonals, so it
has at most N-1 possibilities, and so on, giving a time
complexity of O(N!).
HAMILTONIAN CYCLE
Hamiltonian Path
Hamiltonian Path in an undirected graph is a path that
visits each vertex exactly once.
A Hamiltonian cycle (or Hamiltonian circuit) is a
Hamiltonian Path such that there is an edge (in the
graph) from the last vertex to the first vertex of the
Hamiltonian Path.
Problem: To determine whether a given graph contains
Hamiltonian Cycle or not. If it contains, then prints the
path.
Following are the input and output of the required
function.
Input:
A 2D array graph[V][V], where V is the number of
vertices in the graph and graph[V][V] is the adjacency matrix
representation of the graph. A value graph[i][j] is 1 if
there is a direct edge from i to j, otherwise graph[i][j] is 0.
Contd…
Output:
An array path[V] that should contain the Hamiltonian
Path. path[i] should represent the ith vertex in the
Hamiltonian Path. The code should also return false if
there is no Hamiltonian Cycle in the graph.
For example, a Hamiltonian Cycle in the following graph
is {0, 1, 2, 4, 3, 0}.
Algorithm Hamiltonian()
1. for (i ← 0 to n-1)
2.     do path[i] ← -1
3. path[0] ← 0                          // set the first vertex as 0
4. if (cyclefound(1) = false)
5.     print "solution does not exist"
6.     return false
7. DisplayCycle()

cyclefound(k)
1. if (k = n)                           // all vertices covered
2.     if (g[path[k-1]][path[0]] = 1)   // is there an edge back to the start?
3.         return true
4.     else return false
5. for (v ← 1 to n-1)
6.     if (isvalid(v, k))               // can vertex v be added to the path?
7.         path[k] ← v
8.         if (cyclefound(k+1) = true)
9.             return true
10.        path[k] ← -1                 // backtrack: v does not lead to a cycle
11. return false

isvalid(v, k)
1. if (g[path[k-1]][v] = 0)             // no edge from the previous vertex
2.     return false
3. for (i ← 0 to k-1)                   // skip vertices already in the path
4.     if (path[i] = v)
5.         return false
6. return true
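A runnable Python version of the algorithm; since the slide's example figure is not reproduced here, the 5-vertex adjacency matrix below is an assumed graph that admits the cycle {0, 1, 2, 4, 3, 0}.

```python
def hamiltonian_cycle(g):
    """Backtracking search for a Hamiltonian cycle in adjacency matrix g.
    Returns the cycle as a vertex list ending back at 0, or None."""
    n = len(g)
    path = [-1] * n
    path[0] = 0

    def is_valid(v, k):
        if g[path[k - 1]][v] == 0:   # no edge from the previous vertex
            return False
        return v not in path[:k]     # vertex not already used

    def cycle_found(k):
        if k == n:                   # all vertices placed: close the cycle?
            return g[path[k - 1]][path[0]] == 1
        for v in range(1, n):
            if is_valid(v, k):
                path[k] = v
                if cycle_found(k + 1):
                    return True
                path[k] = -1         # backtrack
        return False

    return path + [0] if cycle_found(1) else None

# assumed example graph
g = [[0, 1, 0, 1, 0],
     [1, 0, 1, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 1, 0, 0, 1],
     [0, 1, 1, 1, 0]]
print(hamiltonian_cycle(g))  # [0, 1, 2, 4, 3, 0]
```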
Analysis
The Backtracking Approach has the following
complexities:
Time Complexity: O(N!), where N is the number
of vertices
Space Complexity: O(N), for the path array and
the recursion stack
The Hamiltonian cycle problem is a decision problem
(does a cycle exist?); finding a minimum-weight
Hamiltonian cycle is its optimization counterpart (TSP).