Module2 ADA_4225_BCS401_12-03-2025

The document covers various algorithm design strategies, focusing on brute force, decrease and conquer, and divide and conquer methods. It details the exhaustive search approach, the traveling salesman problem, and discusses algorithms like binary search, merge sort, and quicksort, along with their analyses. Additionally, it introduces the multiplication of large integers using divide-and-conquer techniques, highlighting the efficiency of these algorithms.

Subject Code: BCS401

Subject Name: Analysis & Design of Algorithms


Module Number: 02
Name of the Module: Brute Force Approaches (contd..), Decrease and Conquer and
Divide and Conquer
Scheme: 2022
Exhaustive Search
• Exhaustive search is simply a brute-force approach to combinatorial problems.
• It suggests generating each and every element of the problem domain, selecting those of them that satisfy all the constraints, and then finding a desired element (e.g., the one that optimizes some objective function).
• Note that although the idea of exhaustive search is quite straightforward, its implementation typically requires an algorithm for generating certain combinatorial objects.
• We illustrate exhaustive search by applying it to three important problems: the traveling salesman problem, the knapsack problem, and the assignment problem.
Traveling Salesman Problem
We will be able to apply the branch-and-bound technique to instances of the traveling
salesman problem if we come up with a reasonable lower bound on tour lengths. One
very simple lower bound can be obtained by finding the smallest element in the intercity
distance matrix D and multiplying it by the number of cities n.
But there is a less obvious and more informative lower bound for instances with
symmetric matrix D, which does not require a lot of work to compute. We can compute
a lower bound on the length l of any tour as follows. For each city i, 1≤ i ≤ n, find the
sum si of the distances from city i to the two nearest cities; compute the sum s of these n
numbers; divide the result by 2; and, if all the distances are integers, round up the result
to the nearest integer:
lb = ⌈s/2⌉... (1)
For example, for the instance in Figure 2.2a, formula (1) yields the bound shown there (figure not reproduced).
Divide-and-Conquer

The most well-known algorithm design strategy:

1. Divide an instance of the problem into two or more smaller instances
2. Solve the smaller instances recursively
3. Obtain the solution to the original (larger) instance by combining these solutions
Department of
ISE
Divide and conquer involves three steps at each level of recursion.
• Divide: divide the problem into a number of subproblems.
• Conquer: conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, solve the subproblems in a straightforward manner.
• Combine: combine the solutions to the subproblems to get the solution to the original problem.
Divide-and-Conquer Technique (cont.)

a problem of size n (instance)

subproblem 1 of size n/2        subproblem 2 of size n/2

a solution to subproblem 1      a solution to subproblem 2

a solution to the original problem

In general this leads to a recursive algorithm with running time
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0.
Control Abstraction for divide and conquer:

• In this control abstraction:

• Initially DAndC(P) is invoked, where 'P' is the problem to be solved.
• Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting. If so, the function 'S' is invoked. Otherwise, the problem P is divided into smaller subproblems. These subproblems P1, P2, ..., Pk are solved by recursive applications of DAndC.
• Combine is a function that determines the solution to P using the solutions to the 'k' subproblems.
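The control abstraction described above can be sketched in Python. This is a minimal illustration, not the textbook's exact code: `small`, `solve_directly`, `divide`, and `combine` are hypothetical caller-supplied functions standing in for Small(P), S, the splitting step, and Combine.

```python
# A minimal sketch of the DAndC control abstraction; small, solve_directly,
# divide, and combine are hypothetical helpers supplied by the caller.
def d_and_c(p, small, solve_directly, divide, combine):
    # If the instance is small enough, solve it without splitting (function S).
    if small(p):
        return solve_directly(p)
    # Otherwise split P into subproblems P1..Pk and recurse on each.
    subproblems = divide(p)
    solutions = [d_and_c(q, small, solve_directly, divide, combine)
                 for q in subproblems]
    # Combine the k sub-solutions into a solution for P.
    return combine(solutions)

# Example: summing a list by splitting it in half.
total = d_and_c(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
```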
Recurrence equation for Divide and Conquer:
If the size of problem 'P' is n and the sizes of the 'k' subproblems are n1,
n2, ..., nk, respectively, then the computing time of divide and conquer is
described by the recurrence relation

T(n) = g(n)                                   if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise

Where,
 T(n) is the time for divide and conquer on any input of size n,
 g(n) is the time to compute the answer directly for small inputs, and
 f(n) is the time for dividing the problem 'P' and combining the solutions to subproblems.
Recurrence equation for Divide and Conquer:
For divide and conquer based algorithms that produce sub problems of the
same type as the original problem, it is very natural to first describe them by
using recursion.
More generally, an instance of size n can be divided into b instances of size
n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1
and b > 1.) Assuming that size n is a power of b (i.e., n = b^k) to simplify our
analysis, we get the following recurrence for the running time T(n):

T(n) = aT(n/b) + f(n)   ..... (1)

where f(n) is a function that accounts for the time spent on dividing the
problem into smaller ones and on combining their solutions.
The recurrence relation can be solved by i) the substitution method or by
using ii) the master theorem.
1. Substitution Method - This method repeatedly makes substitutions for each
occurrence of the function T in the right-hand side until all such occurrences
disappear.
2. Master Theorem - The efficiency analysis of many divide-and-conquer
algorithms is greatly simplified by the master theorem. It states that, in the
recurrence equation
T(n) = aT(n/b) + f(n), if f(n) ∈ Θ(n^d) where d ≥ 0, then

T(n) ∈ Θ(n^d)            if a < b^d
T(n) ∈ Θ(n^d log n)      if a = b^d
T(n) ∈ Θ(n^(log_b a))    if a > b^d

Analogous results hold for the O and Ω notations, too.
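As a quick check of the theorem, here are standard recurrences from later in this module worked out under its cases:

For mergesort, T(n) = 2T(n/2) + n: here a = 2, b = 2, d = 1, so a = b^d and T(n) ∈ Θ(n log n).
For binary search, T(n) = T(n/2) + 1: here a = 1, b = 2, d = 0, so a = b^d and T(n) ∈ Θ(log n).
For the first divide-and-conquer multiplication, M(n) = 4M(n/2): a = 4, b = 2, d = 0, so a > b^d and M(n) ∈ Θ(n^(log2 4)) = Θ(n^2).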


Divide-and-Conquer Examples
• Sorting: merge sort and quicksort
• Finding min and max element in an array
• Binary search
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm

Binary Search (Iterative)
Algorithm Binary_Search(A[0…n-1], key)
Input: An array of n elements in sorted order and a key element to be searched.
Output: Returns the index of the key element if successful and -1 otherwise.
1. Set first = 0, last = n-1
2. While (first <= last)
       mid = (first + last) / 2
       if (key == A[mid])
           return mid              // successful
       else if (key < A[mid])
           last = mid - 1
       else
           first = mid + 1
   end while
3. return -1                       // unsuccessful
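A runnable Python version of the iterative search (a sketch using 0-based indexing and returning the index of the key, or -1 when absent):

```python
# Iterative binary search on a sorted list, mirroring the pseudocode above.
def binary_search(a, key):
    first, last = 0, len(a) - 1
    while first <= last:
        mid = (first + last) // 2
        if key == a[mid]:
            return mid            # successful search
        elif key < a[mid]:
            last = mid - 1        # discard the right half
        else:
            first = mid + 1       # discard the left half
    return -1                     # unsuccessful search

# Usage: searching a sorted array.
data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23))   # -> 5
print(binary_search(data, 7))    # -> -1
```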
Recursive binary search algorithm
Algorithm Binary_Search(A, key, first, last)
Input: A sorted array A, a key element to be searched, and the current search range [first, last] (initially 0 and n-1).
Output: Returns the index of the key element if successful and -1 otherwise.
1. If (first > last)
       return -1                                    // unsuccessful
2. mid = (first + last) / 2
   if (key == A[mid])
       return mid                                   // successful
   else if (key < A[mid])
       return Binary_Search(A, key, first, mid - 1)
   else
       return Binary_Search(A, key, mid + 1, last)
Analysis
Best Case: The best case occurs when we are searching for the middle
element itself. In that case, the total number of comparisons required is
1; therefore the best case time complexity of binary search is Ω(1).

Worst Case: Let T(n) be the cost involved to search ‘n’ elements. Let
T(n/2) be the cost involved to search either left part or the right
part of an array.
Analysis
T(n) = a              if n = 1
T(n) = T(n/2) + b     otherwise

T(n/2) → time required to search either the left part or the right part of the array.
b → time required to compare the middle element, where a and b are some positive constants.
T(n) = O(log2 n)
Analysis
Average Case:
The average case occurs when an element is found somewhere in the
recursive calls, but not until the recursive calls end.
The average number of key comparisons made by binary search is only slightly
smaller than in the worst case:
T(n) ≈ log2 n
The average number of comparisons in a successful search is
T(n) = log2 n - 1
The average number of comparisons in an unsuccessful search is
T(n) = log2 n + 1
Merge Sort Algorithm
Mergesort(low, high)
// Given an array A of n elements, this algorithm sorts the elements in
// ascending order. The variables low and high identify the positions of the
// first and last element in each partition.
1. If (low < high)
2.     mid = (low + high) / 2
3.     Mergesort(low, mid)
4.     Mergesort(mid + 1, high)
5.     Merge(low, mid, high)
6. End if
7. Exit
Merge Algorithm
Merge(low, mid, high)
// The variables low, mid, and high are used to identify the portions of elements in each
partition.
1. Initialize i=low, j= mid+1, h=low;
2. while ((h <= mid) && (j <= high))
3. if (a[h] < a[j])
b[i++] = a[h++];
else
b[i++] = a[j++];
4. if (h > mid)
for(k = j; k <= high; k++)
b[i++] = a[k];
else
for (k = h; k <= mid; k++)
b[i++] = a[k];
5. for (k = low; k <= high; k++)
a[k] = b[k];
Mergesort
• Split array A[0..n-1] into about equal halves and make copies of
each half in arrays B and C
• Sort arrays B and C recursively
• Merge sorted arrays B and C into array A as follows:
– Repeat the following until no elements remain in one of the
arrays:
• compare the first elements in the remaining unprocessed
portions of the arrays
• copy the smaller of the two into A, while incrementing the index
indicating the unprocessed portion of that array
– Once all elements in one of the arrays are processed, copy the
remaining unprocessed elements from the other array into A.
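The three-step description above can be sketched in Python. This is a minimal sketch: where the textbook merges copies B and C back into A, the version below works with list slices and returns a new sorted list.

```python
# Mergesort: split into halves, sort each recursively, merge the results.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])      # sorted left half (array B)
    c = merge_sort(a[mid:])      # sorted right half (array C)
    # Merge b and c: repeatedly copy the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])         # copy any leftover elements
    merged.extend(c[j:])
    return merged

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))   # -> [1, 2, 3, 4, 5, 7, 8, 9]
```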
Analysis of Mergesort
• All cases have the same efficiency: Θ(n log n)
• Number of comparisons in the worst case is close to the theoretical
  minimum for comparison-based sorting:
  ⌈log2 n!⌉ ≈ n log2 n - 1.44n
• Space requirement: Θ(n) (not in-place)
• Can be implemented without recursion
Mergesort Example

The non-recursive version of Mergesort starts from merging single elements into sorted pairs.
Quicksort
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer
approach. Unlike mergesort, which divides its input elements according to their position in the
array, quicksort divides (or partitions) them according to their value.
A partition is an arrangement of the array's elements so that all the elements to the left of
some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are
greater than or equal to it:

A[0] ... A[s-1]   A[s]   A[s+1] ... A[n-1]
(all ≤ A[s])             (all ≥ A[s])

Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and
we can continue sorting the two subarrays to the left and to the right of A[s] independently
(e.g., by the same method).
In quicksort, the entire work happens in the division stage, with no work required to combine
the solutions to the subproblems.
Quicksort Algorithm
Quicksort(low, high)
// A is an array of elements.
// The variables low and high identify the positions of the first and
// last elements in each partition.
If (low < high) then
    j = Partition(low, high)
    Quicksort(low, j - 1)
    Quicksort(j + 1, high)
End if
Exit
Partition Algorithm
Partition(low, high)
// This procedure partitions the elements into two lists and places the pivot
// element into its appropriate place. low = first element of the array, high =
// last element of the array, a[low] = pivot.
Step 1. Set pivot = a[low];
i = low +1;
j = high;
Step 2. Repeat step 3 while (a[i] < pivot && i < high)
Step 3. i++;
Step 4. Repeat step 5 while (a[j] > pivot)
Step 5. j--;
Step 6. If(i<j)
swap a[i] and a[j]
go to step 2
else
swap a[j] and pivot
Step 7. Return (j)
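The steps above can be turned into runnable Python. This is a sketch that keeps a[low] as the pivot and the same two scanning loops; one assumption added here is an extra i/j step after each swap, so that runs of elements equal to the pivot cannot stall the scans.

```python
# Partition with a[low] as pivot: i scans right past small elements,
# j scans left past large elements, misplaced pairs are swapped.
def partition(a, low, high):
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i < high and a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]      # put the misplaced pair in order
            i += 1
            j -= 1
        else:
            a[low], a[j] = a[j], a[low]  # place pivot in its final slot
            return j

def quicksort(a, low, high):
    if low < high:
        j = partition(a, low, high)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)

nums = [5, 3, 8, 1, 9, 2]
quicksort(nums, 0, len(nums) - 1)
print(nums)   # -> [1, 2, 3, 5, 8, 9]
```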
Analysis of Quicksort
• Best case: split in the middle — Θ(n log n)
• Worst case: sorted array! — Θ(n2)
• Average case: random arrays — Θ(n log n)
Multiplication of Large Integers
Consider the problem of multiplying two (large) n-digit integers
represented by arrays of their digits such as:

A = 12345678901357986429 B = 87654321284820912836

The grade-school algorithm:

        a1 a2 ... an
     ×  b1 b2 ... bn
     -----------------
        (d10) d11 d12 ... d1n
      (d20) d21 d22 ... d2n
      .......................
  (dn0) dn1 dn2 ... dnn
First Divide-and-Conquer Algorithm
A small example: A × B where A = 2135 and B = 4014
A = (21·10^2 + 35), B = (40·10^2 + 14)
So, A × B = (21·10^2 + 35) × (40·10^2 + 14)
          = 21 × 40·10^4 + (21 × 14 + 35 × 40)·10^2 + 35 × 14

In general, if A = A1A2 and B = B1B2 (where A and B are n-digit numbers, and A1, A2, B1, B2 are n/2-digit numbers),
A × B = A1 × B1·10^n + (A1 × B2 + A2 × B1)·10^(n/2) + A2 × B2

Recurrence for the number of one-digit multiplications M(n):
M(n) = 4M(n/2), M(1) = 1
Solution: M(n) = n^2
Second Divide-and-Conquer Algorithm
A × B = A1 × B1·10^n + (A1 × B2 + A2 × B1)·10^(n/2) + A2 × B2

The idea is to decrease the number of multiplications from 4 to 3:

(A1 + A2) × (B1 + B2) = A1 × B1 + (A1 × B2 + A2 × B1) + A2 × B2,

i.e., (A1 × B2 + A2 × B1) = (A1 + A2) × (B1 + B2) - A1 × B1 - A2 × B2,

which requires only 3 multiplications at the expense of (4 - 1) extra additions/subtractions.

Recurrence for the number of multiplications M(n):
M(n) = 3M(n/2), M(1) = 1
(What if we count both multiplications and additions?)

Solution: M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585


Example of Large-Integer Multiplication
2135 * 4014
= (21*10^2 + 35) * (40*10^2 + 14)
= (21*40)*10^4 + c1*10^2 + 35*14
where c1 = (21+35)*(40+14) - 21*40 - 35*14, and
21*40 = (2*10 + 1) * (4*10 + 0)
      = (2*4)*10^2 + c2*10 + 1*0
where c2 = (2+1)*(4+0) - 2*4 - 1*0, etc.

This process requires 9 digit multiplications as opposed to 16.
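The three-multiplication scheme above (Karatsuba's algorithm) can be sketched in Python. A minimal sketch under some assumptions: `karatsuba` is a name chosen here, and Python's arbitrary-precision integers stand in for the digit arrays, with `divmod` by a power of 10 doing the A1/A2 split.

```python
# Large-integer multiplication with three recursive products instead of four.
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y                       # one-digit base case
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a1, a2 = divmod(x, 10 ** half)         # x = a1*10^half + a2
    b1, b2 = divmod(y, 10 ** half)
    p1 = karatsuba(a1, b1)                 # A1 * B1
    p2 = karatsuba(a2, b2)                 # A2 * B2
    p3 = karatsuba(a1 + a2, b1 + b2)       # (A1+A2) * (B1+B2)
    # Middle term recovered with subtractions instead of two more products.
    return p1 * 10 ** (2 * half) + (p3 - p1 - p2) * 10 ** half + p2

print(karatsuba(2135, 4014))   # should equal 2135 * 4014
```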
Matrix Multiplication

• Brute-force algorithm

    c11 c12       a11 a12       b11 b12
             =             *
    c21 c22       a21 a22       b21 b22

       a11 * b11 + a12 * b21    a11 * b12 + a12 * b22
   =
       a21 * b11 + a22 * b21    a21 * b12 + a22 * b22

8 multiplications, 4 additions. Efficiency class in general: Θ(n^3)
Strassen’s Matrix Multiplication
• Strassen’s algorithm for two 2x2 matrices (1969):

    C1 C2       A1 A2       B1 B2
           =            *
    C3 C4       A3 A4       B3 B4

D = A1(B2 - B4)            C1 = E + I + J - G
E = A4(B3 - B1)            C2 = D + G
F = (A3 + A4)B1            C3 = E + F
G = (A1 + A2)B4            C4 = D + H + J - F
H = (A3 - A1)(B1 + B2)
I = (A2 - A4)(B3 + B4)
J = (A1 + A4)(B1 + B4)

7 multiplications, 18 additions
Strassen’s Matrix Multiplication
A = 1 2    B = 1 1
    3 4        2 2

A1 = 1, A2 = 2, A3 = 3, A4 = 4
B1 = 1, B2 = 1, B3 = 2, B4 = 2
1. D = A1(B2 - B4) = 1(1 - 2) = -1
2. E = A4(B3 - B1) = 4(2 - 1) = 4
3. F = (A3 + A4)B1 = (3 + 4)1 = 7
4. G = (A1 + A2)B4 = (1 + 2)2 = 6
5. H = (A3 - A1)(B1 + B2) = (3 - 1)(1 + 1) = 4
6. I = (A2 - A4)(B3 + B4) = (2 - 4)(2 + 2) = -8
7. J = (A1 + A4)(B1 + B4) = (1 + 4)(1 + 2) = 15

C1 = E + I + J - G = 4 + (-8) + 15 - 6 = 5
C2 = D + G = -1 + 6 = 5
C3 = E + F = 4 + 7 = 11                       C = 5  5
C4 = D + H + J - F = -1 + 4 + 15 - 7 = 11         11 11
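The worked example above can be checked with a few lines of Python, computing the seven products and the four entries of C as scalars:

```python
# Verifying the 2x2 Strassen computation: A1..A4 and B1..B4 are the
# entries of A and B, read row by row.
A1, A2, A3, A4 = 1, 2, 3, 4
B1, B2, B3, B4 = 1, 1, 2, 2

D = A1 * (B2 - B4)          # -1
E = A4 * (B3 - B1)          # 4
F = (A3 + A4) * B1          # 7
G = (A1 + A2) * B4          # 6
H = (A3 - A1) * (B1 + B2)   # 4
I = (A2 - A4) * (B3 + B4)   # -8
J = (A1 + A4) * (B1 + B4)   # 15

C = [[E + I + J - G, D + G],
     [E + F, D + H + J - F]]
print(C)   # -> [[5, 5], [11, 11]], matching the direct product
```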
Strassen’s Matrix Multiplication
Strassen observed [1969] that the product of two matrices can be computed in general as follows:

    C00 C01       A00 A01       B00 B01
             =             *
    C10 C11       A10 A11       B10 B11

    M1 + M4 - M5 + M7    M3 + M5
=
    M2 + M4              M1 + M3 - M2 + M6
Formulas for Strassen’s Algorithm
M1 = (A00 + A11) × (B00 + B11)
M2 = (A10 + A11) × B00
M3 = A00 × (B01 - B11)
M4 = A11 × (B10 - B00)
M5 = (A00 + A01) × B11
M6 = (A10 - A00) × (B00 + B01)
M7 = (A01 - A11) × (B10 + B11)


Example 1:
A = 1 2    B = 1 1
    3 4        2 2
(A00 = 1, A01 = 2, A10 = 3, A11 = 4; B00 = 1, B01 = 1, B10 = 2, B11 = 2)

M1 = (A00 + A11) × (B00 + B11) = (1 + 4) * (1 + 2) = 15
M2 = (A10 + A11) × B00 = (3 + 4) * 1 = 7
M3 = A00 × (B01 - B11) = 1 * (1 - 2) = -1
M4 = A11 × (B10 - B00) = 4 * (2 - 1) = 4
M5 = (A00 + A01) × B11 = (1 + 2) * 2 = 6
M6 = (A10 - A00) × (B00 + B01) = (3 - 1) * (1 + 1) = 4
M7 = (A01 - A11) × (B10 + B11) = (2 - 4) * (2 + 2) = -8

C00 = M1 + M4 - M5 + M7 = 15 + 4 - 6 + (-8) = 5
C01 = M3 + M5 = -1 + 6 = 5
C10 = M2 + M4 = 7 + 4 = 11
C11 = M1 + M3 - M2 + M6 = 15 + (-1) - 7 + 4 = 11

C = 5  5
    11 11

Example 2:
A = 2 1    B = 5 2
    3 4        1 2

M1 = (A00 + A11) × (B00 + B11) = (2 + 4) * (5 + 2) = 42
M2 = (A10 + A11) × B00 = (3 + 4) * 5 = 35
M3 = A00 × (B01 - B11) = 2 * (2 - 2) = 0
M4 = A11 × (B10 - B00) = 4 * (1 - 5) = -16
M5 = (A00 + A01) × B11 = (2 + 1) * 2 = 6
M6 = (A10 - A00) × (B00 + B01) = (3 - 2) * (5 + 2) = 7
M7 = (A01 - A11) × (B10 + B11) = (1 - 4) * (1 + 2) = -9

C00 = M1 + M4 - M5 + M7 = 42 + (-16) - 6 + (-9) = 11
C01 = M3 + M5 = 0 + 6 = 6
C10 = M2 + M4 = 35 + (-16) = 19
C11 = M1 + M3 - M2 + M6 = 42 + 0 - 35 + 7 = 14

C = 11 6
    19 14
Solve
    1 0 2 1        0 1 0 1
A = 4 1 1 0    B = 2 1 0 4
    0 1 3 0        2 0 1 1
    5 0 2 1        1 3 5 0

Partition each matrix into four 2x2 blocks:
A1 = 1 0   A2 = 2 1       B1 = 0 1   B2 = 0 1
     4 1        1 0            2 1        0 4
A3 = 0 1   A4 = 3 0       B3 = 2 0   B4 = 1 1
     5 0        2 1            1 3        5 0

1. D = A1(B2 - B4)
     = 1 0  *  -1 0   =   -1 0
       4 1     -5 4       -9 4

2. E = A4(B3 - B1), and so on for the remaining products.
Analysis of Strassen’s Algorithm
If n is not a power of 2, matrices can be padded with zeros.

Number of multiplications:
M(n) = 7M(n/2), M(1) = 1

For n = 2^k:
M(2^k) = 7M(2^(k-1)) = 7[7M(2^(k-2))] = 7^2 M(2^(k-2)) = ... = 7^k M(2^(k-k)) = 7^k

Solution: M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807 vs. n^3 of the brute-force algorithm.
Advantages and Disadvantages
• A difficult problem is broken down into subproblems, and each subproblem is solved independently.
• It gives efficient algorithms like quicksort, merge sort, and Strassen's matrix multiplication.
• Subproblems can be executed on parallel processors.
Disadvantage
• It makes use of recursive methods, and the recursion is slow and complex.
Decrease-and-Conquer
The decrease and conquer technique is similar to the divide and
conquer technique, but instead of dividing the problem into
subinstances such as size n/2, the instance size is decreased by a
constant or by a constant factor.
There are three variations of decrease and conquer:
• Decrease by a constant
• Decrease by a constant factor
• Variable size decrease
The problems can be solved either top down (recursively)
or bottom up (without recursion).
Decrease by a constant
• In this variation, the size of an instance is reduced by the
same constant '1' on each iteration. So, if a problem is of
size 'n', then a subproblem of size 'n-1' is solved first, which
in turn requires the solution of a subproblem of size 'n-2',
and so on.
Decrease by a constant
Example: Consider the problem of computing a^n, where n is a positive integer exponent.
Let f(n) = a^n
a^n = a^(n-1) · a
    = a^(n-2) · a · a
    = a^(n-3) · a · a · a
    ...
    = a · a · a · ... (n times)

f(n) = f(n-1) · a   if n > 1
f(n) = a            if n = 1

The above definition is a recursive definition, i.e., a top-down approach.

Eg: Insertion sort, Depth First Search, Breadth First Search, Topological Sort
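The recursive definition above, as a short Python sketch (each call decreases the instance size by exactly one):

```python
# Decrease-by-one computation of a^n: f(n) = f(n-1) * a, f(1) = a.
def power(a, n):
    if n == 1:
        return a
    return power(a, n - 1) * a   # solve the size-(n-1) instance first

print(power(2, 10))   # -> 1024
```

Note the n multiplications this takes, in contrast to the halving variant discussed next.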
Decrease by a constant factor
• In this variation, the size of an instance is reduced by a
constant factor on each iteration (in most cases the factor is 2).
• So, if a problem of size 'n' is to be solved, first the
subproblem of size n/2 is solved, which in turn requires the
solution of the subproblem of size n/4, and so on.

Decrease by a constant factor
Example: Consider the problem of computing a^n.
The problem is halved each time (since the constant factor is 2): to solve a^n, first solve a^(n/2), but before that solve a^(n/4), and so on.

a^n = (a^(n/2))^2          if n is even and > 1
a^n = (a^((n-1)/2))^2 · a  if n is odd and > 1
a^n = a                    if n = 1
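The even/odd cases above, sketched in Python (each call halves the exponent, costing at most two multiplications per level):

```python
# Decrease-by-a-constant-factor (halving) computation of a^n.
def fast_power(a, n):
    if n == 1:
        return a
    if n % 2 == 0:
        half = fast_power(a, n // 2)
        return half * half              # a^n = (a^(n/2))^2
    half = fast_power(a, (n - 1) // 2)
    return half * half * a              # a^n = (a^((n-1)/2))^2 * a

print(fast_power(2, 10))   # -> 1024
```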
Decrease by a constant factor

The efficiency of this variation, i.e., decrease by a constant factor, is O(log n) because the size is reduced by at least one half at the expense of no more than two multiplications on each iteration.

Eg: Binary search, the method of bisection, and the fake-coin problem
Variable size decrease
In this variation, the reduction in the size of the problem
instance varies from one iteration to another.
Eg: Euclid's algorithm for computing the GCD of two numbers:

gcd(m, n) = gcd(n, m mod n)   if n > 0
gcd(m, n) = m                 if n = 0

Eg: Computing a median, Interpolation Search, and Binary Search Tree
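Euclid's rule above translates directly into code; note how much the instance shrinks varies from step to step, which is what makes this a variable-size decrease:

```python
# Euclid's algorithm: the instance (m, n) shrinks to (n, m mod n).
def gcd(m, n):
    if n == 0:
        return m
    return gcd(n, m % n)

print(gcd(60, 24))   # -> 12
```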
Insertion sort
 Insertion sort is a simple sorting algorithm that
works by iteratively building a sorted portion of the
list, one element at a time, by inserting each
unsorted element into its correct position within
the sorted part of the list.
 The algorithm is called "insertion sort" because it
involves inserting elements into their proper
positions.
Insertion sort
1. Start with the second element (index 1) of the list since the first
element (at index 0) is considered already sorted.
2. Compare the second element with the first element and swap
them if necessary to ensure that the two elements are in the
correct order (ascending or descending).
3. Move to the third element (index 2) and compare it with the
elements in the sorted portion on the left (elements with lower
indices). Shift larger elements to the right until the correct
position for the current element is found.
4. Repeat this process for all the remaining elements in the list
until the entire list is sorted.
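The four steps above can be written out as a short Python sketch: grow a sorted prefix one element at a time, shifting larger elements right to make room.

```python
# Insertion sort: insert each element into the sorted prefix to its left.
def insertion_sort(a):
    for i in range(1, len(a)):          # first element is "already sorted"
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:    # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # insert into its correct position
    return a

print(insertion_sort([25, 75, 40, 10, 20]))   # -> [10, 20, 25, 40, 75]
```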
Insertion sort Algorithm with Example
Topological Sorting
Definition of a Directed Acyclic Graph (DAG)
A dag: a directed acyclic graph, i.e., a directed graph with no (directed) cycles.


Based on the principle of the DAG, a specific ordering of vertices is possible. This
method of arranging the vertices in a specific manner is called Topological
Sort.
Vertices of a dag can be linearly ordered so that for every edge its starting
vertex is listed before its ending vertex (topological sorting). Being a dag is
also a necessary condition for topological sorting to be possible.
Topological Sorting

Topological sort is an algorithm used to linearly order the vertices


of a directed acyclic graph (DAG) in such a way that for every
directed edge (u, v), vertex u comes before vertex v in the
ordering.

Topological sorting Techniques


1. DFS Based Algorithm
2. Source Removal Algorithm
DFS-based Algorithm
DFS-based algorithm for topological sorting
Step 1: Perform a DFS and push each node onto a stack as its exploration finishes.
Step 2: Now pop the contents of the stack; vertices come off in the reverse of the order in which they were pushed.
Step 3: The list obtained this way is a topologically sorted list.
Example:

b a c d

e f g h
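The three steps above, sketched in Python. An assumption here: the graph is given as an adjacency dict mapping each vertex to its successors, and is a DAG.

```python
# DFS-based topological sort: push each vertex when its DFS call finishes,
# then read the stack in reverse push order.
def topological_sort_dfs(graph):
    visited, stack = set(), []

    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                dfs(w)
        stack.append(v)        # pushed when v's exploration completes

    for v in graph:
        if v not in visited:
            dfs(v)
    return stack[::-1]         # reverse push order = topological order

# Usage on a small DAG (hypothetical edges).
g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_sort_dfs(g))
```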
2. Source Removal Algorithm
This is a direct implementation of Decrease and Conquer Method
Algorithm follow these steps
1. From the given graph, find a vertex with no incoming edges.
Delete it along with all the edges outgoing from it.
2. Note the vertices in the order in which they are deleted.
3. These recorded vertices give the topologically sorted list.

Source Removal Algorithm
Repeatedly identify and remove a source (a vertex with no
incoming edges) and all the edges incident to it until either no
vertex is left or there is no source among the remaining
vertices (not a dag)
Example 1:
a b c d

e f g h
Efficiency: same as efficiency of the DFS-based algorithm, but how would you
identify a source? How do you remove a source from the dag?

Example 2: a graph with vertices a, b, c, d, e (figure not reproduced)
Source Removal Algorithm
Topological_Sort(G)
1. Find the indegree INDEG(n) of each node n of G.
2. Put in a queue Q all the nodes with zero indegree.
3. Repeat steps 4 and 5 until G becomes empty.
4. Remove the front element n of the queue Q and add it to T (set Front = Front + 1).
5. Repeat the following for each neighbour m of the node n:
   a) Set INDEG(m) = INDEG(m) - 1
   b) If INDEG(m) = 0, then add m to the rear end of Q.
6. Exit.
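The queue-based steps above (often called Kahn's algorithm) can be sketched in Python, again assuming an adjacency-dict representation:

```python
# Source-removal topological sort: compute indegrees, queue the sources,
# and repeatedly remove a source, decrementing its neighbours' indegrees.
from collections import deque

def topological_sort_source_removal(graph):
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1                          # step 1: indegrees
    q = deque(v for v in graph if indeg[v] == 0)   # step 2: zero-indegree nodes
    order = []
    while q:                                       # steps 3-5
        v = q.popleft()
        order.append(v)                            # add front element to T
        for w in graph[v]:
            indeg[w] -= 1                          # "delete" edge v -> w
            if indeg[w] == 0:
                q.append(w)                        # w became a source
    if len(order) != len(graph):
        raise ValueError("graph has a cycle (not a dag)")
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_sort_source_removal(g))
```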

Note: For Problems refer class notes

Questions???
1. Explain the divide and conquer technique.
2. Write an algorithm for merge sort and quick sort.
3. Apply the merge sort algorithm to sort the elements 8, 3, 2, 9, 7, 1, 5, 4.
4. Explain decrease and conquer. Or: What are the three major variations of the decrease and conquer technique?
5. Design an insertion sort algorithm and obtain its time complexity. Apply insertion sort on these elements: 25, 75, 40, 10, 20.
6. Explain Strassen's matrix multiplication and derive its time complexity.
7. Explain topological sorting with an example.
8. Apply the source removal method to obtain a topological sort for the given graph.
