
CHAMELI DEVI GROUP OF

INSTITUTIONS

Analysis and Design of Algorithm


UNIT-2

Rupanshi Patidar
Assistant Professor
CI Department
Study of Greedy strategy
• A greedy strategy is an algorithmic paradigm that
follows the problem-solving heuristic of making the
locally optimal choice at each stage with the hope
of finding a global optimum.
• In other words, at each step of the algorithm, it
makes the best decision without considering the
consequences of that decision on future steps.
Greedy algorithms are known for their simplicity
and efficiency in many scenarios, although they
don't always guarantee an optimal solution.
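As a quick illustration (not from the slides), the make-a-change problem shows this locally optimal choice in action; the coin set below is an assumed US-style denomination set, for which the greedy choice happens to be optimal:

```python
# Greedy make-a-change: at each step take the largest coin that still fits.
# For canonical coin systems (e.g. 1, 5, 10, 25) this greedy choice is
# optimal, but for arbitrary coin sets it may not be.
def greedy_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:          # locally optimal choice at each step
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # -> [25, 25, 10, 1, 1, 1]
```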
Study of Greedy strategy
Applications of Greedy Approach:
(1) Make-a-change problem
(2) Knapsack problem
(3) Minimum spanning tree
(4) Single-source shortest path
(5) Activity selection problem
(6) Job sequencing problem
(7) Huffman code generation
(8) Dijkstra's algorithm
(9) Greedy coloring
(10) Minimum cost spanning tree
(11) Job scheduling
(12) Interval scheduling
(13) Greedy set cover
(14) Knapsack with fractions
Characteristic components of greedy
algorithm:

• The feasible solution: A subset of the given inputs that
satisfies all the specified constraints of a problem is
known as a “feasible solution”.
• Optimal solution: The feasible solution that achieves
the desired extreme is called an “optimal solution”. In
other words, the feasible solution that either
minimizes or maximizes the objective function
specified in a problem is known as an “optimal
solution”.
Characteristic components of greedy
algorithm:

• Feasibility check: It investigates whether the selected
input fulfils all the constraints mentioned in a problem.
If it fulfils all the constraints, it is added to the set of
feasible solutions; otherwise, it is rejected.
• Optimality check: It investigates whether a selected input
produces either a minimum or maximum value of the
objective function while fulfilling all the specified constraints.
If an element in a solution set produces the desired
extreme, it is added to the set of optimal solutions.
Characteristic components of greedy
algorithm:

• Optimal substructure property: The globally optimal
solution to a problem includes the optimal solutions
to its sub-problems within it.
• Greedy choice property: The globally optimal
solution is assembled by making locally optimal
choices. The greedy approach applies some locally
optimal criterion to obtain a partial solution that
seems best at the moment, and then solves the
remaining sub-problem.
Optimal merge pattern
• The optimal merge pattern concerns merging two or
more sorted files into a single sorted file. This type of
merging can be done by the two-way merging method.
• If we have two sorted files containing n and m
records respectively, then they can be merged into
one sorted file in O(n + m) time.
Optimal merge pattern
• Let us consider the given files f1, f2, f3, f4 and f5 with 20, 30,
10, 5 and 30 elements respectively.
• If merge operations are performed according to the
provided sequence, then
• M1 = merge f1 and f2 => 20 + 30 = 50
• M2 = merge M1 and f3 => 50 + 10 = 60
• M3 = merge M2 and f4 => 60 + 5 = 65
• M4 = merge M3 and f5 => 65 + 30 = 95
• Hence, the total number of operations is
• 50 + 60 + 65 + 95 = 270
Optimal merge pattern example
• Given a set of unsorted files: 20, 30, 10, 5, 30
• Now, arrange these elements in ascending order: 5,
10, 20, 30, 30.
• After this, pick the two smallest numbers and repeat
until only one number is left.
Now follow following steps:
Optimal merge pattern example
Step 1:
Optimal merge pattern example
Step 2:
Optimal merge pattern example
Step 3:
Optimal merge pattern example

Step 4:

Hence, the solution takes 15 + 35 + 60 + 95 = 205 merge operations.


Optimal merge pattern example 2

Input: n = 6, size = {2, 3, 4, 5, 6, 7}
Output: 68
Explanation: The optimal way to combine these files
costs 5 + 9 + 11 + 16 + 27 = 68.
Optimal merge pattern example 3

• Given a set of unsorted files: 5, 3, 2, 7, 9, 13


• Now, arrange these elements in ascending
order: 2, 3, 5, 7, 9, 13
Optimal merge pattern example 3

• So, the merging cost = 5 + 10 + 16 + 23 + 39 = 93
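All three worked examples can be reproduced with a min-heap; a minimal Python sketch of the two-way merge strategy:

```python
import heapq

# Optimal merge pattern: repeatedly merge the two smallest files.
def optimal_merge_cost(sizes):
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # smallest file
        b = heapq.heappop(heap)   # second smallest file
        total += a + b            # cost of this merge
        heapq.heappush(heap, a + b)
    return total

print(optimal_merge_cost([20, 30, 10, 5, 30]))  # -> 205
print(optimal_merge_cost([2, 3, 4, 5, 6, 7]))   # -> 68
print(optimal_merge_cost([5, 3, 2, 7, 9, 13]))  # -> 93
```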
Huffman Coding
• Huffman Coding is a technique of compressing
data to reduce its size without losing any of
the details. It was first developed by David
Huffman.
• Huffman Coding is generally useful to
compress the data in which there are
frequently occurring characters.
Algorithm

Algorithm Huffman(C)
{
    n = |C|
    Q = C                      // min-priority queue keyed on frequency f
    for i = 1 to n - 1
    {
        temp = GetNode()
        left[temp] = GetMin(Q)
        right[temp] = GetMin(Q)
        a = left[temp]; b = right[temp]
        f[temp] = f[a] + f[b]
        Insert(Q, temp)
    }
    return GetMin(Q)           // root of the Huffman tree
}
Example

character Frequency
a 5
b 9
c 12
d 13
e 16
f 45
Example
Step 1: Build a min heap that contains 6 nodes, where
each node represents the root of a tree with a single node.
Step 2: Extract the two minimum-frequency nodes from the min
heap. Add a new internal node with frequency
5 + 9 = 14.
Example
• Now min heap contains 5 nodes where 4 nodes are
roots of trees with single element each, and one
heap node is root of tree with 3 elements

character Frequency
c 12
d 13
Internal node 14
e 16
f 45
Example
• Step 3: Extract two minimum frequency nodes from
heap. Add a new internal node with frequency 12 +
13 = 25
Example
• Now min heap contains 4 nodes where 2 nodes are
roots of trees with single element each, and two
heap nodes are root of tree with more than one
nodes

character Frequency
Internal Node 14
e 16
Internal Node 25
f 45
Example
• Step 4: Extract two minimum frequency nodes. Add a
new internal node with frequency 14 + 16 = 30
Example
• Now min heap contains 3 nodes.
character Frequency
Internal Node 25
Internal Node 30
f 45

• Step 5: Extract two minimum frequency nodes. Add a


new internal node with frequency 25 + 30 = 55
Example
• Now min heap contains 2 nodes.

character Frequency
f 45
Internal node 55

• Step 6: Extract two minimum frequency nodes. Add a


new internal node with frequency 45 + 55 = 100
Example
• Now min heap contains only one node.
character Frequency
Internal Node 100

Since the heap contains only one node, the algorithm stops here.
Steps to print codes from Huffman Tree:
• Traverse the tree formed starting from the root.
• Maintain an auxiliary array.
• While moving to the left child, write 0 to the array.
• While moving to the right child, write 1 to the array.
• Print the array when a leaf node is encountered.
Example
• The codes are as follows:

character code-word
f 0
c 100
d 101
a 1100
b 1101
e 111
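The Huffman tree above can be rebuilt with a min-heap; a minimal Python sketch (tie-breaking inside the heap can flip 0/1 labels, so the check below is on code lengths and total cost rather than exact bit patterns):

```python
import heapq
import itertools

# Compute Huffman code lengths for the example frequencies above.
# Each heap entry carries a map from character to its depth in the tree;
# merging two trees pushes every character one level deeper.
def huffman_code_lengths(freqs):
    counter = itertools.count()            # tie-breaker for equal frequencies
    heap = [(f, next(counter), {ch: 0}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {ch: d + 1 for ch, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (fa + fb, next(counter), merged))
    return heap[0][2]

freqs = {'a': 5, 'b': 9, 'c': 12, 'd': 13, 'e': 16, 'f': 45}
lengths = huffman_code_lengths(freqs)
print(lengths)  # -> {'f': 1, 'c': 3, 'd': 3, 'a': 4, 'b': 4, 'e': 3}
print(sum(freqs[ch] * lengths[ch] for ch in freqs))  # -> 224
```

The code lengths match the table above (f gets 1 bit, c/d/e get 3, a/b get 4), and the total cost 224 agrees with the count used in Example 3.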
Huffman Coding
• Time complexity: O(n log n)
• Space complexity: O(n)
Applications of Huffman Coding:
• They are used for transmitting fax and text.
• They are used by conventional compression formats
like PKZIP, GZIP, etc.
• Multimedia codecs like JPEG, PNG, and MP3 use
Huffman encoding (to be more precise, prefix
codes).
Huffman Coding Example : 1
Que – 2. How many bits may be required for encoding
the message ‘mississippi’?
• Solution: Following is the frequency table of
characters in ‘mississippi’ in non-decreasing order of
frequency:
Huffman Coding Example : 1

• Total number of bits = freq(m) * code length(m) +
freq(p) * code length(p) + freq(s) * code length(s) +
freq(i) * code length(i) = 1*3 + 2*3 + 4*2 + 4*1 = 21
• Also, average bits per character = total number of bits
required / total number of characters = 21/11 ≈ 1.909
Huffman Coding Example : 2
Huffman Coding Example : 3
• Que – A networking company uses a compression technique
to encode the message before transmitting over the network.
Suppose the message contains the following characters with
their frequency:
Huffman Coding Example : 3
• Note that each character in input message takes 1 byte. If the
compression technique used is Huffman Coding, how many
bits will be saved in the message? (A) 224 (B) 800 (C) 576 (D)
324
Huffman Coding Example : 3
• Solution: Without Huffman coding:
  Total number of characters = sum of frequencies = 100
  Size of 1 character = 1 byte = 8 bits
  Total number of bits = 8 * 100 = 800
• Using Huffman encoding, the total number of bits needed:
  5*4 + 9*4 + 12*3 + 13*3 + 16*3 + 45*1 = 224
• Bits saved = 800 - 224 = 576. Answer: (C).
Spanning tree - A spanning tree is a sub-graph of an
undirected connected graph that includes all the vertices
of the graph and is itself connected and acyclic.
Minimum Spanning tree - A minimum spanning tree can be
defined as the spanning tree in which the sum of the
weights of the edges is minimum. The weight of the
spanning tree is the sum of the weights given to the edges
of the spanning tree.
• A minimum spanning tree (MST) or minimum weight
spanning tree for a weighted, connected, undirected
graph is a spanning tree with a weight less than or equal
to the weight of every other spanning tree.
Kruskal’s Algorithm
• Kruskal's algorithm is a minimum spanning
tree algorithm that takes a graph as input and finds
the subset of the edges of that graph which forms a
tree including every vertex and has the minimum sum
of weights among all the trees that can be formed
from the graph.
Kruskal’s Algorithm
How Kruskal's algorithm works
We start from the edges with the lowest weight and keep adding
edges until we reach our goal.
The steps for implementing Kruskal's algorithm are as follows:
• Sort all the edges from low weight to high
• Take the edge with the lowest weight and add it to the
spanning tree. If adding the edge created a cycle, then reject
this edge.
• Keep adding edges until we reach all vertices.
Kruskal’s Algorithm
Weight Source Destination

1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5
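A union-find sketch of Kruskal's algorithm on the edge list from the table above (weight, source, destination); the resulting MST weight matches the 37 computed for this same graph in the Prim's algorithm example:

```python
# Kruskal's algorithm with union-find on the edge list from the table.
edges = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5),
         (6, 8, 6), (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2),
         (9, 3, 4), (10, 5, 4), (11, 1, 7), (14, 3, 5)]

def kruskal(n, edges):
    parent = list(range(n))
    def find(x):                      # find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst_weight = 0
    for w, u, v in sorted(edges):     # edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # no cycle is formed: accept the edge
            parent[ru] = rv
            mst_weight += w
    return mst_weight

print(kruskal(9, edges))  # -> 37
```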
Kruskal’s Algorithm

Start with a weighted graph

Choose the edge with the least weight; if there are more than one,
choose any one
Kruskal’s Algorithm

Choose the next shortest edge and add it

Choose the next shortest edge that doesn't create a cycle and add it
Kruskal’s Algorithm

Choose the next shortest edge that doesn't create a cycle and add it

Repeat until you have a spanning tree


Kruskal’s Algorithm
Kruskal's Algorithm Complexity
• The time complexity of Kruskal's algorithm
is O(E log E).

Kruskal's Algorithm Applications
• To lay out electrical wiring
• In computer networks (LAN connections)
Prim's Algorithm
• Prim's algorithm is a minimum spanning
tree algorithm that takes a graph as input and finds
the subset of the edges of that graph which forms a
tree including every vertex and has the minimum sum
of weights among all the trees that can be formed
from the graph.
• Prim's algorithm finds the subset of edges that
includes every vertex of the graph such that the sum
of the weights of the edges is minimized.
Prim's Algorithm
• How does Prim's Algorithm work?
Step 1: Determine an arbitrary vertex as the starting vertex of the
MST.
Step 2: Follow steps 3 to 5 while there are vertices not yet
included in the MST (known as fringe vertices).
Step 3: Find edges connecting any tree vertex with the fringe
vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any
cycle.
Step 6: Return the MST and exit.
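The steps above can be sketched with a min-heap, run here on the same 9-vertex edge list given in the Kruskal table (weight, source, destination); the MST weight 37 matches the value stated for this graph:

```python
import heapq

# Prim's algorithm with a min-heap, starting from vertex 0.
edge_list = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5),
             (6, 8, 6), (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2),
             (9, 3, 4), (10, 5, 4), (11, 1, 7), (14, 3, 5)]

def prim(n, edge_list, start=0):
    adj = [[] for _ in range(n)]
    for w, u, v in edge_list:              # build undirected adjacency lists
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    heap = [(0, start)]                    # (edge weight, fringe vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue                       # this edge would form a cycle
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

print(prim(9, edge_list))  # -> 37
```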
Prim's Algorithm
• Illustration of Prim’s Algorithm:

Start with a weighted graph


Prim's Algorithm

Choose a vertex

Choose the shortest edge from this vertex and add it


Prim's Algorithm

Choose the nearest vertex not yet in the solution

Choose the nearest edge not yet in the solution, if there are multiple
choices, choose one at random
Prim's Algorithm

Repeat until you have a spanning tree


Prim's Algorithm

The final structure of the MST is as follows and the weight of the edges
of the MST is (4 + 8 + 1 + 2 + 4 + 2 + 7 + 9) = 37.
Prim's Algorithm
Prim's Algorithm Complexity
• The time complexity of Prim's algorithm is O(E log V).

Prim's Algorithm Applications
• Laying cables of electrical wiring
• In network design
• In designing network protocols
Applications of Minimum
Spanning Trees
• Network design: Spanning trees can be used in network design to
find the minimum number of connections required to connect all
nodes. Minimum spanning trees, in particular, can help minimize
the cost of the connections by selecting the cheapest edges.
• Image processing: Spanning trees can be used in image
processing to identify regions of similar intensity or color, which
can be useful for segmentation and classification tasks.
• Biology: Spanning trees and minimum spanning trees can be used
in biology to construct phylogenetic trees to represent
evolutionary relationships among species or genes.
• Social network analysis: Spanning trees and minimum spanning
trees can be used in social network analysis to identify important
connections and relationships among individuals or groups.
Knapsack Problem
• Given the weights and profits of N items, in the form
of {profit, weight} put these items in a knapsack of
capacity W to get the maximum total profit in the knapsack.
In Fractional Knapsack, we can break items to maximize
the total value of the knapsack.
Knapsack Algorithm
Algorithm
• Consider all the items with their weights and profits
mentioned respectively.
• Calculate Pi/Wi of all the items and sort the items in
descending order based on their Pi/Wi values.
• Without exceeding the limit, add the items into the
knapsack.
• If the knapsack can still store some weight, but the weights
of the remaining items exceed the limit, the fractional part of the
next item can be added.
• Hence, giving it the name fractional knapsack problem.
Knapsack Problem Example
• For the given set of items and the knapsack capacity
of 10 kg, find the subset of the items to be added in
the knapsack such that the profit is maximum.
Items 1 2 3 4 5
Weights (in kg) 3 3 2 5 1
Profits 10 15 10 20 8
Knapsack Problem Example
Solution:
Step 1
• Given, n = 5
• Wi = {3, 3, 2, 5, 1}
• Pi = {10, 15, 10, 20, 8}
• Calculate Pi/Wi for all the items
Items 1 2 3 4 5
Weights (in kg) 3 3 2 5 1
Profits 10 15 10 20 8
Pi/Wi 3.3 5 5 4 8
Knapsack Problem Example
• Step 2
• Arrange all the items in descending order based on
Pi/Wi .
Items 5 2 3 4 1
Weights (in kg) 1 3 2 5 3
Profits 8 15 10 20 10
Pi/Wi 8 5 5 4 3.3
Knapsack Problem Example
Step 3
• Without exceeding the knapsack capacity, insert the
items in the knapsack with maximum profit.
• Knapsack = {5, 2, 3}
• However, the knapsack can still hold 4 kg weight, but
the next item having 5 kg weight will exceed the
capacity. Therefore, only 4 kg weight of the 5 kg will
be added in the knapsack.
Knapsack Problem Example

Items 5 2 3 4 1
Weights (in kg) 1 3 2 5 3
Profits 8 15 10 20 10
Knapsack 1 1 1 4/5 0

Total Profit = 8 + 15 + 10 + (4/5) * 20 = 49
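The worked example above can be sketched directly in Python, using the weights and profits from the tables (capacity 10 kg):

```python
# Fractional knapsack: sort items by profit/weight ratio, highest first,
# take whole items while they fit, then a fraction of the next item.
def fractional_knapsack(weights, profits, capacity):
    items = sorted(zip(weights, profits),
                   key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for w, p in items:
        if capacity >= w:              # the whole item fits
            capacity -= w
            total += p
        else:                          # take the fractional part and stop
            total += p * (capacity / w)
            break
    return total

print(fractional_knapsack([3, 3, 2, 5, 1], [10, 15, 10, 20, 8], 10))  # -> 49.0
```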
Knapsack Problem Example
Job sequencing with deadlines
• Job scheduling algorithm is applied to schedule the
jobs on a single processor to maximize the profits.

• The greedy approach of the job scheduling algorithm


states that, “Given ‘n’ number of jobs with a starting
time and ending time, they need to be scheduled in
such a way that maximum profit is received within
the maximum deadline”.
Job sequencing with deadlines
Task Deadline Marks
Citronics 5 3
Assignment 1 10
Quiz 2 10
Files 3 5
Sinchan 3 2
Internals 4 10
External 5 20
Mentorship 1 1
Job Scheduling Algorithm
• Set of jobs with deadlines and profits are taken as an input
with the job scheduling algorithm and scheduled subset of
jobs with maximum profit are obtained as the final output.
Algorithm
• Step 1 − Find the maximum deadline value from the input
set of jobs.
• Step 2 − Once the maximum deadline is known, arrange the
jobs in descending order of their profits.
• Step 3 − Select the jobs with the highest profits, with their
time slots not exceeding the maximum deadline.
• Step 4 − The selected set of jobs is the output.
Job sequencing with deadlines
Examples
Consider the following tasks with their deadlines and profits.
Schedule the tasks in such a way that they produce maximum
profit after being executed −
Jobs Deadlines Profits
Job 1 5 200
Job 2 3 180
Job 3 3 190
Job 4 2 300
Job 5 4 120
Job 6 2 100
Job sequencing with deadlines
Solution:
Step 1:
• Sort the jobs in decreasing order of their profit.
Jobs Deadlines Profits
Job 4 2 300
Job 1 5 200
Job 3 3 190
Job 2 3 180
Job 5 4 120
Job 6 2 100
Job sequencing with deadlines
• As you can see from the table above, the jobs have
been sorted in descending order of their profit.
Step 2:
• Here we can see that value of the maximum deadline
is 5.
Its Gantt chart will be :
Job sequencing with deadlines
Step 3
• Now, pick the jobs one by one as presented in step, 1, and
place them on the Gantt chart as far as possible from 0.
• We will pick Job 4. Its deadline is 2. So placing the job in the
empty slot available just before the deadline.
Job sequencing with deadlines
• We will now pick Job 1. Its deadline is 5. So placing
the job in the empty slot available just before the
deadline.
Job sequencing with deadlines
• We will now pick Job 3. Its deadline is 3. So placing
the job in the empty slot available just before the
deadline.
Job sequencing with deadlines
• We will now pick Job 2. Its deadline is 3. Here the second
and third slots are already filled, so place Job 2 in the
next available free slot before its deadline, i.e., the first
slot.
Job sequencing with deadlines
• We will now pick Job 5. Its deadline is 4. Place the job
in the first empty slot before the deadline, i. e fourth
slot.
Job sequencing with deadlines
• We will now pick Job 6. Its deadline is 2. Now we need
to place the job in the first empty slot before the
deadline. Since, no such slot is available, hence Job 6
can not be completed.
• So, the most optimal sequence of jobs to maximize
profit is Job 2, Job 4, Job 3, Job 5, and Job 1.
• And the maximum profit earned can be calculated as:
• Profit of Job 2 + Profit of Job 4 + Profit of Job 3 + profit
of Job 5 + profit of Job 1
=180+300+190+120+200=990
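The worked example above can be sketched in Python; jobs are (name, deadline, profit) triples taken from the table, and each job is greedily placed in the latest free slot on or before its deadline:

```python
# Job sequencing with deadlines: take jobs in decreasing profit order
# and place each in the latest free slot on or before its deadline.
jobs = [("Job 1", 5, 200), ("Job 2", 3, 180), ("Job 3", 3, 190),
        ("Job 4", 2, 300), ("Job 5", 4, 120), ("Job 6", 2, 100)]

def schedule(jobs):
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)        # slots[1..max_deadline]
    for name, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        for t in range(deadline, 0, -1):       # try the latest slot first
            if slots[t] is None:
                slots[t] = (name, profit)
                break                          # job placed; next job
    scheduled = [s for s in slots[1:] if s is not None]
    return [n for n, _ in scheduled], sum(p for _, p in scheduled)

order, profit = schedule(jobs)
print(order)   # -> ['Job 2', 'Job 4', 'Job 3', 'Job 5', 'Job 1']
print(profit)  # -> 990
```

Job 6 finds no free slot before its deadline and is rejected, exactly as in the step-by-step Gantt chart above.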
Examples
• Input: Four Jobs with following deadlines and
profits
• JobID Deadline Profit
a 4 20
b 1 10
c 1 40
d 1 30
• Output: Following is maximum profit sequence
of jobs: c, a
Single source shortest path
algorithm
Dijkstra's Algorithm:

• Dijkstra’s algorithm is a popular algorithm for solving
single-source shortest path problems with non-negative
edge weights in graphs, i.e., it finds the shortest
distance between two vertices of a graph. It was conceived
by the Dutch computer scientist Edsger W. Dijkstra in 1956.
Single source shortest path
algorithm
Dijkstra's Algorithm:

• The algorithm maintains a set of visited vertices and a set of


unvisited vertices. It starts at the source vertex and iteratively
selects the unvisited vertex with the smallest tentative
distance from the source. It then visits the neighbors of this
vertex and updates their tentative distances if a shorter path
is found. This process continues until the destination vertex is
reached, or all reachable vertices have been visited.
Dijkstra's Algorithm with an
Example
• Step 1: First, we will mark the source node with a current
distance of 0 and set the rest of the nodes to INFINITY.
• Step 2: We will then set the unvisited node with the
smallest current distance as the current node, suppose X.
• Step 3: For each neighbor N of the current node X: We will
then add the current distance of X with the weight of the
edge joining X-N. If it is smaller than the current distance
of N, set it as the new current distance of N.
• Step 4: We will then mark the current node X as visited.
• Step 5: We will repeat the process from 'Step 2' if there is
any unvisited node left in the graph.
Dijkstra's Algorithm with an
Example 1
• The Distance from the source node to itself is 0. In this
example the source node is 0.
• The distance from the source node to all other node is
unknown so we mark all of them as infinity.
Example: 0 -> 0, 1-> ∞,2-> ∞,3-> ∞,4-> ∞,5-> ∞,6-> ∞.
• we’ll also have an array of unvisited elements that will keep
track of unvisited or unmarked Nodes.
• Algorithm will complete when all the nodes marked as visited
and the distance between them added to the path. Unvisited
Nodes:- 0 1 2 3 4 5 6.
Dijkstra's Algorithm with an
Example 1
• Step 1: Start from Node 0 and mark it as visited (in the
accompanying figure, visited nodes are marked red).
Dijkstra's Algorithm with an
Example 1
• Step 2: Check the adjacent nodes. Now we have two
choices (either choose Node 1 with distance 2 or
choose Node 2 with distance 6), and we choose the
node with the minimum distance. In this step Node 1 is
the minimum-distance adjacent node, so mark it as
visited and add up the distance.
• Distance: Node 0 -> Node 1 = 2
Dijkstra's Algorithm with an
Example 1
• Step 3: Then move forward and check the adjacent node,
which is Node 3; mark it as visited and add up the
distance. Now the distance will be:
• Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7
Dijkstra's Algorithm with an
Example 1
• Step 4: Again we have two choices for adjacent
nodes (either Node 4 with distance 10 or Node 5 with
distance 15), so choose the node with the minimum
distance. In this step Node 4 is the minimum-distance
adjacent node, so mark it as visited and add up the
distance.
• Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2
+ 5 + 10 = 17
Dijkstra's Algorithm with an
Example 1
• Step 5: Again, move forward and check the adjacent node,
which is Node 6; mark it as visited and add up the
distance. Now the distance will be:
• Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 =
2 + 5 + 10 + 2 = 19
Dijkstra's Algorithm with an
Example 1
• So, the shortest distance from the source
vertex is 19, which is the optimal one.
Dijkstra's Algorithm with an
Example 2
Dijkstra's Algorithm with an
Example 3
Hence, the final paths we concluded are:
• A=0
• B = 4 (A -> B)
• C = 5 (A -> C)
• D = 4 + 9 = 13 (A -> B -> D)
• E = 5 + 3 = 8 (A -> C -> E)
• F = 5 + 3 + 6 = 14 (A -> C -> E -> F)
Dijkstra's Algorithm with an
Example 4
Dijkstra's Algorithm with an
Example 4
• Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5-
>4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
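Example 4's output can be reproduced with a heap-based sketch. The slide's figure is not included here, so the edge list below is an assumption: it is the classic 9-vertex graph commonly paired with exactly these distances:

```python
import heapq

# Dijkstra with a min-heap. Edge list (u, v, w) is assumed to be the
# classic 9-vertex graph consistent with Example 4's stated distances.
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]

def dijkstra(n, edges, source=0):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:                  # undirected graph
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale heap entry; skip
        for v, w in adj[u]:
            if d + w < dist[v]:            # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(9, edges))  # -> [0, 4, 12, 19, 21, 11, 9, 8, 14]
```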
Dijkstra's Algorithm: Pseudo code:
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to priority queue Q
    distance[S] <- 0

    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
Applications of Greedy Algorithms:
• Finding an optimal solution (Activity
selection, Fractional Knapsack, Job
Sequencing, Huffman Coding).
• Finding close to the optimal solution for NP-Hard
problems like TSP.
• Network design: Greedy algorithms can be used to
design efficient networks, such as minimum spanning
trees, shortest paths, and maximum flow networks.
These algorithms can be applied to a wide range of
network design problems, such as routing, resource
allocation, and capacity planning.
Applications of Greedy Algorithms:
• Machine learning: Greedy algorithms can be used in machine
learning applications, such as feature selection, clustering, and
classification. In feature selection, greedy algorithms are used
to select a subset of features that are most relevant to a given
problem. In clustering and classification, greedy algorithms can
be used to optimize the selection of clusters or classes.
• Image processing: Greedy algorithms can be used to solve a
wide range of image processing problems, such as image
compression, denoising, and segmentation. For example,
Huffman coding is a greedy algorithm that can be used to
compress digital images by efficiently encoding the most
frequent pixels.
Applications of Greedy Algorithms:

• Combinatorial optimization: Greedy algorithms can be used to solve
combinatorial optimization problems, such as the traveling salesman
problem, graph coloring, and scheduling. Although these problems are
typically NP-hard, greedy algorithms can often provide close-to-optimal
solutions that are practical and efficient.
• Game theory: Greedy algorithms can be used in game theory
applications, such as finding the optimal strategy for games like chess or
poker. In these applications, greedy algorithms can be used to identify
the most promising moves or actions at each turn, based on the current
state of the game.
• Financial optimization: Greedy algorithms can be used in financial
applications, such as portfolio optimization and risk management. In
portfolio optimization, greedy algorithms can be used to select a subset
of assets that are most likely to provide the best return on investment,
based on historical data and current market trends.
