Greedy Method → Principle: - Locally Optimal Choice: selecting the most favorable option at each step without considering the overall problem. -
Example: In the "coin change" problem, choosing the largest denomination coin available first.
- No Reconsideration: Once a decision is made, it remains final; there's no reconsideration or
backtracking. - Example: In the "activity selection" problem, after picking an activity, it's
scheduled and moves to the next available activity. Control Abstraction: - High-Level
Perspective: Greedy algorithms abstract the problem-solving approach by focusing on the
decision-making flow rather than intricate details. - Example: In Huffman coding, prioritize
merging the least frequent characters first without considering future implications. - Identify
Problem & Criteria: Understanding the problem requirements and establishing the rule for
making locally optimal choices. - Example: In Dijkstra's algorithm for shortest path, selecting
the next node based on the shortest known path length. - Systematic Procedure: Implement a
step-by-step approach for decision-making without backtracking, ensuring each step
contributes to the solution. - Example: Prim's algorithm for minimum spanning tree repeatedly
selects the minimum-weight edge connecting the tree built so far to a vertex not yet included.
Time Analysis of Control Abstraction: - Efficient Time Complexity: Greedy algorithms often
exhibit efficient complexities, such as linear or polynomial, especially suitable for large
datasets. - Example: Kruskal's algorithm for minimum spanning tree generally operates in O(E
log V) time. - Simple Structure: Typically, Greedy algorithms involve straightforward
operations or iteration through elements, leading to efficient time complexities. - Example:
Fractional Knapsack problem selects items based on maximum value-to-weight ratio in a single
pass. - Sum of Individual Steps' Time Complexities: The overall time complexity is the sum of
complexities of each step, leading to an aggregate computational cost. - Example: In Huffman
coding, building the tree performs n - 1 merges, each costing O(log n) with a min-heap, for
O(n log n) overall.
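To make the single-pass idea concrete, here is a minimal Python sketch of Fractional Knapsack selection (the item data and function name are illustrative, not from the notes):

def fractional_knapsack(items, capacity):
    # items: (weight, value) pairs; take whole items in decreasing
    # value-to-weight order, then a fraction of the first that doesn't fit.
    total = 0.0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # Output: 240.0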
Job Scheduling Problem Explanation: The Job Scheduling problem involves scheduling a
set of jobs with respective start and finish times while maximizing the number of jobs
completed. Example: Consider the following jobs with their start and finish times:
| Job | Start Time | Finish Time |
|-----|------------|-------------|
| A   | 1          | 3           |
| B   | 2          | 5           |
| C   | 4          | 6           |
| D   | 6          | 7           |
Pseudocode for Job Scheduling:
Function jobScheduling(jobs[])
    Sort jobs by finish times in ascending order
    selectedJobs = []
    lastFinishTime = -infinity
    for each job in jobs
        if job.start >= lastFinishTime
            add job to selectedJobs
            lastFinishTime = job.finish
    return selectedJobs
Step-by-Step Explanation: 1. Sorting: - Sort jobs based on their finish times in ascending
order. 2. Iteration: - Traverse through each job in the sorted order. 3. Selecting Jobs: - If the
job's start time is greater than or equal to the finish time of the last selected job, select it. -
Update the last finish time accordingly. 4. Storing Selected Jobs: - Store the selected jobs in a list.
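A runnable Python sketch of this procedure, using the jobs from the table above (the function name is illustrative):

def job_scheduling(jobs):
    # Sort by finish time, then keep every job that starts no earlier
    # than the finish time of the last selected job.
    selected = []
    last_finish = float("-inf")
    for name, start, finish in sorted(jobs, key=lambda j: j[2]):
        if start >= last_finish:
            selected.append(name)
            last_finish = finish
    return selected

jobs = [("A", 1, 3), ("B", 2, 5), ("C", 4, 6), ("D", 6, 7)]
print(job_scheduling(jobs))  # Output: ['A', 'C', 'D']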
Activity Selection Problem Explanation: The Activity Selection problem involves selecting
a maximum number of non-overlapping activities, given their start and finish times. Example:
Consider the following activities with their start and finish times:
| Activity | Start Time | Finish Time |
|----------|------------|-------------|
| A        | 1          | 4           |
| B        | 3          | 5           |
| C        | 0          | 6           |
| D        | 5          | 7           |
| E        | 3          | 8           |
| F        | 5          | 9           |
| G        | 6          | 10          |
| H        | 8          | 11          |
Pseudocode for Activity Selection:
Function activitySelection(activities[])
    Sort activities by finish times in ascending order
    selectedActivities = []
    lastFinishTime = -infinity
    for each activity in activities
        if activity.start >= lastFinishTime
            add activity to selectedActivities
            lastFinishTime = activity.finish
    return selectedActivities
Step-by-Step Explanation: 1. Sorting: - Sort activities based on their finish times in ascending
order. 2. Iteration: - Traverse through each activity in the sorted order. 3. Selecting Activities:
- If the activity's start time is greater than or equal to the finish time of the last selected
activity, select it. - Update the last finish time accordingly. 4. Storing Selected Activities: - Store the
selected activities in a list.
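The same greedy rule in Python, applied to the activities in the table above (function name illustrative):

def activity_selection(activities):
    # Identical rule to job scheduling: earliest finish time first.
    selected = []
    last_finish = float("-inf")
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:
            selected.append(name)
            last_finish = finish
    return selected

activities = [("A", 1, 4), ("B", 3, 5), ("C", 0, 6), ("D", 5, 7),
              ("E", 3, 8), ("F", 5, 9), ("G", 6, 10), ("H", 8, 11)]
print(activity_selection(activities))  # Output: ['A', 'D', 'H']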
Dynamic Programming → Principle: - Optimal Substructure: an optimal solution to the problem
can be composed from optimal solutions to its subproblems. - Overlapping Subproblems: the
same subproblems recur; each is solved just once and its solution is stored to avoid redundant
computation. - Memoization: storing the solutions of subproblems in a table or cache to be
utilized when needed. Control Abstraction: - Top-Down (Memoization): Recursive approach
breaking down the problem into smaller subproblems and storing solutions in a memoization
table to avoid recalculating them. - Bottom-Up
(Tabulation): Iterative approach solving smaller subproblems first and using their solutions to
build up to the final solution. Time Analysis of Control Abstraction: - Memoization (Top-
Down): - Time Complexity: Depends on the number of unique subproblems solved. - Space
Complexity: Utilizes space for memoization tables proportional to the number of
subproblems. - Tabulation (Bottom-Up): - Time Complexity: Depends on the number of
iterations or steps required to solve the problem. - Space Complexity: Utilizes space for tables
or arrays to store intermediate results, often proportional to the size of the input.
Python Program: Fibonacci Sequence using Memoization (Top-Down):
def fibonacci(n, memo={}):
    # The shared default dict acts as a cache that persists across calls.
    if n in memo:
        return memo[n]
    if n <= 2:
        return 1
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
result = fibonacci(6)
print(result) # Output: 8
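For comparison with the Bottom-Up (Tabulation) approach described above, the same computation can be sketched iteratively (function name is illustrative):

def fibonacci_tab(n):
    # Fill a table from the smallest subproblems upward.
    if n <= 2:
        return 1
    table = [0] * (n + 1)
    table[1] = table[2] = 1
    for i in range(3, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fibonacci_tab(6))  # Output: 8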
Binomial Coefficients: The binomial coefficient "n choose k" represents the number of ways
to choose k elements from a set of n distinct items without regard to the order. It can be
calculated using dynamic programming with Pascal's Triangle or using combinatorial formulas.
Pseudocode for Calculating Binomial Coefficients:
Function binomialCoefficient(n, k)
    Initialize a 2D array C[n+1][k+1]
    for i = 0 to n
        for j = 0 to min(i, k)
            if j equals 0 or j equals i
                C[i][j] = 1
            else
                C[i][j] = C[i-1][j-1] + C[i-1][j]
    return C[n][k]
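A direct Python translation of this Pascal's Triangle approach (the values in the demo call are illustrative):

def binomial_coefficient(n, k):
    # C[i][j] holds "i choose j", built row by row.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial_coefficient(5, 2))  # Output: 10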
Optimal Binary Search Tree (OBST): Optimal Binary Search Tree is a binary search tree
where the total cost of searches, based on the probabilities of accessing different keys, is
minimized. It can be constructed using dynamic programming.
Pseudocode for Constructing Optimal Binary Search Tree:
Function optimalBST(keys[], freq[], n)
    Initialize a 2D array cost[n][n]
    for i = 0 to n-1
        cost[i][i] = freq[i]
    for L = 2 to n
        for i = 0 to n - L
            j = i + L - 1
            cost[i][j] = infinity
            for r = i to j
                temp = sum(freq[i..j])
                       + (cost[i][r-1] if r > i else 0)
                       + (cost[r+1][j] if r < j else 0)
                if temp < cost[i][j]
                    cost[i][j] = temp
    return cost[0][n-1]
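A runnable Python version of the same recurrence; the frequencies in the demo call are a common textbook example:

def optimal_bst(freq):
    # cost[i][j] = minimum expected search cost for keys i..j.
    n = len(freq)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])
            cost[i][j] = min(
                total
                + (cost[i][r - 1] if r > i else 0)
                + (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)
            )
    return cost[0][n - 1]

print(optimal_bst([34, 8, 50]))  # Output: 142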
0/1 Knapsack Problem: The 0/1 Knapsack Problem involves selecting items to maximize the
total value without exceeding a given weight capacity. Dynamic programming is used to solve
this problem efficiently.
Pseudocode for 0/1 Knapsack Problem:
Function knapsack(weights[], values[], capacity, n)
    Initialize a 2D array dp[n+1][capacity+1]
    for i = 0 to n
        for w = 0 to capacity
            if i equals 0 or w equals 0
                dp[i][w] = 0
            else if weights[i-1] <= w
                dp[i][w] = max(values[i-1] + dp[i-1][w-weights[i-1]], dp[i-1][w])
            else
                dp[i][w] = dp[i-1][w]
    return dp[n][capacity]
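In Python, with an illustrative instance (the weights, values, and capacity are chosen for the demo):

def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            if weights[i - 1] <= w:
                # Take item i-1 or leave it, whichever is better.
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # Output: 9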
8-Queen Problem: The 8-queen problem is a classic chess problem where the goal is to place
8 queens on an 8x8 chessboard so that no two queens threaten each other. Queens threaten each
other if they share the same row, column, or diagonal.
Pseudo code:
function solveNQueens(board, row):
    if row >= board.size:
        // All queens are successfully placed
        return true
    for each column from 0 to board.size - 1:
        if isSafe(board, row, column):
            // Place queen on the board
            board[row][column] = 1
            // Recursively check for the next row
            if solveNQueens(board, row + 1):
                return true
            // If placing the queen leads to a conflict, backtrack
            board[row][column] = 0
    // If no solution is found for this row
    return false

function isSafe(board, row, column):
    // Check the column above the current cell (only earlier rows can
    // hold queens, since placement proceeds row by row)
    for i from 0 to row - 1:
        if board[i][column] == 1:
            return false
    // Check the upper-left diagonal
    for i, j from (row - 1, column - 1) down to (0, 0) step (-1, -1):
        if board[i][j] == 1:
            return false
    // Check the upper-right diagonal
    for i, j from (row - 1, column + 1) to (0, board.size - 1) step (-1, +1):
        if board[i][j] == 1:
            return false
    return true
Explanation: - `solveNQueens()` is a recursive function that tries to place queens on the board
row by row, checking for conflicts and backtracking if necessary. - `isSafe()` checks whether
it's safe to place a queen at a given position by examining its column and the two upper
diagonals; only earlier rows need checking, because later rows hold no queens yet.
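A compact Python sketch of this backtracking search, representing the board as one column index per row (the representation and names are illustrative):

def solve_n_queens(n=8):
    cols = []  # cols[r] = column of the queen placed in row r

    def is_safe(row, col):
        # Conflict if same column or same diagonal as any earlier queen.
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if is_safe(row, col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()  # backtrack
        return False

    return cols if place(0) else None

print(solve_n_queens(8))  # Output: [0, 4, 7, 5, 2, 6, 1, 3]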
Graph Coloring Problem: The graph coloring problem involves coloring the vertices of a
graph so that no two adjacent vertices share the same color, ideally with the fewest colors;
the backtracking version below decides whether a coloring with at most numColors colors exists.
Pseudo code:
function graphColoring(graph, numColors, vertex):
    if vertex == graph.size:
        // All vertices are colored
        return true
    for each color from 1 to numColors:
        if isSafeColor(graph, vertex, color):
            // Assign the color to the vertex
            graph[vertex].color = color
            // Recursively color the next vertex
            if graphColoring(graph, numColors, vertex + 1):
                return true
            // If the coloring doesn't lead to a solution, backtrack
            graph[vertex].color = 0
    // If no solution is found for this vertex
    return false

function isSafeColor(graph, vertex, color):
    for each adjacentVertex in graph[vertex]:
        if adjacentVertex.color == color:
            return false
    return true
Explanation: - `graphColoring()` is a recursive function that tries to color the vertices one by
one while checking for conflicts. - `isSafeColor()` checks if assigning a certain color to a vertex
is safe by verifying if any adjacent vertices have the same color.
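A Python sketch over an adjacency-list graph (the 4-cycle in the demo is an illustrative instance):

def graph_coloring(adj, num_colors):
    # adj[v] lists the neighbours of v; colors[v] = 0 means uncolored.
    n = len(adj)
    colors = [0] * n

    def is_safe(v, c):
        return all(colors[u] != c for u in adj[v])

    def color(v):
        if v == n:
            return True
        for c in range(1, num_colors + 1):
            if is_safe(v, c):
                colors[v] = c
                if color(v + 1):
                    return True
                colors[v] = 0  # backtrack
        return False

    return colors if color(0) else None

# A 4-cycle is 2-colorable:
print(graph_coloring([[1, 3], [0, 2], [1, 3], [0, 2]], 2))  # Output: [1, 2, 1, 2]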
Sum of Subsets Problem: The sum of subsets problem involves finding subsets of a set where
the sum of elements in each subset equals a given target sum.
Pseudo code:
function findSubsets(nums, targetSum, subset, index):
    if targetSum == 0:
        // Subset with the target sum is found
        print(subset)
        return
    if index >= nums.size or targetSum < 0:
        // No subset found
        return
    // Include current element in the subset
    subset.add(nums[index])
    findSubsets(nums, targetSum - nums[index], subset, index + 1)
    subset.removeLast() // Backtrack
    // Exclude current element from the subset
    findSubsets(nums, targetSum, subset, index + 1)
Explanation: - `findSubsets()` is a recursive function that searches for subsets by including or
excluding elements and backtracking when necessary. - It explores two possibilities at each
step: including the current element or excluding it in the subset while maintaining the target
sum.
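The same include/exclude recursion in Python (the input list and target are illustrative):

def find_subsets(nums, target, subset=None, index=0):
    if subset is None:
        subset = []
    if target == 0:
        print(subset)  # one subset with the required sum
        return
    if index >= len(nums) or target < 0:
        return
    # Include the current element, recurse, then backtrack and exclude it.
    subset.append(nums[index])
    find_subsets(nums, target - nums[index], subset, index + 1)
    subset.pop()
    find_subsets(nums, target, subset, index + 1)

find_subsets([3, 34, 4, 12, 5, 2], 9)  # prints [3, 4, 2] and [4, 5]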
Branch and Bound is a systematic algorithmic technique used for solving optimization
problems, particularly combinatorial optimization problems where the goal is to find the best
solution among a large set of feasible solutions. It works by systematically searching the
solution space while pruning the search tree by employing bounds or criteria, thereby reducing
the search space and improving efficiency. Principle: 1. Branching: The algorithm begins with
an initial feasible solution and progressively explores the solution space by branching into
smaller subproblems. 2. Bounding: At each step, the algorithm establishes bounds or criteria to
discard or prune parts of the search space that cannot possibly contain the optimal solution.
This pruning helps avoid unnecessary exploration. Control Abstraction: 1. Divide and
Conquer: The problem space is divided into smaller subproblems or branches, usually
represented as a tree structure. Each node in the tree represents a subproblem. 2. Bounding: To
efficiently explore the solution space, bounds or criteria are applied to evaluate the nodes.
Nodes that cannot lead to an optimal solution (either due to being worse than the current best
solution found so far or due to other criteria) are pruned, meaning they are not further explored.
3. Optimal Solution Tracking: Throughout the search process, the algorithm keeps track of the
best solution found so far, updating it whenever a better solution is encountered. Time Analysis
of Control Abstraction: - Worst-case Time Complexity: For some problems, especially those
with exponential complexity, the worst-case time complexity of Branch and Bound can still be
exponential. - Bounding Efficiency: The efficiency of the bounding technique significantly
impacts the time complexity. If the bounding function is tight and effectively prunes a large
portion of the search space, it can lead to substantial reductions in computation time. -
Branching Factor: The number of branches created at each level of the search tree also plays a
crucial role. A lower branching factor reduces the size of the tree and therefore decreases the
overall time complexity.
FIFO (First-In-First-Out): FIFO strategy follows a queue-based approach where the nodes
are explored in the order they were generated. In Branch and Bound, this means that the nodes
are added to a queue data structure, and the nodes at the front of the queue (the oldest nodes)
are explored first. - Usage in Branch and Bound: In this approach, nodes are processed in the
order they are generated. New nodes are added to the end of the queue, and the oldest nodes
are explored first. This strategy can be useful in ensuring that the search explores broader areas
of the solution space before going deeper. - Control Abstraction: FIFO maintains fairness in
exploring nodes generated at the same level of the search tree. It can lead to a more systematic
exploration but may not necessarily prioritize the most promising nodes first.
LIFO (Last-In-First-Out): LIFO strategy follows a stack-based approach where the nodes are
explored in the reverse order of their generation. In Branch and Bound, this means that the
nodes are added to a stack data structure, and the most recently generated node is explored first.
- Usage in Branch and Bound: LIFO explores the deepest unexplored node first, essentially
going as deep as possible along a branch before backtracking. It can be efficient in certain
scenarios where the most promising solutions are likely to be found deeper in the search tree.
- Control Abstraction: LIFO tends to prioritize exploring deeper nodes, which can lead to quick
convergence if a good solution is found deep in the search tree. However, it might overlook
potentially better solutions found at shallower levels.
LC (Least Cost): LC strategy selects the node to explore based on some cost or evaluation
function associated with the nodes. In the context of Branch and Bound, this means selecting
the node that is expected to lead to the most promising solution based on some criterion. -
Usage in Branch and Bound: LC strategy involves selecting nodes with the least cost or
evaluation function value. This approach aims to prioritize exploring nodes that are likely to
lead to the best solution or prune unproductive nodes efficiently. - Control Abstraction: LC uses
a heuristic or evaluation function to guide the exploration, allowing for a more informed choice
in selecting nodes to explore. It tends to focus on nodes that are more likely to lead to an optimal
or better solution.
Knapsack Problem:
function knapsack(weights, values, capacity):
    N = length of weights
    maxValues = 2D array with dimensions (N+1) x (capacity+1)
    for i from 0 to N:
        for w from 0 to capacity:
            if i == 0 or w == 0:
                maxValues[i][w] = 0
            else if weights[i - 1] <= w:
                maxValues[i][w] = max(values[i - 1] + maxValues[i - 1][w - weights[i - 1]],
                                      maxValues[i - 1][w])
            else:
                maxValues[i][w] = maxValues[i - 1][w]
    return maxValues[N][capacity]
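The pseudocode above is the dynamic-programming formulation, shown for reference; an LC (least-cost, best-first) Branch and Bound treatment instead explores items as a tree of include/exclude decisions and prunes any node whose optimistic bound cannot beat the best value found so far. A minimal sketch, assuming the usual fractional-knapsack upper bound (names are illustrative):

import heapq

def knapsack_bb(weights, values, capacity):
    # Sort by value/weight so the fractional relaxation is a valid upper bound.
    items = sorted(zip(weights, values), key=lambda it: it[1] / it[0], reverse=True)

    def bound(level, weight, value):
        # Optimistic estimate: add remaining items greedily, the last one fractionally.
        for w, v in items[level:]:
            if weight + w <= capacity:
                weight, value = weight + w, value + v
            else:
                return value + v * (capacity - weight) / w
        return value

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, weight, value); max-heap via negation
    while heap:
        neg_b, level, weight, value = heapq.heappop(heap)
        if -neg_b <= best or level == len(items):
            continue  # prune: this subtree cannot improve the best solution
        w, v = items[level]
        if weight + w <= capacity:  # branch 1: include the item
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(level + 1, weight + w, value + v),
                                  level + 1, weight + w, value + v))
        # branch 2: exclude the item
        heapq.heappush(heap, (-bound(level + 1, weight, value),
                              level + 1, weight, value))
    return best

print(knapsack_bb([1, 3, 4, 5], [1, 4, 5, 7], 7))  # Output: 9 (same instance as above)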
Aggregate Analysis: Aggregate analysis focuses on determining the total cost of a sequence
of operations and then calculating the average cost per operation by dividing the total cost by
the number of operations. Example: Consider a dynamic array that occasionally triggers a
costly resize operation (e.g., O(n)). In aggregate analysis, the total cost of a sequence of n
insertions, which includes occasional expensive resizes, is calculated. Then, this total cost is
divided by n to obtain the average cost per insertion. Advantages: Provides a straightforward
way to find an average cost for a sequence of operations. It's easy to calculate and gives a clear
overall view of the performance.
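A small sketch of this analysis for the dynamic-array example: counting element copies over n appends with capacity doubling shows the total stays below 3n, so the average per append is O(1) (the cost model is an illustrative simplification):

def total_append_cost(n):
    # Each append costs 1; a resize additionally copies every existing element.
    cost, size, capacity = 0, 0, 1
    for _ in range(n):
        if size == capacity:
            cost += size      # copy all elements into a doubled array
            capacity *= 2
        cost += 1             # write the new element
        size += 1
    return cost

n = 1000
print(total_append_cost(n) / n)  # Output: 2.023 -- O(1) amortized per append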
Accounting Method: The accounting method assigns a "charge" or "cost" to each operation
that may be higher than its actual cost. This extra charge creates a surplus or credit that can be
used by other operations that need additional resources. Example: In a dynamic array, every
cheap operation (e.g., insertions) may be assigned a little extra charge. The surplus generated
from these cheap operations is stored as tokens in the data structure. When an expensive
operation (e.g., resizing) is required, it can use these accumulated tokens to cover the higher
cost. Advantages: Provides a flexible approach by allowing operations to contribute extra
resources, maintaining a balance to cover higher-cost operations.
Potential Function Method: The potential function method associates a "potential" value with
the state of the data structure. It quantifies the difference between the current state and an ideal
or desired state, such as the difference between the actual and optimal sizes of a data structure.
Example: In a dynamic array, the potential function might measure the difference between the
current size of the array and the ideal size. When an expensive operation (e.g., resizing) occurs,
the increase in potential compensates for the higher actual cost of the operation. Advantages:
Helps in quantifying the "wasted" resources or the distance from an optimal state, allowing
compensation for the higher costs of operations based on the potential difference.
Sorting algorithms for embedded systems need to be efficient in terms of both time
complexity and memory usage due to the limited resources available in such systems. One
sorting algorithm that fits these criteria is the Comb Sort, which is a simple and efficient
algorithm suitable for embedded systems. Comb Sort Algorithm for Embedded Systems:
Explanation: Comb Sort is an improvement over Bubble Sort and is known for its simplicity
and effectiveness with limited resources. Algorithm Workflow: 1. Gap-Based Sorting: Comb
Sort works by comparing elements that are distant from each other initially using a gap. It starts
with a relatively large gap size. 2. Comparisons and Swapping: Elements that are distant by the
gap are compared, and if they are out of order, they are swapped. 3. Gap Reduction: After each
iteration, the gap decreases by a shrink factor (commonly 1.3), gradually reducing the gap size.
4. Final Pass: Once the gap becomes 1, the algorithm performs a final pass similar to Bubble
Sort. Advantages for Embedded Systems: 1. Simple Implementation: Comb Sort has a
straightforward implementation, requiring minimal memory and fewer operations compared to
more complex algorithms. 2. Efficient Use of Resources: It operates with a small number of
comparisons and swaps, making it suitable for systems with limited processing power and
memory. 3. Performance: While not as fast as some advanced sorting algorithms (e.g.,
QuickSort or MergeSort), Comb Sort performs well on small to medium-sized data sets. 4. In-
Place Sorting: It sorts the elements in place, requiring minimal extra memory overhead.
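A straightforward Python implementation of the workflow above (the shrink factor 1.3 follows the text; the test data is illustrative):

def comb_sort(arr):
    # Start with a large gap and shrink it each pass; gap 1 behaves like Bubble Sort.
    gap = len(arr)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))
        swapped = False
        for i in range(len(arr) - gap):
            if arr[i] > arr[i + gap]:
                arr[i], arr[i + gap] = arr[i + gap], arr[i]
                swapped = True
    return arr

print(comb_sort([8, 4, 1, 56, 3, -44, 23, -6, 28, 0]))
# Output: [-44, -6, 0, 1, 3, 4, 8, 23, 28, 56]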
Multithreaded Merge Sort: Multithreaded Merge Sort involves parallelizing the process of
sorting by dividing the sorting task into smaller parts that can be sorted concurrently by
multiple threads. Pseudo Code for Multithreaded Merge Sort:
function multithreadedMergeSort(arr, left, right, numThreads):
    if left < right:
        mid = (left + right) / 2
        create an array threads[2]
        // Sort the two halves concurrently in separate threads
        threads[0] = spawn thread multithreadedMergeSort(arr, left, mid, numThreads / 2)
        threads[1] = spawn thread multithreadedMergeSort(arr, mid + 1, right, numThreads / 2)
        join threads[0]
        join threads[1]
        merge(arr, left, mid, right)

function merge(arr, left, mid, right):
    // Standard merge: repeatedly move the smaller front element of
    // arr[left..mid] and arr[mid+1..right] into a temporary array,
    // then copy the temporary array back into arr[left..right]
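A minimal Python sketch using the threading module; note that CPython's GIL means this illustrates the structure of the parallelization rather than a real speedup, and a production version would cap the recursion depth at which threads are spawned:

import threading

def mt_merge_sort(arr, left, right):
    if left >= right:
        return
    mid = (left + right) // 2
    # Sort the two halves in concurrent threads, then merge them.
    t1 = threading.Thread(target=mt_merge_sort, args=(arr, left, mid))
    t2 = threading.Thread(target=mt_merge_sort, args=(arr, mid + 1, right))
    t1.start(); t2.start()
    t1.join(); t2.join()
    merged, i, j = [], left, mid + 1
    while i <= mid and j <= right:
        if arr[i] <= arr[j]:
            merged.append(arr[i]); i += 1
        else:
            merged.append(arr[j]); j += 1
    merged.extend(arr[i:mid + 1])
    merged.extend(arr[j:right + 1])
    arr[left:right + 1] = merged

data = [38, 27, 43, 3, 9, 82, 10]
mt_merge_sort(data, 0, len(data) - 1)
print(data)  # Output: [3, 9, 10, 27, 38, 43, 82]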
Distributed Minimum Spanning Tree (MST): Approach: Distributed MST algorithms aim
to find the minimum spanning tree across a distributed network of nodes, ensuring that all
nodes are interconnected with minimal total edge weight. Algorithm Steps: 1. Initialization:
Each node may have some initial information about neighboring nodes and edges. 2. Message
Passing: Nodes exchange messages to communicate information about their neighboring
edges, weights, and the edges they've included in their potential MST. 3. MST Construction:
Nodes use distributed algorithms (e.g., Borůvka's algorithm, Kruskal's algorithm) to determine
edges that belong to the minimum spanning tree and share this information with other nodes.
4. Merging and Termination: Nodes merge their local minimum spanning trees received from
neighbors to construct the global minimum spanning tree or until a termination condition is
met. Challenges in Distributed Algorithms: - Communication Overhead: Increased
communication between nodes can lead to network congestion and higher latency. -
Synchronization: Ensuring synchronization and consistency across distributed nodes is
challenging. - Load Balancing: Balancing workload and data distribution among nodes is
crucial for efficiency.
Pseudo Code for Distributed MST:
function distributedMST(Graph G):
    Initialize data structures and message-passing mechanisms
    Initially, each node is a single-node tree (MST component)
    while the MST is not fully connected:
        for each node in the network:
            Compute the local minimum-weight outgoing edge (lightest edge)
            Share information about this lightest edge with neighbors
            Receive information about the neighbors' lightest edges
            Merge the local tree with the received lightest edges to form a larger component
    // The process continues until a connected minimum spanning tree is formed
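A single-process Python simulation of this component-merging logic (essentially Borůvka's algorithm); in a genuinely distributed setting, each component's lightest edge would be found via message passing rather than a shared edge list:

def boruvka_mst(n, edges):
    # edges: (weight, u, v) triples; parent[] is a union-find over components.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, num_components = [], n
    while num_components > 1:
        # Each component selects its minimum-weight outgoing edge.
        cheapest = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:  # merge the two components
                parent[ru] = rv
                mst.append((u, v, w))
                num_components -= 1
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(boruvka_mst(4, edges))  # Output: [(0, 1, 1), (1, 2, 2), (2, 3, 4)]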
Naive String Matching Algorithm: The Naive algorithm compares the pattern with
substrings of the text one by one, checking for a match. It slides the pattern one character at a
time and compares it with the corresponding substring in the text. If a match is found, it reports
the occurrence of the pattern.
Pseudo Code for Naive String Matching Algorithm:
function naiveStringMatch(text, pattern):
    n = length of text
    m = length of pattern
    for i from 0 to n - m:
        j = 0
        while j < m and text[i + j] equals pattern[j]:
            j = j + 1
        if j equals m: // Match found
            print "Pattern found at index", i
Steps: Initialize Variables: Get the lengths of the text and pattern (let n be the length of the text
and m be the length of the pattern). Pattern Matching: Iterate through the text and slide the
pattern across the text one character at a time. Comparison: At each position in the text,
compare the characters of the pattern with the corresponding characters in the text. Match
Verification: If the characters match, move to the next character in the pattern to check for a
complete match. Pattern Found: If all characters in the pattern match the substring in the text,
report the occurrence of the pattern at the current index. Repeat: Continue the process until the
entire text is traversed or until all occurrences of the pattern are found.
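In Python (the text and pattern in the demo call are illustrative):

def naive_string_match(text, pattern):
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        # Compare the pattern against the window starting at i.
        if text[i:i + m] == pattern:
            matches.append(i)
    return matches

print(naive_string_match("AABAACAADAABAABA", "AABA"))  # Output: [0, 9, 12]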
Rabin-Karp Algorithm: The Rabin-Karp algorithm uses hashing to efficiently search for a
pattern within a text by comparing hash values. It computes the hash value of the pattern and
slides through the text, computing hash values of substrings. If the hash values match, it
performs an additional verification step to confirm the match.
Pseudo Code for Rabin-Karp Algorithm:
function rabinKarp(text, pattern):
    n = length of text
    m = length of pattern
    prime = a large prime number
    patternHash = hash(pattern)
    textHash = hash(text[0..m-1])
    for i from 0 to n - m:
        if patternHash == textHash:
            if pattern[0..m-1] equals text[i..i+m-1]: // verify to rule out hash collisions
                print "Pattern found at index", i
        if i < n - m:
            textHash = recalculateHash(textHash, text[i], text[i + m])

function hash(str):
    // Compute a polynomial rolling hash of str modulo prime

function recalculateHash(oldHash, oldChar, newChar):
    // Slide the window efficiently: subtract the contribution of oldChar
    // and add the contribution of newChar
Steps: Initialize Variables: Get the lengths of the text and pattern (n is the length of the text and
m is the length of the pattern). Choose a large prime number for the hashing calculation.
Compute Pattern Hash: Calculate the hash value of the pattern using a hash function (e.g.,
polynomial rolling hash). Compute Text Hash: Calculate the hash value of the first m characters
of the text. Hash Comparison: Slide the pattern along the text and compare the hash value of
the pattern with the hash value of the current substring in the text. String Comparison (if hash
matches): If the hash values match, perform a direct string comparison of the pattern with the
substring to verify the match. Pattern Found: If the characters in the substring match the pattern,
report the occurrence of the pattern at the current index. Recalculate Hash: If no match is found,
move to the next substring in the text by efficiently recalculating the hash value for the next
window. Repeat: Continue the process until the entire text is traversed or until all occurrences
of the pattern are found.
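A complete Python version with a polynomial rolling hash (the base and prime values are illustrative choices):

def rabin_karp(text, pattern, base=256, prime=101):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, prime)  # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % prime
        t_hash = (t_hash * base + ord(text[i])) % prime
    matches = []
    for i in range(n - m + 1):
        # On a hash match, verify directly to rule out collisions.
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:
            # Roll the hash: drop text[i], append text[i + m].
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % prime
    return matches

print(rabin_karp("AABAACAADAABAABA", "AABA"))  # Output: [0, 9, 12]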