
ADA-BCS401 TIE SIMP Answers- Module 1 to 3

1. Analysis Framework of Algorithms: Worst Case, Best Case, and Average Case Efficiencies

The analysis of algorithms is a fundamental area in computer science that focuses on understanding the performance characteristics of algorithms. This includes measuring how an algorithm's running time or space requirements grow as the size of the input increases. The analysis is typically conducted using a mathematical framework that considers different scenarios:

Worst-Case Efficiency:

The worst-case efficiency of an algorithm represents the maximum time or space that an
algorithm could require for any input of size n.

Example: For a linear search algorithm that searches for an element in an unsorted array,
the worst case occurs when the element is either at the last position or is not present in the
array at all. In this case, the algorithm needs to examine all n elements, leading to a worst-
case time complexity of O(n).

Best-Case Efficiency:

The best-case efficiency of an algorithm represents the minimum time or space required by
the algorithm for any input of size n.

Example: For the same linear search algorithm, the best case occurs when the element is the
first one in the array. In this case, the algorithm completes the search in constant time, O(1).

Average-Case Efficiency:

The average-case efficiency represents the expected time or space that an algorithm will
require over all possible inputs of size n.
Example: For linear search, if we assume that the element is equally likely to be anywhere in
the array, the average case would require examining about half of the elements, leading to
an average-case time complexity of O(n).

Example Algorithm: Linear Search

Algorithm:

```python
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1
```

Analysis:

- Worst Case: The target is at the last position or not in the array at all, requiring n
comparisons. Thus, the time complexity is O(n).

- Best Case: The target is at the first position, requiring only 1 comparison. Thus, the time
complexity is O(1).

- Average Case: Assuming that the target is equally likely to be at any position, the average
number of comparisons is n/2, leading to an average-case time complexity of O(n).

2. Asymptotic Notations and Basic Efficiency Classes

Asymptotic notations provide a mathematical way to describe the growth of an algorithm's running time or space requirement in terms of the input size n. The three most commonly used asymptotic notations are:

Big-O Notation (O):


Big-O notation gives an upper bound on the running time of an algorithm, describing the
worst-case scenario.

Example: If an algorithm's running time is O(n^2), it means that the running time will not
exceed cn^2 for some constant c as n grows large.

Theta Notation (Θ):

Theta notation gives a tight bound on the running time, indicating both the upper and lower
bounds. It represents the exact growth rate of the algorithm.

Example: If an algorithm's running time is Θ(n log n), it means that the running time is both
O(n log n) and Ω(n log n).

Omega Notation (Ω):

Omega notation provides a lower bound on the running time, describing the best-case
scenario.

Example: If an algorithm's running time is Ω(n), it means that the algorithm will take at least
cn time for some constant c as n grows large.
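As a quick worked illustration of these definitions (the function and constants here are chosen only for concreteness), take \( t(n) = 3n^2 + 5n \):

\[ 3n^2 + 5n \le 3n^2 + 5n^2 = 8n^2 \text{ for all } n \ge 1 \;\Rightarrow\; t(n) \in O(n^2) \ (c = 8,\ n_0 = 1) \]

\[ 3n^2 + 5n \ge 3n^2 \text{ for all } n \ge 1 \;\Rightarrow\; t(n) \in \Omega(n^2) \ (c = 3,\ n_0 = 1) \]

Since both bounds hold, \( t(n) \in \Theta(n^2) \).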

Efficiency Classes:

Efficiency classes categorize algorithms based on their time complexity:

- Constant Time: O(1)

- Logarithmic Time: O(log n)

- Linear Time: O(n)

- Linearithmic Time: O(n log n)

- Quadratic Time: O(n^2)

- Cubic Time: O(n^3)


- Exponential Time: O(2^n)

3. Algorithm and Mathematical Analysis

What is an Algorithm?

An algorithm is a well-defined set of instructions or a step-by-step procedure to solve a problem or perform a computation. Algorithms are fundamental to programming and computer science, providing a blueprint for solving specific tasks.

Example: Algorithm to Find Maximum Element in an Array

Algorithm:

```python
def find_maximum(arr):
    max_val = arr[0]
    for i in range(1, len(arr)):
        if arr[i] > max_val:
            max_val = arr[i]
    return max_val
```

Mathematical Analysis:

- Best Case: The maximum element is the first element. The algorithm still needs to check all
elements, but no updates are made to max_val. Time complexity: O(n).

- Worst Case: The maximum element is the last element, requiring the algorithm to traverse
the entire array. Time complexity: O(n).

- Average Case: On average, the algorithm needs to compare each element with max_val,
leading to a time complexity of O(n).

4. Proving t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)) Implies t1(n) + t2(n) ∈ O(max {g1(n),
g2(n)})


Given:

- t1(n) ∈ O(g1(n)), meaning t1(n) ≤ c1 g1(n) for some constant c1 and for all n ≥ n0.

- t2(n) ∈ O(g2(n)), meaning t2(n) ≤ c2 g2(n) for some constant c2 and for all n ≥ n1.

We need to prove:

t1(n) + t2(n) ∈ O(max {g1(n), g2(n)})

Proof:

For all n ≥ max(n0, n1), both bounds hold simultaneously, so:

t1(n) + t2(n) ≤ c1 g1(n) + c2 g2(n)

Let g(n) = max {g1(n), g2(n)}. Since g1(n) ≤ g(n) and g2(n) ≤ g(n), we know:

t1(n) + t2(n) ≤ c1 g(n) + c2 g(n) = (c1 + c2) g(n)

Thus, t1(n) + t2(n) ≤ c g(n) where c = c1 + c2, for all n ≥ max(n0, n1).

Hence, t1(n) + t2(n) ∈ O(g(n)) = O(max {g1(n), g2(n)}).
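As a quick sanity check with concrete functions (chosen here only for illustration), let \( t_1(n) = 3n \in O(n) \) and \( t_2(n) = 5n^2 \in O(n^2) \). Then

\[ t_1(n) + t_2(n) = 3n + 5n^2 \le (3 + 5)n^2 = 8n^2 \text{ for all } n \ge 1, \]

so \( t_1(n) + t_2(n) \in O(n^2) = O(\max\{n, n^2\}) \), exactly as the property predicts. This property is what lets us analyze the parts of an algorithm separately and keep only the dominant term.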


5. Recursive Algorithm for Tower of Hanoi and Fibonacci Sequence Analysis

Tower of Hanoi:

The Tower of Hanoi is a classic recursive problem where the goal is to move n disks from a
source peg to a destination peg using an auxiliary peg, following specific rules.

Recursive Algorithm:

```python
def tower_of_hanoi(n, source, destination, auxiliary):
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    tower_of_hanoi(n - 1, source, auxiliary, destination)
    print(f"Move disk {n} from {source} to {destination}")
    tower_of_hanoi(n - 1, auxiliary, destination, source)
```

Time Complexity Analysis:

The time complexity T(n) for n disks can be expressed as:

T(n) = 2 T(n-1) + 1

Solving this recurrence by backward substitution, with T(1) = 1:

T(n) = 2T(n-1) + 1
     = 2^2 T(n-2) + 2 + 1
     = ...
     = 2^(n-1) T(1) + (2^(n-2) + ... + 2 + 1)
     = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1

Therefore, the time complexity is O(2^n), indicating exponential growth.

6. Design an Algorithm to Search an Element in an Array Using Sequential Search. Discuss
the Worst Case, Best Case, and Average Case Analysis of This Algorithm

Sequential Search (or Linear Search):

Sequential search is a straightforward algorithm that checks each element in the array until
it finds the target element or reaches the end of the array.

Algorithm:

```python
def sequential_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i  # Return the index where the target is found
    return -1  # Return -1 if the target is not found
```

Worst Case Analysis:

- Scenario: The target element is at the last position of the array or is not present in the
array at all.

- Time Complexity: In this case, the algorithm must check every element in the array,
resulting in a time complexity of O(n), where n is the number of elements in the array.

Best Case Analysis:

- Scenario: The target element is the first element in the array.

- Time Complexity: The algorithm finds the target after the first comparison, resulting in a
time complexity of O(1).

Average Case Analysis:

- Scenario: The target element is equally likely to be at any position in the array.
- Time Complexity: On average, the algorithm will check half of the elements before finding
the target, leading to an average-case time complexity of O(n).

7. Give the Mathematical Analysis of Non-Recursive Matrix Multiplication Algorithms

Algorithm:

```python
def matrix_multiplication(A, B):
    n = len(A)     # rows of A
    m = len(A[0])  # columns of A (= rows of B)
    p = len(B[0])  # columns of B

    # Initialize matrix C with zeros
    C = [[0 for _ in range(p)] for _ in range(n)]

    # Perform multiplication
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]

    return C
```

Mathematical Analysis:

- Basic Operation: the innermost multiplication `A[i][k] * B[k][j]`.

- Operation Count: the basic operation executes once for every triple (i, j, k), giving M(n) = n · p · m multiplications; for two n × n matrices this is M(n) = n^3.

- Time Complexity: \( \Theta(n^3) \) for square matrices. The loop bounds do not depend on the input values, so the best, average, and worst cases coincide.

8. Analysis of the Given Algorithm GUESS

Algorithm:

Algorithm GUESS (A[][])
    for i ← 0 to n-1
        for j ← 0 to i
            A[i][j] ← 0

What Does the Algorithm Compute?

The algorithm `GUESS` takes a two-dimensional array `A[][]` of size `n x n` as input and sets
every element on or below the main diagonal of the matrix to `0`.

Specifically:

- For each row `i`, the inner loop iterates over the columns `j` from `0` to `i`.

- For each of these positions `A[i][j]`, the algorithm assigns the value `0`.

Example:

For an input matrix `A` of size `3x3`:

A = [[a11, a12, a13],
     [a21, a22, a23],
     [a31, a32, a33]]

After running the algorithm, the matrix `A` will look like this:

A = [[0,   a12, a13],
     [0,   0,   a23],
     [0,   0,   0  ]]

This indicates that the algorithm zeroes out the lower triangular part of the matrix
(including the main diagonal).

Basic Operation and Efficiency:

Basic Operation:

The basic operation in this algorithm is the assignment `A[i][j] ← 0`. This operation is
performed for all `i` and `j` where `j ≤ i`.

Number of Basic Operations:

To calculate the total number of operations:

- For `i = 0`: The inner loop runs once (`j = 0`), so 1 operation.

- For `i = 1`: The inner loop runs twice (`j = 0, 1`), so 2 operations.

- For `i = 2`: The inner loop runs three times (`j = 0, 1, 2`), so 3 operations.

- For `i = n-1`: The inner loop runs `n` times.

The total number of operations is the sum of the first `n` integers:

\[ 1 + 2 + 3 + ... + n = \frac{n(n+1)}{2} \]

Efficiency:

- Time Complexity:

The time complexity of the algorithm is \( O(n^2) \). This is because the number of
operations is proportional to the sum of the first `n` integers, which is \( O(n^2) \).
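For reference, a minimal Python transcription of the pseudocode above (a sketch; the variable names are chosen for illustration):

```python
def guess(A):
    # Zero out every element on or below the main diagonal
    n = len(A)
    for i in range(n):
        for j in range(i + 1):  # j = 0 .. i, so n(n+1)/2 assignments in total
            A[i][j] = 0
    return A

# Example: guess([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# -> [[0, 2, 3], [0, 0, 6], [0, 0, 0]]
```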
9. Explain Divide and Conquer Algorithm with Its Advantages and Disadvantages. Compare
Straightforward Method and Divide and Conquer Method for Finding Max and Min Elements
of the List

Divide and Conquer Algorithm:

The divide and conquer algorithm is a design paradigm based on multi-branched recursion.
The problem is divided into smaller subproblems of the same type, each of which is solved
independently, and then the solutions to the subproblems are combined to solve the
original problem.

Steps Involved:

1. Divide: Break the problem into smaller subproblems.

2. Conquer: Solve each subproblem recursively.

3. Combine: Merge the solutions to the subproblems to get the solution to the original
problem.

Advantages:

- Efficiency: It often reduces the time complexity of the problem.

- Parallelism: The subproblems can often be solved in parallel, making the algorithm more
efficient on parallel processing systems.

- Modularity: The problem is broken down into independent subproblems, making it easier
to manage and understand.

Disadvantages:

- Overhead: The recursive calls and combining steps can add overhead, especially if the
problem doesn't naturally break down into independent subproblems.

- Complexity: For some problems, the divide and conquer approach may be more complex
to implement than straightforward methods.
Comparison of Straightforward Method and Divide and Conquer Method for Finding Max
and Min Elements:

- Straightforward Method:

- Approach: Iterate through the list once, comparing each element to the current max and
min.

- Time Complexity: O(n)

- Advantages: Simple to implement, efficient for small datasets.

- Disadvantages: Cannot be parallelized easily.

- Divide and Conquer Method:

- Approach: Divide the list into two halves, find the max and min in each half recursively,
and then combine the results.

- Time Complexity: O(n)

- Advantages: Can be parallelized, may be more efficient for large datasets.

- Disadvantages: More complex to implement, involves additional overhead due to recursive calls (see the sketch below).
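A minimal Python sketch of the divide and conquer max-min method (the function name and interface are illustrative assumptions, not from the source):

```python
def max_min(arr, low, high):
    # Base case: one element is both max and min
    if low == high:
        return arr[low], arr[low]
    # Base case: two elements need a single comparison
    if high == low + 1:
        if arr[low] > arr[high]:
            return arr[low], arr[high]
        return arr[high], arr[low]
    # Divide the range, conquer each half, combine with two comparisons
    mid = (low + high) // 2
    max1, min1 = max_min(arr, low, mid)
    max2, min2 = max_min(arr, mid + 1, high)
    return max(max1, max2), min(min1, min2)

# Example: max_min([3, 7, 1, 9, 4], 0, 4) returns (9, 1)
```

This scheme uses roughly 3n/2 - 2 comparisons versus 2(n - 1) for the straightforward scan, though both are Θ(n).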

10. Design Merge Sort Algorithm. Write a Descriptive Note on Its Best Case, Average Case,
and Worst-Case Time Efficiency

Merge Sort Algorithm:

Merge sort is a classic example of a divide and conquer algorithm. It works by recursively
dividing the array into two halves, sorting each half, and then merging the sorted halves.

Algorithm:

```python
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2  # Finding the mid of the array
        L = arr[:mid]        # Dividing the elements into 2 halves
        R = arr[mid:]

        merge_sort(L)  # Sorting the first half
        merge_sort(R)  # Sorting the second half

        # Merge the temp arrays L[] and R[] back into arr[]
        i = j = k = 0
        while i < len(L) and j < len(R):
            if L[i] < R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1

        # Checking if any element was left
        while i < len(L):
            arr[k] = L[i]
            i += 1
            k += 1
        while j < len(R):
            arr[k] = R[j]
            j += 1
            k += 1
```

Time Complexity Analysis:

- Best Case: O(n log n)

Even in the best case, merge sort needs to divide the array and merge it back, so the time
complexity remains O(n log n).

- Average Case: O(n log n)

On average, merge sort will always take O(n log n) time because it consistently splits the
array into two halves and merges them back.

- Worst Case: O(n log n)

The worst-case time complexity also remains O(n log n) because the algorithm follows the
same steps regardless of the order of elements.

Space Complexity:

Merge sort requires additional space proportional to the size of the array being sorted,
making its space complexity O(n).
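A short usage check for the routine above (sample data chosen arbitrarily):

```python
data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data)  # sorts in place
print(data)       # -> [3, 9, 10, 27, 38, 43, 82]
```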

11. Explain Binary Search as an Iterative and Recursive Algorithm

Binary Search:

Binary search is a highly efficient algorithm for finding an element in a sorted array. It
works by repeatedly dividing the search interval in half. If the value of the search key is less
than the item in the middle of the interval, the algorithm narrows the interval to the lower
half. Otherwise, it narrows it to the upper half. This process continues until the search key is
found or the interval is empty.

Iterative Binary Search:

```python
def iterative_binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # Element is not present in the array
```

Recursive Binary Search:

```python
def recursive_binary_search(arr, low, high, target):
    if high >= low:
        mid = (low + high) // 2
        # If the element is present at the middle itself
        if arr[mid] == target:
            return mid
        # If the element is smaller than mid, it can only be in the left subarray
        elif arr[mid] > target:
            return recursive_binary_search(arr, low, mid - 1, target)
        # Else the element can only be present in the right subarray
        else:
            return recursive_binary_search(arr, mid + 1, high, target)
    else:
        return -1  # Element is not present in the array
```

Time Complexity:

- Best Case: O(1)

The target element is found at the middle index on the first comparison.

- Average Case: O(log n)

On average, binary search will take O(log n) time, as it reduces the search interval by half
each time.

- Worst Case: O(log n)

The worst-case scenario is when the element is not present or is found at the last possible
comparison.
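Both versions can be exercised the same way (sample array chosen arbitrarily; the array must already be sorted):

```python
arr = [2, 5, 8, 12, 16, 23, 38]
print(iterative_binary_search(arr, 23))                   # -> 5
print(recursive_binary_search(arr, 0, len(arr) - 1, 23))  # -> 5
print(iterative_binary_search(arr, 7))                    # -> -1
```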
12. What is a Quick Sort Algorithm? Apply a Quick Sort Algorithm to Sort the List E, X, A, M,
P, L, E in Alphabetical Order. Draw the Tree of Recursive Calls Made

Quick Sort Algorithm:

Quick sort is a divide and conquer algorithm that selects a 'pivot' element from the array
and partitions the other elements into two sub-arrays, according to whether they are less
than or greater than the pivot. The sub-arrays are then sorted recursively.

Algorithm:

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[len(arr) // 2]
        left = [x for x in arr if x < pivot]
        middle = [x for x in arr if x == pivot]
        right = [x for x in arr if x > pivot]
        return quick_sort(left) + middle + quick_sort(right)
```

Sorting the List `E, X, A, M, P, L, E`:

Initial List: E, X, A, M, P, L, E

1. Choose a pivot (e.g., the middle element, M).

2. Partition the list:

   - Left (letters before M): E, A, L, E

   - Pivot: M

   - Right (letters after M): X, P

3. Recursively sort the left and right sub-lists.

Sorted List: A, E, E, L, M, P, X

Tree of Recursive Calls:

```
                  [E, X, A, M, P, L, E]
                 /          |         \
         [E, A, L, E]      [M]       [X, P]
         /    |    \                 /  |  \
  [E, A, E]  [L]   []              []  [P] [X]
   /  |  \
 []  [A] [E, E]
```

Time Complexity:

- Best Case: O(n log n)

When the pivot divides the array into two nearly equal halves.

- Average Case: O(n log n)

On average, quicksort performs well with O(n log n) time complexity.

- Worst Case: O(n^2)

Occurs when the pivot is the smallest or largest element, causing unbalanced partitions.

13. Explain Strassen’s Algorithm and Derive Its Time Complexity

Strassen’s Algorithm:
Strassen’s algorithm is an efficient algorithm for matrix multiplication that reduces the
complexity from \( O(n^3) \) to approximately \( O(n^{2.81}) \) by dividing the matrices
into smaller submatrices.

Steps Involved:

1. Divide two matrices A and B of size n × n into four submatrices each.

2. Perform multiplication and addition operations on the submatrices using 7 multiplications and 18 additions/subtractions.

Strassen’s Algorithm for 2x2 Matrices:

Given matrices A and B:

```
A = | a11  a12 |        B = | b11  b12 |
    | a21  a22 |            | b21  b22 |
```

Strassen's method computes the product matrix C using 7 multiplications and 18 additions:

```
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6
```

Where:

```
M1 = (a11 + a22)(b11 + b22)
M2 = (a21 + a22) b11
M3 = a11 (b12 - b22)
M4 = a22 (b21 - b11)
M5 = (a11 + a12) b22
M6 = (a21 - a11)(b11 + b12)
M7 = (a12 - a22)(b21 + b22)
```

Time Complexity:

Each multiplication of two n × n matrices is reduced to 7 multiplications of (n/2) × (n/2) matrices plus a quadratic amount of addition work, giving the recurrence

\[ T(n) = 7T(n/2) + O(n^2). \]

By the master theorem, since \( \log_2 7 \approx 2.807 > 2 \), we get \( T(n) = \Theta(n^{\log_2 7}) \approx O(n^{2.81}) \). Although the algorithm reduces the number of multiplications, the increased number of additions and the recursive bookkeeping add some overhead.

14. Explain in Detail About the Travelling Salesman Problem Using Exhaustive Search

Travelling Salesman Problem (TSP):

The Travelling Salesman Problem (TSP) is an optimization problem where the goal is to find
the shortest possible route that visits a set of cities exactly once and returns to the starting
city.

Exhaustive Search:

In the exhaustive search approach, all possible permutations of the cities are generated, and
the total distance for each permutation is calculated. The permutation with the minimum
distance is the optimal solution.

Steps Involved:

1. List all possible permutations of the cities.

2. Calculate the total distance for each permutation.

3. Identify the permutation with the shortest total distance.

Example:
Consider 4 cities A, B, C, and D. The exhaustive search method would involve calculating the
total distance for all permutations (A -> B -> C -> D -> A, A -> C -> B -> D -> A, etc.) and
selecting the shortest one.
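A brute-force Python sketch of this procedure (the distance-matrix representation and function name are assumptions for illustration):

```python
from itertools import permutations

def tsp_exhaustive(dist):
    # dist[i][j] is the distance from city i to city j; city 0 is the fixed start
    n = len(dist)
    best_cost, best_tour = float('inf'), None
    # Fixing the start city leaves (n-1)! candidate tours to check
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost
```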

Time Complexity:

The time complexity of the exhaustive search method for TSP is \( O(n!) \), which makes it
impractical for large numbers of cities.

Disadvantages:

- The exhaustive search method is computationally expensive and becomes infeasible as the
number of cities increases due to factorial time complexity.

15. Explain in Detail About the Knapsack Problem and Closest Pair Problem

Knapsack Problem:

The knapsack problem is a combinatorial optimization problem where you are given a set of
items, each with a weight and a value, and a knapsack with a maximum capacity. The goal is
to determine the most valuable combination of items that can fit in the knapsack without
exceeding its capacity.

Types of Knapsack Problems:

- 0/1 Knapsack: Each item can either be included or excluded from the knapsack.

- Fractional Knapsack: Items can be broken into fractions, and a fraction of an item can be
included in the knapsack.

Dynamic Programming Solution (0/1 Knapsack):

```python
def knapsack(W, wt, val, n):
    # K[i][w] = best value using the first i items with capacity w
    K = [[0 for x in range(W + 1)] for x in range(n + 1)]

    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i - 1] <= w:
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]

    return K[n][W]
```

- Time Complexity: O(nW), where n is the number of items and W is the maximum weight.
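For example, with hypothetical item data (not from the source), the optimal 0/1 selection takes the second and third items:

```python
val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
print(knapsack(W, wt, val, 3))  # -> 220 (items with values 100 and 120)
```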

Closest Pair Problem:

The closest pair problem is a computational geometry problem where the goal is to find the
pair of points that are closest to each other in a given set of points in a plane.

Divide and Conquer Solution:

1. Divide: Split the points into two halves by a vertical line.

2. Conquer: Recursively find the closest pairs in the left and right halves.

3. Combine: Find the closest pair of points where one point lies in the left half and the other
in the right half.

- Time Complexity: O(n log n), as the problem is divided into subproblems and the merging
step is linear.
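A Python sketch of this divide and conquer scheme (it assumes distinct points; function names are illustrative):

```python
import math

def closest_pair(points):
    # points: list of distinct (x, y) tuples; returns the smallest pairwise distance
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def brute(pts):
        best = float('inf')
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                best = min(best, dist(pts[i], pts[j]))
        return best

    def solve(px, py):
        n = len(px)
        if n <= 3:
            return brute(px)
        mid = n // 2
        left_set = set(px[:mid])
        # Divide: split the y-sorted list to match the x-sorted halves
        py_left = [p for p in py if p in left_set]
        py_right = [p for p in py if p not in left_set]
        # Conquer: closest distance within each half
        d = min(solve(px[:mid], py_left), solve(px[mid:], py_right))
        # Combine: check pairs straddling the dividing line, in y order
        mid_x = px[mid][0]
        strip = [p for p in py if abs(p[0] - mid_x) < d]
        for i in range(len(strip)):
            for j in range(i + 1, len(strip)):
                if strip[j][1] - strip[i][1] >= d:
                    break
                d = min(d, dist(strip[i], strip[j]))
        return d

    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y
    return solve(px, py)

# Example: closest_pair([(0, 0), (3, 4), (1, 1), (7, 2)]) -> sqrt(2)
```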

16. Define Heap, Explain the Notion of the Heap with Illustrations Also Explain the
Properties of a Heap
Heap Definition:

A heap is a specialized tree-based data structure that satisfies the heap property. Heaps are
typically used to implement priority queues, where the highest (or lowest) priority element
is always at the root.

Types of Heaps:

- Max-Heap: In a max-heap, the value of each node is greater than or equal to the values of
its children. The largest element is at the root.

- Min-Heap: In a min-heap, the value of each node is less than or equal to the values of its
children. The smallest element is at the root.

Properties of a Heap:

1. Complete Binary Tree: A heap is a complete binary tree, meaning all levels are fully filled
except possibly the last, which is filled from left to right.

2. Heap Property:

- Max-Heap Property: For any given node i, the value of i is greater than or equal to the
values of its children.

- Min-Heap Property: For any given node i, the value of i is less than or equal to the values
of its children.

Illustration of a Max-Heap:

Consider the following array: `[10, 5, 3, 2, 4]`


The corresponding max-heap is:

```
        10
       /  \
      5    3
     / \
    2   4
```

Illustration of a Min-Heap:

Consider the following array: `[1, 3, 6, 5, 9, 8]`

The corresponding min-heap is:

```
        1
       / \
      3   6
     / \   \
    5   9   8
```
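For comparison, Python's standard `heapq` module maintains exactly this min-heap invariant on a plain list:

```python
import heapq

data = [1, 3, 6, 5, 9, 8]
heapq.heapify(data)         # rearranges the list into a min-heap in place
print(data[0])              # the smallest element is always at the root: 1
print(heapq.heappop(data))  # removes and returns the min: 1
```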

17. Define Heap Sort. Consider the Array: `arr[] = {4, 10, 3, 5, 1}`. Build a Complete Binary
Tree from the Array.

Heap Sort Definition:

Heap sort is a comparison-based sorting technique based on a binary heap data structure. It
is similar to the selection sort where we first find the maximum element and place it at the
end. We repeat the same process for the remaining elements.

Steps Involved in Heap Sort:


1. Build a Max-Heap from the input data.

2. Swap the root (largest value) with the last item of the heap.

3. Reduce the heap size by one and heapify the root element to get the highest element at
the root again.

4. Repeat steps 2-3 while the heap size is greater than one (a compact code sketch follows).
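Putting these steps together, a compact Python sketch of heap sort (it reuses the same `heapify` helper idea detailed in question 18):

```python
def heapify(arr, n, i):
    # Sift arr[i] down until the subtree rooted at i satisfies the max-heap property
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Step 1: build a max-heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Steps 2-4: repeatedly swap the root with the last item and re-heapify
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)

# Example: arr = [4, 10, 3, 5, 1]; heap_sort(arr) leaves arr == [1, 3, 4, 5, 10]
```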

Building a Complete Binary Tree from the Array `arr[] = {4, 10, 3, 5, 1}`:

1. Insert elements into the binary tree level by level:

```
        4
       / \
      10  3
     /  \
    5    1
```

2. Now, convert this binary tree into a max-heap by heapifying:

```
        10
       /  \
      5    3
     / \
    4   1
```

Array Representation after Heapify:

`arr[] = {10, 5, 3, 4, 1}`


18. Explain the Principles for Constructing a Heap - Explain Bottom-Up and Top-Down
Heap Construction in Detail with Appropriate Algorithms for Each

Principles of Heap Construction:

Heap construction involves creating a heap from an unsorted array by ensuring that the
heap property is maintained throughout the structure. The two common methods for
constructing a heap are bottom-up and top-down.

Bottom-Up Heap Construction:

In bottom-up heap construction, we start from the lowest non-leaf node and move upwards,
ensuring that each node and its children satisfy the heap property.

Algorithm (Bottom-Up Heap Construction):

```python
def heapify(arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2

    if left < n and arr[i] < arr[left]:
        largest = left

    if right < n and arr[largest] < arr[right]:
        largest = right

    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def build_max_heap(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
```

- Example: Given `arr[] = {4, 10, 3, 5, 1}`, after applying bottom-up heap construction, the
heapified array is `{10, 5, 3, 4, 1}`.

Top-Down Heap Construction:

In top-down heap construction, we start with an empty heap and insert elements one by
one, ensuring the heap property is maintained after each insertion.

Algorithm (Top-Down Heap Construction):

```python
def heap_insert(arr, n, value):
    arr.append(value)
    i = n
    # Bubble the new value up while it is larger than its parent
    while i > 0 and arr[(i - 1) // 2] < arr[i]:
        arr[i], arr[(i - 1) // 2] = arr[(i - 1) // 2], arr[i]
        i = (i - 1) // 2

def build_heap_top_down(arr):
    heap = []
    for value in arr:
        heap_insert(heap, len(heap), value)
    return heap
```
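Running the top-down construction on the same sample array reproduces the earlier heap (a worked check under the code above):

```python
print(build_heap_top_down([4, 10, 3, 5, 1]))  # -> [10, 5, 3, 4, 1]
```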
19. Explain (i) New Key Insertion (ii) Deletion of a Key (iii) Maximum Key Deletion and (iv)
The Efficiency of Deletion in Heap with Appropriate Illustrations and Algorithmic Examples

(i) New Key Insertion:

To insert a new key into a heap, we add the key at the end of the array (or heap), and then
we "bubble up" or "heapify up" the new key until the heap property is restored.

Algorithm:

```python
def insert_key(heap, key):
    heap.append(key)
    i = len(heap) - 1
    while i != 0 and heap[(i - 1) // 2] < heap[i]:
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2
```

Example: Insert 6 into the max-heap `{10, 5, 3, 4, 1}`. The key is appended at the end and then swapped with its parent 3, giving `{10, 5, 6, 4, 1, 3}`.

(ii) Deletion of a Key:

To delete a key from a heap, replace the key with the last element in the heap, remove the
last element, and then "bubble down" or "heapify down" the replaced element until the
heap property is restored.

Algorithm:

```python
def delete_key(heap, i):
    # Replace the key with the last element, then sift down
    # (this simple version assumes the replacement never needs to move up)
    heap[i] = heap[-1]
    heap.pop()
    heapify(heap, len(heap), i)
```
Example: Deleting the root (10) from `{10, 5, 6, 4, 1, 3}` moves the last element 3 to the root, giving `{3, 5, 6, 4, 1}`, and heapifying restores the heap `{6, 5, 3, 4, 1}`.

(iii) Maximum Key Deletion:

In a max-heap, the maximum key is always at the root. Deleting the maximum key involves
removing the root and restoring the heap property.

Algorithm:

```python
def extract_max(heap):
    if len(heap) == 0:
        return None
    root = heap[0]
    heap[0] = heap[-1]
    heap.pop()
    heapify(heap, len(heap), 0)
    return root
```

Example: Extracting the max from `{10, 5, 6, 4, 1, 3}` returns 10 and leaves the heap `{6, 5, 3, 4, 1}`.

(iv) Efficiency of Deletion:

- Time Complexity: The time complexity for deletion (including heapify operation) is O(log
n), where n is the number of elements in the heap.

- Efficiency: Deletion in a heap is efficient due to the logarithmic time complexity, which
makes it faster compared to linear data structures.
20. Discuss with Examples (i) Horspool’s Algorithm (ii) Boyer-Moore Algorithm

(i) Horspool’s Algorithm:

Horspool's algorithm is an efficient string-matching algorithm that is a variation of the Boyer-Moore algorithm. It preprocesses the pattern to create a shift table that determines how much the pattern should be shifted when a mismatch occurs.

Steps Involved:

1. Create a shift table for all characters in the alphabet.

2. Align the pattern with the text and compare characters from right to left.

3. If a mismatch occurs, use the shift table to determine how far to shift the pattern.

4. Repeat the process until a match is found or the text is exhausted.

Example:

Text: "ABC ABCDAB ABCDABCDABDE"

Pattern: "ABCDABD" (length m = 7)

Shift table (for each character c among the first m-1 pattern characters, the shift is m-1 minus the rightmost position of c; every other character shifts by the full pattern length):

A -> 2

B -> 1

C -> 4

D -> 3

Other -> 7 (length of pattern)

Matching process:

1. Align pattern "ABCDABD" with the text.


2. Compare from right to left; shift pattern using the table on mismatch.

3. Continue until match or end of text.
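A runnable Python sketch of Horspool's algorithm (function names are illustrative):

```python
def shift_table(pattern):
    m = len(pattern)
    # Default shift is the full pattern length; characters among the first
    # m-1 positions shift by their distance from the last position
    table = {}
    for j in range(m - 1):
        table[pattern[j]] = m - 1 - j
    return table

def horspool_search(text, pattern):
    m, n = len(pattern), len(text)
    table = shift_table(pattern)
    i = m - 1  # position of the pattern's last character within the text
    while i < n:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1  # compare right to left
        if k == m:
            return i - m + 1  # full match found
        i += table.get(text[i], m)  # shift by the table entry for text[i]
    return -1

# Example: horspool_search("ABC ABCDAB ABCDABCDABDE", "ABCDABD") -> 15
```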

(ii) Boyer-Moore Algorithm:

The Boyer-Moore algorithm is a string-matching algorithm that is particularly efficient for matching long patterns. It uses two heuristic methods, the "bad character rule" and the "good suffix rule", to skip unnecessary comparisons.

Steps Involved:

1. Bad Character Rule: When a mismatch occurs, shift the pattern so that the bad character
in the text aligns with its last occurrence in the pattern.

2. Good Suffix Rule: If a mismatch occurs at position i, shift the pattern so that the suffix of
the pattern that matches the text aligns with another occurrence of the suffix in the pattern.

Example:

Text: "ABAAABCD"

Pattern: "ABC"

- Start comparing from the rightmost character of the pattern: aligning "ABC" against "ABA", the pattern's 'C' mismatches the text's 'A'.

- Apply the bad character rule: the mismatched text character 'A' last occurs at position 0 of the pattern, so the pattern shifts right by two positions and the comparison repeats (see the sketch below).
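A simplified Python sketch using only the bad character rule (the good suffix rule is omitted for brevity; names are illustrative):

```python
def boyer_moore_bad_char(text, pattern):
    m, n = len(pattern), len(text)
    # Rightmost occurrence of each character in the pattern
    last = {c: i for i, c in enumerate(pattern)}
    s = 0  # current shift of the pattern over the text
    while s <= n - m:
        j = m - 1
        # Compare right to left at the current alignment
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            return s  # match found at shift s
        # Align text[s + j] with its last occurrence in the pattern
        # (shift at least 1 to guarantee progress)
        s += max(1, j - last.get(text[s + j], -1))
    return -1

# Example: boyer_moore_bad_char("ABAAABCD", "ABC") -> 4
```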

21. Define AVL Trees? Explain Different Rotation Types in AVL Trees with Sketches

AVL Tree Definition:

An AVL tree is a self-balancing binary search tree where the difference in heights between
the left and right subtrees of any node is at most one. The AVL tree is named after its
inventors Adelson-Velsky and Landis.
Properties of AVL Trees:

- Balance Factor: For each node, the height difference between the left and right subtrees is
called the balance factor. An AVL tree maintains a balance factor of -1, 0, or 1 for every node.

- Rotations: Rotations are used to restore the balance in an AVL tree whenever nodes are
inserted or deleted.

Rotation Types:

1. Single Right Rotation (LL Rotation): Occurs when a node is inserted into the left subtree
of the left child.

```
        z                                y
       / \                             /   \
      y   T4      Right Rotate(z)     x     z
     / \          --------------->   / \   / \
    x   T3                         T1  T2 T3  T4
   / \
  T1  T2
```

2. Single Left Rotation (RR Rotation): Occurs when a node is inserted into the right subtree
of the right child.

```
    z                                 y
   / \                              /   \
  T1   y         Left Rotate(z)    z     x
      / \        -------------->  / \   / \
    T2   x                      T1  T2 T3  T4
        / \
      T3  T4
```
3. Left-Right Rotation (LR Rotation): A double rotation, first left on the left child, then right
on the unbalanced node.

```
      z                       z                         x
     / \                     / \                      /   \
    y   T4                  x   T4                   y     z
   / \    Left Rotate(y)   / \     Right Rotate(z)  / \   / \
  T1  x   ------------->  y   T3   -------------> T1 T2 T3 T4
     / \                 / \
    T2  T3             T1  T2
```

4. Right-Left Rotation (RL Rotation): A double rotation, first right on the right child, then left
on the unbalanced node.

```
    z                        z                          x
   / \                      / \                       /   \
  T1   y                   T1   x                    z     y
      / \  Right Rotate(y)     / \   Left Rotate(z) / \   / \
     x   T4 ------------->   T2   y  ------------> T1 T2 T3 T4
    / \                          / \
   T2  T3                       T3  T4
```
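A Python sketch of the two primitive rotations that implement all four cases (the node structure is an assumption for illustration):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def rotate_right(z):
    # LL case: promote the left child y; z becomes y's right child
    y = z.left
    T3 = y.right
    y.right = z
    z.left = T3
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y  # new subtree root

def rotate_left(z):
    # RR case: promote the right child y; z becomes y's left child
    y = z.right
    T2 = y.left
    y.left = z
    z.right = T2
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y  # new subtree root
```

The LR and RL cases are compositions of these two primitives, exactly as in the sketches above.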
