dsa-dsa-notes
        cout << ptr->data << " ";
        ptr = ptr->next;
    }
}

void main()
{
    stack s;
    s.push(10);
    s.display();
    cout << endl;
}

//Queue implemented using a linked list
#include <iostream.h>
#include <conio.h>

class queue
{
    struct node
    {
        int data;
        struct node *next;
    } *front, *rear;

public:
    queue();
    void enqueue(int);
    void dequeue();
    void display();
};

queue::queue()
{
    front = rear = NULL;
}

void queue::enqueue(int item)
{
    struct node *p;
    p = new node;
    p->data = item;
    p->next = NULL;
    if (front == NULL)
        front = p;
    if (rear != NULL)
        rear->next = p;
    rear = p;
}

void queue::dequeue()
{
    struct node *q;
    if (front == NULL)
        cout << "\nThe queue is Empty";
    else
    {
        q = front;
        front = front->next;
        cout << "\nThe value popped is " << q->data;
        delete q;
        if (front == NULL)
            rear = NULL;
    }
}

void queue::display()
{
    struct node *p = front;
    if (front == NULL)
        cout << "\nNothing to Display\n";
    else
    {
        cout << "\nThe contents of Queue\n";
        while (p != NULL)
        {
            cout << p->data << " ";
            p = p->next;
        }
    }
}

void main()
{
    queue q;
    q.enqueue(10);
    q.display();
    q.enqueue(20);
    q.enqueue(30);
    q.enqueue(40);
    q.enqueue(50);
    q.display();
    q.dequeue();
    q.display();
    q.dequeue();
    q.display();
    q.dequeue();
    q.display();
    getch();
}

Lecture notes on Data Structures And Algorithms, By Dilendra Bhatt, Assistant Professor, NCIT

Chapter Recursion

A recursive method is a method that calls itself either directly or indirectly.

There are two key requirements to make sure that the recursion is successful:
• Every recursive call must simplify the computation in some way.
• There must be special cases to handle the simplest computations.

Iteration Vs. Recursion
• If a recursive method is called with a base case, the method returns a result. If a method is called with a more complex problem, the method divides the problem into two or more conceptual pieces: a piece that the method knows how to do and a slightly smaller version of the original problem. Because this new problem looks like the original problem, the method launches a recursive call to work on the smaller problem.
• For recursion to terminate, each time the recursive method calls itself with a slightly simpler version of the original problem, the sequence of smaller and smaller problems must converge on the base case. When the method recognizes the base case, the result is returned to the previous method call, and a sequence of returns ensues all the way up the line until the original call of the method eventually returns the final result.
• Both iteration and recursion are based on a control structure: iteration uses a repetition structure; recursion uses a selection structure.
• Both iteration and recursion involve repetition: iteration explicitly uses a repetition structure; recursion achieves repetition through repeated method calls.
• Iteration and recursion each involve a termination test: iteration terminates when the loop-continuation condition fails; recursion terminates when a base case is recognized.
• Iteration and recursion can occur infinitely: an infinite loop occurs with iteration if the loop-continuation test never becomes false; infinite recursion occurs if the recursion step does not reduce the problem in a manner that converges on the base case.
• Recursion repeatedly invokes the mechanism, and consequently the overhead, of method calls. This can be expensive in both processor time and memory space.

The advantages and disadvantages of the two are not always obvious, and you should really take it on a case-by-case basis. When in doubt: test it. But generally:
1. Recursion may be slower, and use greater resources, because of the extra function calls.
2. Recursion may lead to simpler, shorter, easier-to-understand functions, especially for mathematicians who are comfortable with induction formulae.
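The trade-off above can be made concrete by writing one computation both ways. Factorial is an illustrative choice (not an example from these notes); a minimal sketch:

```cpp
#include <cassert>

// Recursive version: the base case (n <= 1) is the special case that
// stops the recursion, and each call simplifies the problem from n to n - 1.
long factorialRecursive(int n)
{
    if (n <= 1)
        return 1;                          // base case
    return n * factorialRecursive(n - 1);  // slightly smaller version of the problem
}

// Iterative version: a repetition structure replaces the chain of calls.
long factorialIterative(int n)
{
    long result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}
```

Both compute the same values; the recursive form pays the overhead of one function call per step, while the iterative form uses a single loop variable.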
Either way, one of the most critical aspects of writing either a recursive or an iterative function is to have an exit clause (or "base case" for recursion) that is checked at every recursion/iteration and is guaranteed to be reached at some finite point, given any input. Otherwise you will get either:

Infinite recursion: more and more functions being spawned, never closing, using up resources until there are none left, possibly crashing the system.
Infinite loop: a loop that cycles forever, burning CPU and never completing.

Recursive Fibonacci

Fibonacci can be defined recursively as follows. Each number is the sum of the previous two numbers in the sequence. The nth Fibonacci number is

Fib(n) = Fib(n-1) + Fib(n-2)
Fib(0) = 1, Fib(1) = 1.

That is, the nth number is formed by adding the two previous numbers together. The equation for Fib(n) is said to be the recurrence relation, and the terms for Fib(0) and Fib(1) are the base cases. The code for the recursive Fibonacci sequence is provided in Figure 3 below.

int Fib(int n)
{
    if (n <= 1)
        return 1;
    else
        return Fib(n-1) + Fib(n-2);
}

Figure 3: Fibonacci in C.

To find Fib(n) you need to call Fib two more times: once for Fib(n-1) and once more for Fib(n-2); then you add the results. But to find Fib(n-1), you need to call Fib two more times: once for Fib(n-2) and once more for Fib(n-3); then you need to add the results. And so on! You can see the recursion tree for Fib(n) in Figure 4. The recursion tree shows each call to Fib() with different values for n.

[Figure 4: The recursion tree for Fib(n).]

For example, consider Fib(5), the fifth term in the sequence. To find Fib(5), you would need to find Fib(4) + Fib(3). But to find Fib(4), you would need to find Fib(3) + Fib(2). Fib(2) in turn expands to Fib(1) + Fib(0), both of which are base cases; the answer to each is 1. And so on.
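For contrast with the recursion tree above, the sequence can also be built up with a simple loop, as the iterative Fibonacci section below describes. A minimal sketch, using the notes' convention Fib(0) = Fib(1) = 1:

```cpp
#include <cassert>

// Iterative Fibonacci with the convention Fib(0) = Fib(1) = 1.
// A single while loop replaces the exponential tree of recursive calls.
int fibIterative(int n)
{
    int prev = 1, curr = 1;      // Fib(0) and Fib(1)
    int i = 1;
    while (i < n)
    {
        int next = prev + curr;  // Fib(i+1) = Fib(i) + Fib(i-1)
        prev = curr;
        curr = next;
        ++i;
    }
    return curr;
}
```

Each value is computed exactly once, so the loop runs in O(n) time, whereas the plain recursive version recomputes the same subproblems over and over.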
Look at Figure 5 for an example recursion tree for Fib(5): it lists all of the computations needed to carry out the calculation recursively.

[Figure 5: The recursion tree for Fib(5). Base cases are highlighted in blue.]

It may help to visualize the nested instances of the series of recursive calls for Fib(5) in a sort of table. Figure 6 shows the recursive calls for Fib(5), with each call generating two subtables. The call to Fib() returns when both subtables have been worked down to their respective base cases, and the return value is propagated up the tree (down the table):

Fib(5) = Fib(4) + Fib(3)
    Fib(4) = Fib(3) + Fib(2)
        Fib(3) = Fib(2) + Fib(1)
            Fib(2) = Fib(1) + Fib(0) = 1 + 1 = 2
        Fib(3) = 2 + 1 = 3
        Fib(2) = Fib(1) + Fib(0) = 1 + 1 = 2
    Fib(4) = 3 + 2 = 5
    Fib(3) = Fib(2) + Fib(1) = 2 + 1 = 3
Fib(5) = 5 + 3 = 8

Figure 6: Nested calls to Fib for Fib(5).

Iterative Fibonacci

The Fibonacci sequence can be defined iteratively by describing how to obtain the next value in the sequence. That is, if you start at the base cases of Fib(0) = Fib(1) = 1, you can build the sequence up to Fib(n) without using recursive calls. For example, calculating Fib(5) may go as follows.

Fib(0) = 1
Fib(1) = 1
Fib(2) = Fib(1) + Fib(0) = 1 + 1 = 2
Fib(3) = Fib(2) + Fib(1) = 2 + 1 = 3
Fib(4) = Fib(3) + Fib(2) = 3 + 2 = 5
Fib(5) = Fib(4) + Fib(3) = 5 + 3 = 8

All that is needed for this calculation is a while loop!

RECURSION VS. ITERATION

We have studied both recursion and iteration. They can be applied to a program depending upon the situation. The following table explains the differences between recursion and iteration.

Recursion Vs.
Iteration

Recursion:
• Recursion is the term given to the mechanism of defining a set or procedure in terms of itself.
• A conditional statement is required in the body of the function for stopping the function execution.
• At some places, use of recursion generates extra overhead; hence, it is better to skip it when an easy solution is available with iteration.
• Recursion is expensive in terms of speed and memory.

Iteration:
• The block of statements is executed repeatedly using loops.
• The iteration control statement itself contains a statement for stopping the iteration. At every execution, the condition is checked.
• All problems can be solved with iteration.
• Iteration does not create any overhead. All the programming languages support iteration.

ADVANTAGES & DISADVANTAGES OF RECURSION

Advantages of recursion:
1. Sometimes a problem can be solved without recursion, but in some situations it is a must to use recursion. For example, a program to display the list of all files of the system cannot be written without recursion.
2. Recursion is very flexible in data structures like stacks, queues, linked lists and quick sort.
3. Using recursion, the length of the program can be reduced.

Chapter 6 Trees

Basics

Some basic terminology for trees:
• Trees are formed from nodes and edges. Nodes are sometimes called vertices. Edges are sometimes called branches.
• Nodes may have a number of properties, including value and label.
• Edges are used to relate nodes to each other. In a tree, this relation is called "parenthood."
• An edge {a,b} between nodes a and b establishes a as the parent of b. Also, b is called a child of a.
• Although edges are usually drawn as simple lines, they are really directed from parent to child. In tree drawings, this is top-to-bottom.
• Informal Definition: a tree is a collection of nodes, one of which is distinguished as "root," along with a relation ("parenthood") that is shown by edges.
• Formal Definition: This definition is "recursive" in that it defines a tree in terms of itself. The definition is also "constructive" in that it describes how to construct a tree.
1. A single node is a tree. It is "root."
2. Suppose N is a node and T1, T2, ..., Tk are trees with roots n1, n2, ..., nk respectively. We can construct a new tree T by making N the parent of the nodes n1, n2, ..., nk. Then, N is the root of T and T1, T2, ..., Tk are subtrees.

[Figure: the tree T, constructed using k subtrees.]

More terminology

A node is either internal or it is a leaf. A leaf is a node that has no children. Every node in a tree (except root) has exactly one parent. The degree of a node is the number of children it has. The degree of a tree is the maximum degree of all of its nodes.

Paths and Levels
• Definition: A path is a sequence of nodes n1, n2, ..., nk such that node ni is the parent of node ni+1, for all 1 <= i <= k-1.

Because both the left and right subtrees of a BST are again search trees, the above definition is recursively applied to all internal nodes.

Exercise. Given a sequence of numbers: 11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31
Draw a binary search tree by inserting the above numbers from left to right.

Searching

Searching in a BST always starts at the root. We compare the data stored at the root with the key we are searching for (let us call it toSearch). If the node does not contain the key, we proceed either to the left or right child depending upon the comparison. If the result of the comparison is negative, we go to the left child; otherwise, to the right child.
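The search rule just described can be sketched as a short loop over a minimal node type (the names here are illustrative and differ from the full program later in these notes):

```cpp
#include <cassert>
#include <cstddef>

struct Node
{
    int data;
    Node* left;
    Node* right;
};

// Search starts at the root: smaller keys live in the left subtree,
// larger keys in the right. Returns the matching node, or NULL.
Node* search(Node* root, int toSearch)
{
    Node* curr = root;
    while (curr != NULL)
    {
        if (toSearch == curr->data)
            return curr;              // key found
        else if (toSearch < curr->data)
            curr = curr->left;        // comparison negative: go left
        else
            curr = curr->right;       // otherwise: go right
    }
    return NULL;                      // fell off the tree: key is absent
}
```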
The recursive structure of a BST yields a recursive algorithm. Searching in a BST has O(h) worst-case runtime complexity, where h is the height of the tree. Since a binary search tree with n nodes has a minimum of O(log n) levels, it takes at least O(log n) comparisons to find a particular node. Unfortunately, a binary search tree can degenerate to a linked list, increasing the search time to O(n).

Deletion

Deletion is somewhat more tricky than insertion. There are several cases to consider. A node to be deleted (let us call it toDelete):
• is not in the tree;
• is a leaf;
• has only one child;
• has two children.

If toDelete is not in the tree, there is nothing to delete. If toDelete has only one child, the procedure of deletion is identical to deleting a node from a linked list: we just bypass the node being deleted.

[Pictures: the tree before and after deleting a node with one child.]

Deletion of an internal node with two children is less straightforward. If we simply deleted such a node, we would split the tree into two subtrees, and therefore some children of the internal node would not be accessible after deletion. In the picture below we delete 8:

[Pictures: the tree before and after deleting 8.]

The deletion strategy is the following: replace the node being deleted with the largest node in the left subtree, and then delete that largest node. By symmetry, the node being deleted can be swapped with the smallest node in the right subtree.

Exercise. Given a sequence of numbers: 11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31
Draw a binary search tree by inserting the above numbers from left to right, and then show the two trees that can be the result after the removal of 11.

Binary search tree. Adding a value

Adding a value to a BST can be divided into two stages:
• search for a place to put the new element;
• insert the new element into this place.
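The two stages listed above can be sketched as one recursive function (illustrative names, not the class used in the program later in these notes):

```cpp
#include <cassert>
#include <cstddef>

struct Node
{
    int data;
    Node* left;
    Node* right;
};

// Walks down the tree to find the free place (stage 1),
// then attaches the new element there as a leaf (stage 2).
// Duplicate values are ignored.
void insertValue(Node*& root, int value)
{
    if (root == NULL)                        // free place found
    {
        root = new Node{value, NULL, NULL};  // a new node is always a leaf
        return;
    }
    if (value < root->data)
        insertValue(root->left, value);      // continue in the left subtree
    else if (value > root->data)
        insertValue(root->right, value);     // continue in the right subtree
    // value == root->data: duplicate found, nothing to insert
}
```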
Let us see these stages in more detail.

Search for a place

At this stage the algorithm should follow the binary search tree property. If a new value is less than the current node's value, go to the left subtree; else go to the right subtree. Following this simple rule, the algorithm reaches a node which has no left or right subtree. By the time a place for insertion is found, we can say for sure that the new value has no duplicate in the tree. Initially, a new node has no children, so it is a leaf. Let us see it in the picture. Gray circles indicate possible places for a new node.

[Picture: the tree, with gray circles marking possible places for a new node.]

Now, let's go down to the algorithm itself. Here, and in almost every operation on a BST, recursion is utilized. Starting from the root:
1. Check whether the value in the current node and the new value are equal. If so, a duplicate is found. Otherwise:
2. If the new value is less than the node's value:
   • if the current node has no left child, a place for insertion has been found;
   • otherwise, handle the left child with the same algorithm.
3. If the new value is greater than the node's value:
   • if the current node has no right child, a place for insertion has been found;
   • otherwise, handle the right child with the same algorithm.

Just before the code snippets, let us have a look at an example demonstrating a case of insertion in the binary search tree.

Example. Insert 4 into the tree shown above.

[Picture: the tree after inserting 4.]

Binary search tree. Removing a node

The remove operation on a binary search tree is more complicated than add and search. Basically, it can be divided into two stages:
• search for a node to remove;
• if the node is found, run the remove algorithm.

Remove algorithm in detail

Now, let's see a more detailed description of the remove algorithm.
The first stage is identical to the algorithm for lookup, except we should track the parent of the current node. The second part is more tricky. There are three cases, which are described below.

1. Node to be removed has no children.
This case is quite simple. The algorithm sets the corresponding link of the parent to NULL and disposes of the node.
Example. Remove -4 from the BST.

[Pictures: the tree before and after removing -4.]

2. Node to be removed has one child.
In this case, the node is cut from the tree and the algorithm links its single child (with its subtree) directly to the parent of the removed node.
Example. Remove 18 from the BST.

[Pictures: the tree before and after removing 18.]

3. Node to be removed has two children.
This is the most complex case. To solve it, let us see one useful BST property first. We are going to use the idea that the same set of values may be represented as different binary search trees. For example, these BSTs:

[Picture: two different BSTs over the same values.]

contain the same values {5, 19, 21, 25}. To transform the first tree into the second one, we can do the following:
• choose the minimum element from the right subtree (19 in the example);
• replace 5 with 19;
• hang 5 as a left child.

The same approach can be utilized to remove a node which has two children:
• find the minimum value in the right subtree;
• replace the value of the node to be removed with the found minimum (now the right subtree contains a duplicate!);
• apply remove to the right subtree to remove the duplicate.

Notice that the node with the minimum value has no left child, and therefore its removal may result in the first or second case only.

Example. Remove 12 from the BST.

[Picture: the tree containing 12.]

Find the minimum element in the right subtree of the node to be removed.
In the current example it is 19.

[Picture: the tree, with 19 highlighted as the minimum of the right subtree.]

Replace 12 with 19. Notice that only values are replaced, not nodes. Now we have two nodes with the same value.

[Picture: the tree with two nodes containing 19.]

Remove the duplicate 19 from the right subtree.

[Picture: the final tree.]

//Binary Search Tree Program
#include <iostream>
#include <cstdlib>
using namespace std;

class BinarySearchTree
{
private:
    struct tree_node
    {
        tree_node* left;
        tree_node* right;
        int data;
    };
    tree_node* root;

public:
    BinarySearchTree()
    {
        root = NULL;
    }
    bool isEmpty() const { return root == NULL; }
    void print_inorder();
    void inorder(tree_node*);
    void print_preorder();
    void preorder(tree_node*);
    void print_postorder();
    void postorder(tree_node*);
    void insert(int);
    void remove(int);
};

//Smaller elements go left
//Larger elements go right
void BinarySearchTree::insert(int d)
{
    tree_node* t = new tree_node;
    tree_node* parent = NULL;
    t->data = d;
    t->left = NULL;
    t->right = NULL;

    //is this a new tree?
    if (isEmpty())
        root = t;
    else
    {
        //Note: ALL insertions are as leaf nodes
        tree_node* curr;
        curr = root;
        //Find the node's parent
        while (curr)
        {
            parent = curr;
            if (t->data > curr->data)
                curr = curr->right;
            else
                curr = curr->left;
        }
        if (t->data < parent->data)
            parent->left = t;
        else
            parent->right = t;
    }
}

void BinarySearchTree::remove(int d)
{
    //Locate the element
    bool found = false;
    if (isEmpty())
    {
        cout << " This Tree is empty! " << endl;
        return;
    }
    tree_node* curr = root;
    tree_node* parent = NULL;

    while (curr != NULL)
    {
        if (curr->data == d)
        {
            found = true;
            break;
        }
        else
        {
            parent = curr;
            if (d > curr->data)
                curr = curr->right;
            else
                curr = curr->left;
        }
    }
    if (!found)
    {
        cout << " Data not found! " << endl;
        return;
    }

    //Node with single child
    if ((curr->left == NULL && curr->right != NULL) ||
        (curr->left != NULL && curr->right == NULL))
    {
        if (curr->left == NULL && curr->right != NULL)
        {
            if (parent->left == curr)
            {
                parent->left = curr->right;
                delete curr;
            }
            else
            {
                parent->right = curr->right;
                delete curr;
            }
        }
        else //left child present, no right child
        {
            if (parent->left == curr)
            {
                parent->left = curr->left;
                delete curr;
            }
            else
            {
                parent->right = curr->left;
                delete curr;
            }
        }
        return;
    }

    //We're looking at a leaf node
    if (curr->left == NULL && curr->right == NULL)
    {
        if (parent == NULL)        //deleting the root itself
            root = NULL;
        else if (parent->left == curr)
            parent->left = NULL;
        else
            parent->right = NULL;
        delete curr;
        return;
    }

    //Node with 2 children
    //replace node with smallest value in right subtree
    if (curr->left != NULL && curr->right != NULL)
    {
        tree_node* chkr;
        chkr = curr->right;
        if ((chkr->left == NULL) && (chkr->right == NULL))
        {
            curr->data = chkr->data;
            delete chkr;
            curr->right = NULL;
        }
        else //right child has children
        {
            //if the node's right child has a left child,
            //move all the way down left to locate smallest element
            if ((curr->right)->left != NULL)
            {
                tree_node* lcurr;
                tree_node* lcurrp;
                lcurrp = curr->right;
                lcurr = (curr->right)->left;
                while (lcurr->left != NULL)
                {
                    lcurrp = lcurr;
                    lcurr = lcurr->left;
                }
                curr->data = lcurr->data;
                lcurrp->left = lcurr->right;
                delete lcurr;
            }
            else
            {
                tree_node* tmp;
                tmp = curr->right;
                curr->data = tmp->data;
                curr->right = tmp->right;
                delete tmp;
            }
        }
        return;
    }
}

void BinarySearchTree::print_inorder()
{
    inorder(root);
}

void BinarySearchTree::inorder(tree_node* p)
{
    if (p != NULL)
    {
        if (p->left)
            inorder(p->left);
        cout << " " << p->data << " ";
        if (p->right)
            inorder(p->right);
    }
    else
        return;
}

void BinarySearchTree::print_preorder()
{
    preorder(root);
}

void BinarySearchTree::preorder(tree_node* p)
{
    if (p != NULL)
    {
        cout << " " << p->data << " ";
        if (p->left)
            preorder(p->left);
        if (p->right)
            preorder(p->right);
    }
    else
        return;
}

void BinarySearchTree::print_postorder()
{
    postorder(root);
}

void BinarySearchTree::postorder(tree_node* p)
{
    if (p != NULL)
    {
        if (p->left)
            postorder(p->left);
        if (p->right)
            postorder(p->right);
        cout << " " << p->data << " ";
    }
    else
        return;
}

int main()
{
    BinarySearchTree b;
    int ch, tmp, tmp1;
    while (1)
    {
        cout << endl << " Operations on Binary Search Tree " << endl;
        cin >> ch;
        switch (ch)
        {
        case 1:
            cout << " Enter Number to be inserted: ";
            cin >> tmp;
            b.insert(tmp);
            break;
        case 2:
            cout << " Enter Number to be removed: ";
            cin >> tmp1;
            b.remove(tmp1);
            break;
        case 6:
            return 0;
        }
    }
    return 0;
}

Rebuild a binary tree from Inorder and Preorder traversals

This is a well-known problem: given any two traversals of a tree, such as inorder & preorder, or inorder & postorder, or inorder & levelorder traversals, we need to rebuild the tree. The following procedure demonstrates how to rebuild a tree from given inorder and preorder traversals of a binary tree:
• Preorder traversal visits node, left subtree, right subtree recursively.
• Inorder traversal visits left subtree, node, right subtree recursively.

Since we know that the first node in preorder is the root, we can easily locate the root node in the inorder traversal, and hence we can obtain the left subtree and right subtree from the inorder traversal recursively.

Consider the following example:
Preorder traversal: 1 2 4 8 9 10 11 5 3 6 7
Inorder traversal: 8 4 10 9 11 2 5 1 6 3 7

Iteration 1:
Root: {1}
Left subtree: {8, 4, 10, 9, 11, 2, 5}
Right subtree: {6, 3, 7}

Iteration 2:
For {8, 4, 10, 9, 11, 2, 5}: Root: {2}, Left subtree: {8, 4, 10, 9, 11}, Right subtree: {5}
For {6, 3, 7}: Root: {3}, Left subtree: {6}, Right subtree: {7}

Iteration 3:
For {8, 4, 10, 9, 11}: Root: {4}, Left subtree: {8}, Right subtree: {10, 9, 11}
The subtrees {5}, {6} and {7} are single nodes and are done.
Iteration 4:
For {10, 9, 11}: Root: {9}, Left subtree: {10}, Right subtree: {11}
All remaining subtrees are single nodes, so the tree is complete.

Given inorder and postorder traversals construct a binary tree

Let us consider the below traversals.
Inorder sequence: D B E A F C
Postorder sequence: D E B F C A

In a postorder sequence, the rightmost element is the root of the tree. So we know A is the root. Search for A in the inorder sequence. Once we know the position of A (or index of A) in the inorder sequence, we also know that all elements on the left side of A are in the left subtree and elements on the right are in the right subtree.

[Picture: A as root, with {D, B, E} on the left and {F, C} on the right.]

Let i be the index of the root (i is 3 in the above case) in the inorder sequence. In the inorder sequence, everything from 0 to i-1 is in the left subtree, and the (i-1)th element in the postorder traversal is the root of the left subtree. If i-1 < 0, then the left child of the root is NULL. In the inorder sequence, everything from (i+1) to (n-1) is in the right subtree, and the (n-1)th element is the root of the right subtree. If n-1 is equal to i, then the right child of the root is NULL.

Recursively follow the above steps, and we get the tree shown below.

[Picture: the rebuilt tree, with A as root, B (children D and E) as its left child, and C (left child F) as its right child.]

Examples

An important example of AVL trees is the behavior on a worst-case add sequence for regular binary trees:
1, 2, 3, 4, 5, 6, 7
All insertions are right-right, and so rotations are all single rotations from the right. All but two insertions require rebalancing:

[Figures: the tree after each insertion, showing the single left rotations.]

It can be shown that inserting the sequence 1, ..., 2^(n+1) - 1 will make a perfect tree of height n. Here is another example.
The insertion sequence is: 50, 25, 10, 5, 7, 3, 30, 20, 8, 15

[Figures: the tree after each insertion, including a double rotation and a single rotation left at 25.]

add(30), add(20), add(8) need no rebalancing.

[Figure: the next insertion requires a double rotation right at 7.]

The AVL Tree Rotations

1. Rotations: How they work

A tree rotation can be an intimidating concept at first. You end up in a situation where you're juggling nodes, and these nodes have trees attached to them, and it can all become confusing very fast. I find it helps to block out what's going on with any of the sub-trees which are attached to the nodes you're fumbling with, but that can be hard.

Left Rotation (LL)

Imagine we have this situation:

Figure 1-1

a
 \
  b
   \
    c

To fix this, we must perform a left rotation, rooted at A. This is done in the following steps:

b becomes the new root.
a takes ownership of b's left child as its right child, or in this case, null.
b takes ownership of a as its left child.

The tree now looks like this:

Figure 1-2

  b
 / \
a   c

Right Rotation (RR)

A right rotation is a mirror of the left rotation operation described above. Imagine we have this situation:

Figure 1-3

    c
   /
  b
 /
a

To fix this, we will perform a single right rotation, rooted at C. This is done in the following steps:

b becomes the new root.
c takes ownership of b's right child, as its left child. In this case, that value is null.
b takes ownership of c, as its right child.

The resulting tree:

Figure 1-4

  b
 / \
a   c

Left-Right Rotation (LR) or "Double left"

Sometimes a single left rotation is not sufficient to balance an unbalanced tree. Take this situation:

Figure 1-5

a
 \
  c

Perfect. It's balanced. Let's insert b:

Figure 1-6

a
 \
  c
 /
b

Our initial reaction here is to do a single left rotation. Let's try that.
Figure 1-7

  c
 /
a
 \
  b

Our left rotation has completed, and we're stuck in the same situation. If we were to do a single right rotation in this situation, we would be right back where we started. What's causing this? The answer is that this is a result of the right subtree having a negative balance. In other words, because the right subtree was left heavy, our rotation was not sufficient. What can we do? The answer is to perform a right rotation on the right subtree. Read that again. We will perform a right rotation on the right subtree. We are not rotating on our current root. We are rotating on our right child. Think of our right subtree, isolated from our main tree, and perform a right rotation on it.

Before:

Figure 1-8

  c
 /
b

After:

Figure 1-9

b
 \
  c

After performing a rotation on our right subtree, we have prepared our root to be rotated left. Here is our tree now:

Figure 1-10

a
 \
  b
   \
    c

Looks like we're ready for a left rotation. Let's do that:

Figure 1-11

  b
 / \
a   c

Voila. Problem solved.

Right-Left Rotation (RL) or "Double right"

A double right rotation, or right-left rotation, or simply RL, is a rotation that must be performed when attempting to balance a tree which has a left subtree that is right heavy. This is a mirror operation of what was illustrated in the section on Left-Right Rotations, or double left rotations. Let's look at an example of a situation where we need to perform a Right-Left rotation.

Figure 1-12

    c
   /
  a
   \
    b

In this situation, we have a tree that is unbalanced. The left subtree has a height of 2, and the right subtree has a height of 0. This makes the balance factor of our root node, c, equal to -2. What do we do? Some kind of right rotation is clearly necessary, but a single right rotation will not solve our problem. Let's try it:

Figure 1-13

a
 \
  c
 /
b

Looks like that didn't work. Now we have a tree that has a balance of 2.
It would appear that we did not accomplish much. That is true. What do we do? Well, let's go back to the original tree, before we did our pointless right rotation:

Figure 1-14

    c
   /
  a
   \
    b

The reason our right rotation did not work is that the left subtree, a, has a positive balance factor, and is thus right heavy. Performing a right rotation on a tree that has a left subtree that is right heavy will result in the problem we just witnessed. What do we do? The answer is to make our left subtree left-heavy. We do this by performing a left rotation on our left subtree. Doing so leaves us with this situation:

Figure 1-15

    c
   /
  b
 /
a

This is a tree which can now be balanced using a single right rotation. We can now perform our right rotation rooted at C. The result:

Figure 1-16

  b
 / \
a   c

Balance at last.

2. Rotations: When to Use Them and Why

How to decide when you need a tree rotation is usually easy, but determining which type of rotation you need requires a little thought. A tree rotation is necessary when you have inserted or deleted a node which leaves the tree in an unbalanced state. An unbalanced state is defined as a state in which any subtree has a balance factor of greater than 1, or less than -1. That is, any tree with a difference between the heights of its two sub-trees greater than 1 is considered unbalanced.

This is a balanced tree:

[Figure 2-1: a balanced tree.]

This is an unbalanced tree:

Figure 2-2

1
 \
  2
   \
    3

This tree is considered unbalanced because the root node has a balance factor of 2. That is, the right subtree of 1 has a height of 2, and the height of 1's left subtree is 0. Remember that the balance factor of a tree with a left subtree A and a right subtree B is

balance factor = height(B) - height(A)

Simple. In figure 2-2, we see that the tree has a balance of 2. This means that the tree is considered "right heavy".
We can correct this by performing what is called a "left rotation". How we determine which rotation to use follows a few basic rules. See the pseudo code:

IF tree is right heavy
{
    IF tree's right subtree is left heavy
    {
        Perform Double Left rotation
    }
    ELSE
    {
        Perform Single Left rotation
    }
}
ELSE IF tree is left heavy
{
    IF tree's left subtree is right heavy
    {
        Perform Double Right rotation
    }
    ELSE
    {
        Perform Single Right rotation
    }
}

As you can see, there is a situation where we need to perform a "double rotation". A single rotation in the situations described in the pseudo code would leave the tree in an unbalanced state. Follow these rules, and you should be able to balance an AVL tree following an insert or delete every time.

B-Trees

Introduction

A B-tree is a specialized multiway tree designed especially for use on disk. In a B-tree each node may contain a large number of keys. The number of subtrees of each node, then, may also be large. A B-tree is designed to branch out in this large number of directions and to contain a lot of keys in each node so that the height of the tree is relatively small. This means that only a small number of nodes must be read from disk to retrieve an item. The goal is to get fast access to the data, and with disk drives this means reading a very small number of records. Note that a large node size (with lots of keys in the node) also fits with the fact that with a disk drive one can usually read a fair amount of data at once.

Definitions

A multiway tree of order m is an ordered tree where each node has at most m children. For each node, if k is the actual number of children in the node, then k - 1 is the number of keys in the node. If the keys and subtrees are arranged in the fashion of a search tree, then this is called a multiway search tree of order m.
For example, the following is a multiway search tree of order 4. Note that the first row in each node shows the keys, while the second row shows the pointers to the child nodes. Of course, in any useful application there would be a record of data associated with each key, so that the first row in each node might be an array of records where each record contains a key and its associated data. Another approach would be to have the first row of each node contain an array of records where each record contains a key and a record number for the associated data record, which is found in another file. This last method is often used when the data records are large. The example software will use the first method.
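Following the definition above, a node of a multiway search tree can be sketched like this (a sketch only; the order 4 and the field names are illustrative):

```cpp
#include <cassert>

const int M = 4;  // order of the multiway tree: at most M children per node

// A node of a multiway search tree of order M. With k children in use,
// the node holds k - 1 keys: keys[] is the "first row" in the drawings,
// children[] the "second row" of pointers to the child subtrees.
struct MNode
{
    int numKeys;          // number of keys currently stored (at most M - 1)
    int keys[M - 1];      // keys, kept in sorted order
    MNode* children[M];   // child pointers, one more than the keys
};

// For a node with k children, the number of keys is k - 1.
int keysForChildren(int k)
{
    return k - 1;
}
```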