* Binary Search Tree

* Sorting Algorithms [Wikipedia](https://en.wikipedia.org/wiki/Sorting_algorithm?oldformat=true)

- Using the most efficient sorting algorithm (and the correct data structures to implement it) is vital for any program, because data manipulation can be one of the most significant performance bottlenecks; the whole point of spending time determining the best algorithm for the job is to drastically improve that performance. The efficiency of an algorithm is measured by its "Big O" score ([StackOverflow](https://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation)). Really good algorithms perform important actions in O(n log n) or even O(log n) time, and some can perform certain actions in O(1) time (hash table insertion, for example). But there is always a trade-off: if an algorithm is really good at adding a new element to a data structure, it is almost certainly worse at data access than some other algorithm. If you are proficient with math, you may notice that "Big O" notation has many similarities with limits, and you would be right: it measures the best, worst and average performance of the algorithm in question by looking at its limiting behaviour. Note that when we speak of O(1), constant time, we are not saying that the algorithm performs an action in a single operation, but rather that it performs the action with (roughly) the same number of operations regardless of how many elements it has to take into account. Thankfully, a lot of "Big O" scores have already been calculated, so you don't have to guess which algorithm or data structure will perform better in your project. ["Big O" cheat sheet](http://bigocheatsheet.com/)
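
To see what these scores mean in practice, here is a minimal Python sketch (our own illustration, not taken from the cheat sheet) contrasting an O(n) membership test on a list with the average O(1) membership test on a set:

```python
import timeit

# Membership tests: a list is scanned element by element (O(n)),
# while a set hashes the key and jumps straight to it (O(1) on average).
items_list = list(range(1_000_000))
items_set = set(items_list)

print(timeit.timeit(lambda: 999_999 in items_list, number=100))  # grows with n
print(timeit.timeit(lambda: 999_999 in items_set, number=100))   # roughly constant
```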
* Bubble sort [Wikipedia](https://en.wikipedia.org/wiki/Bubble_sort?oldformat=true)
- Bubble sort is one of the simplest sorting algorithms. It just compares neighbouring elements and swaps them whenever they are out of order, so after one pass over the data it is guaranteed that **at least** one element ends up in its correct place (the biggest/smallest one, depending on the direction of sorting). This is not a very efficient algorithm, as highly unordered arrays require a lot of reordering (up to O(n^2)), but one of its advantages is its space complexity: only two elements are compared at once, and there is no need to allocate more memory than those two occupy. A minimal implementation sketch follows the table below.
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n)</td>
<td align="center">Θ(n^2)</td>
<td align="center">O(n^2)</td>
<td align="center">O(1)</td>
</tr>
</table>
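
A minimal Python sketch of the idea (the function name and the early-exit flag are our own additions; the flag is what makes the Ω(n) best case on already-sorted input possible):

```python
def bubble_sort(items):
    """Sort `items` in place in ascending order and return it."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end,
        # so the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # No swaps: already sorted, giving the Ω(n) best case.
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```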
* Selection sort [Wikipedia](https://www.wikiwand.com/en/Selection_sort)
- Selection sort first assumes that the first element of the array is the smallest, then iterates over all the other elements to check; whenever it finds a smaller one, that element becomes the new candidate minimum. At the end of the pass, the smallest element found is swapped into the beginning of the unsorted part of the array, and the process repeats for the rest. This algorithm is quite straightforward, but still not that efficient on larger data sets, because placing just one element requires going over all of the remaining data. A short sketch follows the table below.
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n^2)</td>
<td align="center">Θ(n^2)</td>
<td align="center">O(n^2)</td>
<td align="center">O(1)</td>
</tr>
</table>
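
A minimal Python sketch of the pass-and-swap idea described above (names are our own):

```python
def selection_sort(items):
    """Sort `items` in place in ascending order and return it."""
    n = len(items)
    for i in range(n - 1):
        smallest = i
        # Check the whole unsorted tail; any smaller element becomes the new candidate.
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # Put the confirmed minimum at the front of the unsorted part.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```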
* Insertion sort
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n)</td>
<td align="center">Θ(n^2)</td>
<td align="center">O(n^2)</td>
<td align="center">O(1)</td>
</tr>
</table>
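
Insertion sort grows a sorted prefix of the array: each new element is shifted left until it sits in its place among the already-sorted ones, which is why nearly-sorted input gives the Ω(n) best case. A minimal sketch, with our own naming:

```python
def insertion_sort(items):
    """Sort `items` in place in ascending order and return it."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot right...
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        # ...then drop `current` into the gap.
        items[j + 1] = current
    return items

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```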
* Mergesort
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n log(n))</td>
<td align="center">Θ(n log(n))</td>
<td align="center">O(n log(n))</td>
<td align="center">O(n)</td>
</tr>
</table>
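
Mergesort splits the data in half, sorts each half recursively, and merges the two sorted halves, which is where the O(n) extra space in the table comes from. A minimal sketch (our own naming):

```python
def merge_sort(items):
    """Return a new list with the elements of `items` in ascending order."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # One of these is empty;
    merged.extend(right[j:])  # the other holds the sorted remainder.
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```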
* Quicksort
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n log(n))</td>
<td align="center">Θ(n log(n))</td>
<td align="center">O(n^2)</td>
<td align="center">O(log(n))</td>
</tr>
</table>
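
Quicksort picks a pivot, partitions the data into elements smaller and larger than the pivot, and recurses on the two parts. The sketch below is a copy-based version, chosen for readability; note that it uses O(n) extra space, unlike the in-place partitioning variant whose O(log(n)) space the table above refers to:

```python
def quicksort(items):
    """Return a new list with the elements of `items` in ascending order."""
    if len(items) <= 1:
        return items
    # A consistently bad pivot choice is what causes the O(n^2) worst case.
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 6, 8, 10, 1, 2, 1]))  # [1, 1, 2, 3, 6, 8, 10]
```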