
Commit ad5a955

Merge pull request amitshekhariitbhu#39 from Ifhay/master
Added "Pathfinding algorithms", structured "design patterns"
2 parents 5f3d334 + 79996ed commit ad5a955

File tree

1 file changed: +49 −40 lines


README.md

Lines changed: 49 additions & 40 deletions
@@ -118,7 +118,7 @@
 * Binary Search Tree
 * Sorting Algorithms [Wikipedia](https://en.wikipedia.org/wiki/Sorting_algorithm?oldformat=true)
 - Using the most efficient sorting algorithm (and the correct data structures that implement it) is vital for any program, because data manipulation can be one of the most significant performance bottlenecks, and the main purpose of spending time determining the best algorithm for the job is to drastically improve said performance. The efficiency of an algorithm is measured by its "Big O" ([StackOverflow](https://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation)) score. Really good algorithms perform important actions in O(n log n) or even O(log n) time, and some can even perform certain actions in O(1) time (HashTable insertion, for example). But there is always a trade-off - if an algorithm is really good at adding a new element to a data structure, it is most likely much worse at data access than some other algorithm. If you are proficient with math, you may notice that "Big O" notation has many similarities with "limits", and you would be right - it measures the best, worst and average performance of the algorithm in question by looking at its limiting behaviour. It should be noted that when we speak about O(1) - constant time - we are not saying that the algorithm performs an action in one operation, but rather that it can perform the action in (roughly) the same number of operations regardless of the number of elements it has to take into account. Thankfully, a lot of "Big O" scores have already been calculated, so you don't have to guess which algorithm or data structure will perform better in your project. ["Big O" cheat sheet](http://bigocheatsheet.com/)
-* Bubble sort [Wikipedia](https://en.wikipedia.org/wiki/Bubble_sort?oldformat=true)
+- Bubble sort [Wikipedia](https://en.wikipedia.org/wiki/Bubble_sort?oldformat=true)
 - Bubble sort is one of the simplest sorting algorithms. It just compares neighbouring elements and swaps them if they are in the wrong order. So over one pass through the data list, it is guaranteed that **at least** one element ends up in its correct place (the biggest/smallest one, depending on the direction of sorting). This is not a very efficient algorithm, as highly unordered arrays require a lot of reordering (up to O(n^2)), but one of its advantages is its space complexity - only two elements are compared at once, and there is no need to allocate more memory than those two occupy.
 <table>
 <tr>
@@ -136,10 +136,9 @@
 <td align="center">Θ(n^2)</td>
 <td align="center">O(n^2)</td>
 <td align="center">O(1)</td>
-</td>
 </tr>
 </table>
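
To make the description above concrete, here is a minimal bubble sort sketch in Java; the class and method names are ours for illustration, not taken from the repository:

```java
// Minimal bubble sort sketch: repeatedly swaps adjacent out-of-order
// elements; after each pass the largest remaining element is in place.
public class BubbleSortExample {
    static void bubbleSort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            boolean swapped = false;
            // Elements after index a.length - 1 - pass are already sorted.
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];      // swap neighbours in place:
                    a[i] = a[i + 1];     // only O(1) extra space is used
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps means already sorted: best case O(n)
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```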
-* Selection sort [Wikipedia](https://www.wikiwand.com/en/Selection_sort)
+- Selection sort [Wikipedia](https://www.wikiwand.com/en/Selection_sort)
 - First, selection sort assumes that the first element of the array to be sorted is the smallest, but to confirm this it iterates over all the other elements; whenever it finds a smaller one, that element becomes the new candidate. When the data ends, the element currently known to be the smallest is put at the beginning of the array. This sorting algorithm is quite straightforward, but still not that efficient on larger data sets, because it has to go over all the remaining data just to put a single element in its place.
 <table>
 <tr>
@@ -157,10 +156,9 @@
 <td align="center">Θ(n^2)</td>
 <td align="center">O(n^2)</td>
 <td align="center">O(1)</td>
-</td>
 </tr>
 </table>
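
A minimal selection sort sketch along the lines of the description above (all names are illustrative):

```java
// Minimal selection sort sketch: each pass finds the smallest remaining
// element and swaps it to the front of the unsorted part.
public class SelectionSortExample {
    static void selectionSort(int[] a) {
        for (int start = 0; start < a.length - 1; start++) {
            int minIndex = start; // assume the first unsorted element is smallest
            for (int i = start + 1; i < a.length; i++) {
                if (a[i] < a[minIndex]) {
                    minIndex = i; // found a smaller candidate
                }
            }
            int tmp = a[start];          // put the smallest element at the
            a[start] = a[minIndex];      // beginning of the unsorted part
            a[minIndex] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] data = {64, 25, 12, 22, 11};
        selectionSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [11, 12, 22, 25, 64]
    }
}
```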
-* Insertion sort [Wikipedia](https://en.wikipedia.org/wiki/Insertion_sort?oldformat=true)
+- Insertion sort [Wikipedia](https://en.wikipedia.org/wiki/Insertion_sort?oldformat=true)
 - Insertion sort is another example of an algorithm that is not difficult to implement but is also not that efficient. It does its job by "growing" a sorted portion of the data, "inserting" each newly encountered element into the already (internally) sorted part of the array, which consists of the previously encountered elements. This means that in the best case (the data is already sorted) it can confirm that its job is done in Ω(n) operations, while if none of the encountered elements are in their required order, as many as O(n^2) operations may be needed.
 <table>
 <tr>
@@ -178,12 +176,11 @@
 <td align="center">Θ(n^2)</td>
 <td align="center">O(n^2)</td>
 <td align="center">O(1)</td>
-</td>
 </tr>
 </table>
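
A minimal insertion sort sketch matching the description above (names are ours):

```java
// Minimal insertion sort sketch: grows a sorted prefix by inserting each
// new element into its place among the previously seen elements.
public class InsertionSortExample {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int current = a[i];
            int j = i - 1;
            // Shift larger elements of the sorted prefix one slot to the right.
            while (j >= 0 && a[j] > current) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = current; // insert into the slot that was opened up
        }
    }

    public static void main(String[] args) {
        int[] data = {12, 11, 13, 5, 6};
        insertionSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [5, 6, 11, 12, 13]
    }
}
```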
-* Merge sort [Wikipedia](https://en.wikipedia.org/wiki/Merge_sort?oldformat=true)
+- Merge sort [Wikipedia](https://en.wikipedia.org/wiki/Merge_sort?oldformat=true)
 - This is a "divide and conquer" algorithm, meaning it recursively "divides" the given array into smaller parts (down to 1 element) and then sorts those parts, combining them with each other. This approach allows merge sort to achieve very high speed, while doubling the required space, of course; but today memory space is much more available than it used to be, so this trade-off is considered acceptable.
-<table>
+<table>
 <tr>
 <th colspan="3" align="center">Time Complexity</th>
 <th align="center">Space Complexity</th>
@@ -199,34 +196,37 @@
 <td align="center">Θ(n log(n))</td>
 <td align="center">O(n log(n))</td>
 <td align="center">O(n)</td>
-</td>
 </tr>
 </table>
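
A minimal merge sort sketch, assuming an `int[]` input; the O(n) auxiliary space from the description above shows up as the two temporary half-arrays (names are ours):

```java
import java.util.Arrays;

// Minimal merge sort sketch: recursively splits the array in half, sorts
// both halves, then merges them using O(n) auxiliary space.
public class MergeSortExample {
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo < 2) return;          // 0 or 1 elements: already sorted
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);            // divide ...
        mergeSort(a, mid, hi);
        merge(a, lo, mid, hi);            // ... and conquer
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] left = Arrays.copyOfRange(a, lo, mid);   // the extra O(n) space
        int[] right = Arrays.copyOfRange(a, mid, hi);
        int i = 0, j = 0, k = lo;
        while (i < left.length && j < right.length) {  // take the smaller head
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) a[k++] = left[i++];    // drain the leftovers
        while (j < right.length) a[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] data = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(data, 0, data.length);
        System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
    }
}
```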
-* Quicksort [Wikipedia](https://en.wikipedia.org/wiki/Quicksort?oldformat=true)
+- Quicksort [Wikipedia](https://en.wikipedia.org/wiki/Quicksort?oldformat=true)
 - Quicksort is considered, well, quite quick. When implemented correctly, it can be several times faster than its main competitors. This algorithm is also of the "divide and conquer" family, and its first step is to choose a "pivot" element (choosing it randomly statistically minimizes the chance of hitting the worst-case performance), then, by comparing elements to this pivot, to move it closer and closer to its final place. During this process, the elements that are bigger are moved to its right side and the smaller elements to its left. After this is done, quicksort repeats the process for the subarrays on each side of the placed pivot (performs the first step recursively), until the array is sorted.
 <table>
-<tr>
-<th colspan="3" align="center">Time Complexity</th>
-<th align="center">Space Complexity</th>
-</tr>
-<tr>
-<th align="center">Best</th>
-<th align="center">Avegage</th>
-<th align="center">Worst</th>
-<th align="center">Worst</th>
-</tr>
-<tr>
-<td align="center">Ω(n^2)</td>
-<td align="center">Θ(n^2)</td>
-<td align="center">O(n^2)</td>
-<td align="center">O(1)</td>
-</td>
-</tr>
-</table>
-* Hash Table or Hash Map
-* Breadth First Search
-* Depth First Search
-* Greedy Algorithm
+<tr>
+<th colspan="3" align="center">Time Complexity</th>
+<th align="center">Space Complexity</th>
+</tr>
+<tr>
+<th align="center">Best</th>
+<th align="center">Average</th>
+<th align="center">Worst</th>
+<th align="center">Worst</th>
+</tr>
+<tr>
+<td align="center">Ω(n log(n))</td>
+<td align="center">Θ(n log(n))</td>
+<td align="center">O(n^2)</td>
+<td align="center">O(log(n))</td>
+</tr>
+</table>
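
A minimal quicksort sketch with a randomly chosen pivot, as the description above suggests; the Lomuto partition scheme used here is one common choice among several (all names are ours):

```java
import java.util.Arrays;
import java.util.Random;

// Minimal quicksort sketch (Lomuto partition): elements smaller than the
// pivot end up on its left, bigger on its right, then both sides are
// sorted recursively. The random pivot makes the O(n^2) worst case unlikely.
public class QuickSortExample {
    private static final Random RANDOM = new Random();

    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);   // sort the smaller-than-pivot side
        quickSort(a, p + 1, hi);   // sort the bigger-than-pivot side
    }

    static int partition(int[] a, int lo, int hi) {
        swap(a, lo + RANDOM.nextInt(hi - lo + 1), hi); // random pivot moved to hi
        int pivot = a[hi];
        int boundary = lo; // everything left of boundary is < pivot
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) swap(a, i, boundary++);
        }
        swap(a, boundary, hi); // the pivot lands in its final place
        return boundary;
    }

    static void swap(int[] a, int i, int j) {
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }

    public static void main(String[] args) {
        int[] data = {10, 80, 30, 90, 40, 50, 70};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [10, 30, 40, 50, 70, 80, 90]
    }
}
```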
+- There are, of course, more sorting algorithms and their modifications. We strongly recommend that all readers familiarize themselves with a couple more, because knowing algorithms is a very important quality in a job candidate, and it shows an understanding of what is happening "under the hood".
+
+* Hash Table or Hash Map
+* Pathfinding algorithms [Wikipedia](https://en.wikipedia.org/wiki/Pathfinding) (a BFS sketch follows this list)
+- Dijkstra's algorithm
+- A* algorithm
+- Breadth First Search
+- Depth First Search
+* Greedy Algorithm
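
As a taste of the pathfinding entries above, here is a minimal breadth-first search sketch that finds the shortest hop count between two nodes; note that BFS only yields shortest paths on unweighted graphs, while weighted graphs call for Dijkstra's algorithm or A*. The adjacency-list representation and all names are our own illustration:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

// Minimal BFS sketch: explores the graph level by level, so the first time
// a node is reached, the number of expanded levels equals its shortest
// distance (in edges) from the start node.
public class BfsExample {
    static int shortestDistance(List<List<Integer>> adjacency, int start, int goal) {
        int[] distance = new int[adjacency.size()];
        Arrays.fill(distance, -1);      // -1 marks "not visited yet"
        distance[start] = 0;
        Queue<Integer> frontier = new ArrayDeque<>();
        frontier.add(start);
        while (!frontier.isEmpty()) {
            int node = frontier.remove();
            if (node == goal) return distance[node];
            for (int neighbour : adjacency.get(node)) {
                if (distance[neighbour] == -1) {    // first visit is the shortest
                    distance[neighbour] = distance[node] + 1;
                    frontier.add(neighbour);
                }
            }
        }
        return -1; // goal unreachable from start
    }

    public static void main(String[] args) {
        // Ring 0-1-2-3-4-0: the shortest route from 0 to 3 is 0 -> 4 -> 3.
        List<List<Integer>> adjacency = List.of(
                List.of(1, 4), List.of(0, 2), List.of(1, 3), List.of(2, 4), List.of(0, 3));
        System.out.println(shortestDistance(adjacency, 0, 3)); // 2
    }
}
```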


 ### Core Java
@@ -358,13 +358,6 @@
 ```
 Note: For a full explanation of the <b>describeContents()</b> method see [StackOverflow](https://stackoverflow.com/questions/4076946/parcelable-where-when-is-describecontents-used/4914799#4914799).
 In Android Studio, you can have all of the parcelable code auto-generated for you, but like with everything else, it is always a good thing to try to understand everything that is happening.
-
-* What is Singleton class?
-- A singleton is a class that can only be instantiated once. This singleton pattern restricts
-the instantiation of a class to one object. This is useful when exactly one object is needed
-to coordinate actions across the system. The concept is sometimes generalized to systems
-that operate more efficiently when only one object exists, or that restrict the instantiation
-to a certain number of objects. [Wikipedia](https://en.wikipedia.org/wiki/Singleton_pattern)
 * What are anonymous classes?
 * What is the difference between using `==` and `.equals` on a string?
 * How is `String` class implemented? Why was it made immutable?
@@ -567,7 +560,7 @@ It is also a good practice to annotate overridden methods with `@Override` to ma
 * When is a `static` block run?
 * Explain Generics in Java?
 - Generics were included in the Java language to provide stronger type checks, by allowing the programmer to define which types a class can be used with
-> In a nutshell, generics enable types (classes and interfaces) to be parameters when defining classes, interfaces and methods. Much like the more familiar formal parameters used in method declarations, type parameters provide a way for you to re-use the same code with different inputs. The difference is that the inputs to formal parameters are values, while the inputs to type parameters are types. ([Official Java Documentation](https://docs.oracle.com/javase/tutorial/java/generics/why.html))
+> In a nutshell, generics enable types (classes and interfaces) to be parameters when defining classes, interfaces and methods. Much like the more familiar formal parameters used in method declarations, type parameters provide a way for you to re-use the same code with different inputs. The difference is that the inputs to formal parameters are values, while the inputs to type parameters are types. ([Official Java Documentation](https://docs.oracle.com/javase/tutorial/java/generics/why.html))

 - This means that, for example, you can define:
 ```java
@@ -586,6 +579,22 @@ It is also a good practice to annotate overridden methods with `@Override` to ma
 * What is the Java Memory Model? What contracts does it guarantee? How are its Heap and Stack organized? [Jenkov](http://tutorials.jenkov.com/java-concurrency/java-memory-model.html)
 * What is a memory leak and how does Java handle it?
 * What are the design patterns? [GitHub](https://github.com/iluwatar/java-design-patterns)
+- Creational patterns
+- Builder [Wikipedia](https://en.wikipedia.org/wiki/Builder_pattern?oldformat=true)
+
+- Factory [Wikipedia](https://en.wikipedia.org/wiki/Factory_method_pattern?oldformat=true)
+
+- Singleton [Wikipedia](https://en.wikipedia.org/wiki/Singleton_pattern)
+A singleton is a class that can only be instantiated once. This singleton pattern restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. The concept is sometimes generalized to systems that operate more efficiently when only one object exists, or that restrict the instantiation to a certain number of objects. (A minimal sketch follows after this list.)
+
+- Structural patterns
+- Adapter [Wikipedia](https://en.wikipedia.org/wiki/Adapter_pattern?oldformat=true)
+- Decorator [Wikipedia](https://en.wikipedia.org/wiki/Decorator_pattern?oldformat=true)
+- Facade [Wikipedia](https://en.wikipedia.org/wiki/Facade_pattern?oldformat=true)
+- Behavioural patterns
+- Chain of responsibility [Wikipedia](https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern?oldformat=true)
+- Iterator [Wikipedia](https://en.wikipedia.org/wiki/Iterator_pattern?oldformat=true)
+- Strategy [Wikipedia](https://en.wikipedia.org/wiki/Strategy_pattern?oldformat=true)
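
The sketch referenced from the Singleton entry above: a minimal, thread-safe Java singleton using the initialization-on-demand holder idiom, which is one of several common ways to write it (the class name is illustrative):

```java
// Minimal singleton sketch: the class instantiates itself exactly once and
// hands out that single instance. The holder idiom is lazy and thread-safe
// without explicit locking, because the JVM guarantees the nested holder
// class is initialized only once, on first use.
public final class Singleton {
    private Singleton() { }              // no instantiation from outside

    private static final class Holder {  // loaded on first getInstance() call
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
```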


 ### Core Android
