- Using the most efficient sorting algorithm (and the correct data structures that implement it) is vital for any program, because data manipulation can be one of the most significant performance bottlenecks, and the main purpose of spending time determining the best algorithm for the job is to drastically improve that performance. The efficiency of an algorithm is measured by its "Big O" score ([StackOverflow](https://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation)). Really good algorithms perform important actions in O(n log n) or even O(log n) time, and some can perform certain actions in O(1) time (hash table insertion, for example). But there is always a trade-off: if an algorithm is really good at adding a new element to a data structure, it is most likely worse at something else, such as data access, than some other algorithm. If you are proficient with math, you may notice that "Big O" notation has many similarities with limits, and you would be right: it measures the best, worst and average performance of the algorithm in question by looking at its limiting behaviour. It should be noted that when we speak about O(1), constant time, we are not saying that the algorithm performs an action in one operation, but rather that it can perform the action with (roughly) the same number of operations regardless of the number of elements it has to take into account. Thankfully, a lot of "Big O" scores have already been calculated, so you don't have to guess which algorithm or data structure will perform better in your project. ["Big O" cheat sheet](http://bigocheatsheet.com/)
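  For instance, a hash map illustrates the point about O(1): looking up a key takes roughly the same work whether the map holds ten entries or ten thousand. A minimal sketch (the data here is made up for illustration):

  ```java
  import java.util.HashMap;
  import java.util.Map;

  public class BigODemo {
      public static void main(String[] args) {
          // HashMap insertion and lookup run in O(1) on average:
          // the cost does not grow with the number of stored entries.
          Map<String, Integer> ages = new HashMap<>();
          ages.put("alice", 30);
          ages.put("bob", 25);
          System.out.println(ages.get("bob")); // 25, found without scanning all entries
      }
  }
  ```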
- Bubble sort is one of the simplest sorting algorithms. It just compares neighbouring elements and swaps them if they are out of order. So after one pass over the data, it is guaranteed that **at least** one element is in its correct place (the biggest or smallest one, depending on the direction of sorting). This is not a very efficient algorithm, as highly unordered arrays require a lot of reordering (up to O(n^2)), but one of its advantages is its space complexity: only two elements are compared at once, and there is no need to allocate more memory than those two occupy.
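  A minimal sketch of the idea (class and method names are our own, not from any library):

  ```java
  import java.util.Arrays;

  public class BubbleSort {
      // Repeatedly swap adjacent out-of-order elements; after each pass
      // the largest remaining element "bubbles" to its final position.
      static void sort(int[] a) {
          for (int pass = 0; pass < a.length - 1; pass++) {
              boolean swapped = false;
              for (int i = 0; i < a.length - 1 - pass; i++) {
                  if (a[i] > a[i + 1]) {
                      int tmp = a[i];
                      a[i] = a[i + 1];
                      a[i + 1] = tmp;
                      swapped = true;
                  }
              }
              if (!swapped) break; // no swaps means already sorted: best case O(n)
          }
      }

      public static void main(String[] args) {
          int[] data = {5, 1, 4, 2, 8};
          sort(data);
          System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
      }
  }
  ```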
- Firstly, selection sort assumes that the first element of the array to be sorted is the smallest; to confirm this, it iterates over all the other elements, and whenever it finds a smaller one, that element becomes the new minimum. When the data ends, the element currently known to be the smallest is put at the beginning of the array, and the process repeats for the rest of the array. This sorting algorithm is quite straightforward, but still not that efficient on larger data sets, because placing just one element requires going over all of the remaining data.
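  The same idea in code (a minimal sketch with our own naming):

  ```java
  import java.util.Arrays;

  public class SelectionSort {
      // Each pass scans the unsorted tail for its minimum and
      // swaps it into the next position of the sorted head.
      static void sort(int[] a) {
          for (int start = 0; start < a.length - 1; start++) {
              int min = start; // assume the first unsorted element is the smallest
              for (int i = start + 1; i < a.length; i++) {
                  if (a[i] < a[min]) min = i; // found a smaller candidate
              }
              int tmp = a[start];
              a[start] = a[min];
              a[min] = tmp;
          }
      }

      public static void main(String[] args) {
          int[] data = {64, 25, 12, 22, 11};
          sort(data);
          System.out.println(Arrays.toString(data)); // [11, 12, 22, 25, 64]
      }
  }
  ```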
- Insertion sort is another example of an algorithm that is not that difficult to implement, but is also not that efficient. To do its job, it "grows" a sorted portion of the data by "inserting" each newly encountered element into the already (internally) sorted part of the array, which consists of the previously encountered elements. This means that in the best case (the data is already sorted) it can confirm that its job is done in Ω(n) operations, while if the encountered elements are all out of their required order, as many as O(n^2) operations may be needed.
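  A minimal sketch of the "growing sorted portion" approach (names are illustrative):

  ```java
  import java.util.Arrays;

  public class InsertionSort {
      // Elements left of index `i` are always sorted; each new element
      // is shifted left until it sits in its place within that part.
      static void sort(int[] a) {
          for (int i = 1; i < a.length; i++) {
              int current = a[i];
              int j = i - 1;
              while (j >= 0 && a[j] > current) {
                  a[j + 1] = a[j]; // shift bigger elements one slot to the right
                  j--;
              }
              a[j + 1] = current; // insert into the sorted portion
          }
      }

      public static void main(String[] args) {
          int[] data = {12, 11, 13, 5, 6};
          sort(data);
          System.out.println(Arrays.toString(data)); // [5, 6, 11, 12, 13]
      }
  }
  ```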
- Merge sort is a "divide and conquer" algorithm, meaning it recursively "divides" the given array into smaller parts (down to 1 element) and then sorts those parts, merging them with each other. This approach allows merge sort to achieve very high speed, at the cost of doubling the required space, but today memory is far more available than it used to be, so this trade-off is considered acceptable.
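  A minimal recursive sketch; the extra arrays created while merging are where the doubled space goes:

  ```java
  import java.util.Arrays;

  public class MergeSort {
      // Split the array in half, sort each half recursively,
      // then merge the two sorted halves into a new array.
      static int[] sort(int[] a) {
          if (a.length <= 1) return a; // a single element is already sorted
          int mid = a.length / 2;
          int[] left = sort(Arrays.copyOfRange(a, 0, mid));
          int[] right = sort(Arrays.copyOfRange(a, mid, a.length));

          int[] out = new int[a.length];
          int i = 0, j = 0, k = 0;
          // Repeatedly take the smaller head of the two halves.
          while (i < left.length && j < right.length)
              out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
          while (i < left.length) out[k++] = left[i++];
          while (j < right.length) out[k++] = right[j++];
          return out;
      }

      public static void main(String[] args) {
          int[] data = {38, 27, 43, 3, 9, 82, 10};
          System.out.println(Arrays.toString(sort(data))); // [3, 9, 10, 27, 38, 43, 82]
      }
  }
  ```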
- Quicksort is considered, well, quite quick. When implemented correctly, it can be several times faster than its main competitors. This algorithm is also of the "divide and conquer" family, and its first step is to choose a "pivot" element (choosing it randomly statistically minimizes the chance of hitting the worst-case performance); then, by comparing elements to this pivot, it moves the pivot closer and closer to its final place. During this process, elements that are bigger are moved to its right side and smaller elements to its left. After this is done, quicksort repeats the process for the subarrays on each side of the placed pivot (applies the first step recursively) until the array is sorted.
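  A minimal sketch using a randomly chosen pivot and the Lomuto partition scheme (one of several common partitioning strategies):

  ```java
  import java.util.Arrays;
  import java.util.concurrent.ThreadLocalRandom;

  public class QuickSort {
      static void sort(int[] a, int lo, int hi) {
          if (lo >= hi) return;
          // A random pivot minimizes the chance of the O(n^2) worst case.
          int p = ThreadLocalRandom.current().nextInt(lo, hi + 1);
          swap(a, p, hi); // stash the pivot at the end while partitioning
          int pivot = a[hi];
          int store = lo;
          for (int i = lo; i < hi; i++)
              if (a[i] < pivot) swap(a, i, store++); // smaller elements go left
          swap(a, store, hi); // pivot lands in its final place
          sort(a, lo, store - 1);  // recurse on the left subarray
          sort(a, store + 1, hi);  // recurse on the right subarray
      }

      static void swap(int[] a, int i, int j) {
          int t = a[i]; a[i] = a[j]; a[j] = t;
      }

      public static void main(String[] args) {
          int[] data = {9, 4, 7, 3, 10, 5};
          sort(data, 0, data.length - 1);
          System.out.println(Arrays.toString(data)); // [3, 4, 5, 7, 9, 10]
      }
  }
  ```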
<table>
<tr>
<th colspan="3" align="center">Time Complexity</th>
<th align="center">Space Complexity</th>
</tr>
<tr>
<th align="center">Best</th>
<th align="center">Average</th>
<th align="center">Worst</th>
<th align="center">Worst</th>
</tr>
<tr>
<td align="center">Ω(n^2)</td>
<td align="center">Θ(n^2)</td>
<td align="center">O(n^2)</td>
<td align="center">O(1)</td>
</tr>
</table>

- There are, of course, more sorting algorithms and modifications of them. We strongly recommend all readers to familiarize themselves with a couple more, because knowing algorithms is an important quality in a job candidate and shows an understanding of what is happening "under the hood".
* Hash Table or Hash Map
* Breadth First Search
* Depth First Search
* Greedy Algorithm
Note: For a full explanation of the <b>describeContents()</b> method see [StackOverflow](https://stackoverflow.com/questions/4076946/parcelable-where-when-is-describecontents-used/4914799#4914799).
In Android Studio, you can have all of the Parcelable code auto-generated for you, but as with everything else, it is always a good thing to try to understand what is happening.
* What is a Singleton class?
  - A singleton is a class that can only be instantiated once. The singleton pattern restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. The concept is sometimes generalized to systems that operate more efficiently when only one object exists, or that restrict the instantiation to a certain number of objects. [Wikipedia](https://en.wikipedia.org/wiki/Singleton_pattern)
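  A minimal sketch of the pattern (the `Settings` name is just an illustrative choice):

  ```java
  public class Settings {
      // The single instance, created lazily on first access.
      private static Settings instance;

      // A private constructor prevents instantiation from outside the class.
      private Settings() { }

      // `synchronized` keeps lazy initialization safe when
      // several threads race to create the instance.
      public static synchronized Settings getInstance() {
          if (instance == null) {
              instance = new Settings();
          }
          return instance;
      }
  }
  ```

  Because the constructor is private, `Settings.getInstance()` always returns the same object, no matter how many times or from where it is called.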
361
*What are anonymous classes?
369
362
*What is the difference between using `==` and `.equals` on a string?
370
363
*How is `String` classimplemented? Why was it made immutable?
* When is a `static` block run?
* Explain Generics in Java?
  - Generics were included in the Java language to provide stronger type checks, by allowing the programmer to define which classes can be used with other classes.
  > In a nutshell, generics enable types (classes and interfaces) to be parameters when defining classes, interfaces and methods. Much like the more familiar formal parameters used in method declarations, type parameters provide a way for you to re-use the same code with different inputs. The difference is that the inputs to formal parameters are values, while the inputs to type parameters are types. ([Official Java Documentation](https://docs.oracle.com/javase/tutorial/java/generics/why.html))

  This means that, for example, you can define:
  ```java
  // A generic container: T is a type parameter supplied by the caller,
  // so Box<String> and Box<Integer> are checked at compile time.
  public class Box<T> {
      private T content;
      public void set(T content) { this.content = content; }
      public T get() { return content; }
  }
  ```
* What is the Java Memory Model? What contracts does it guarantee? How are its Heap and Stack organized? [Jenkov](http://tutorials.jenkov.com/java-concurrency/java-memory-model.html)
* What is a memory leak and how does Java handle it?
* What are the design patterns? [GitHub](https://github.com/iluwatar/java-design-patterns)