Randomized Algorithm

A Randomized Algorithm utilizes random numbers in its logic to make decisions, exemplified by Randomized Quick Sort and Karger’s algorithm. These algorithms are classified into Las Vegas algorithms, which guarantee correct results but vary in execution time, and Monte Carlo algorithms, which may yield incorrect results but operate within bounded resources. The document discusses the analysis, complexity, and applications of these algorithms, highlighting their significance in optimization, cryptography, and various computational problems.


What is a Randomized Algorithm?

An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a Randomized Algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array), and in Karger's algorithm, we randomly pick an edge.
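
As a quick illustration, here is a minimal Python sketch of the two randomization strategies just mentioned (the function names are ours, for illustration only):

import random

# Strategy 1: pick the pivot index uniformly at random on each call.
def random_pivot_index(low, high):
    return random.randint(low, high)

# Strategy 2: randomize once up front by shuffling the whole array,
# then run ordinary Quick Sort on the shuffled input.
def shuffled(arr):
    random.shuffle(arr)   # in-place Fisher-Yates shuffle
    return arr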

How to analyse Randomized Algorithms?


Some randomized algorithms have deterministic time complexity. For example, this implementation of Karger's algorithm has time complexity O(E). Such algorithms are called Monte Carlo algorithms and are easier to analyse for the worst case.
On the other hand, the time complexity of the remaining randomized algorithms (those that are not Monte Carlo) depends on the value of the random variable. Such randomized algorithms are called Las Vegas algorithms. These algorithms are typically analysed for the expected worst case: all possible values of the random variable are considered for a worst-case input, the time taken for each value is evaluated, and the average of the evaluated times is the expected worst-case time complexity. The facts below are generally helpful in the analysis of such algorithms.
Linearity of Expectation
Expected number of trials until success: if each independent trial succeeds with probability p, the expected number of trials until the first success is 1/p.
For example, consider the randomized version of QuickSort below.
A Central Pivot is a pivot that divides the array in such a way that each side has at least 1/4 of the elements.

// Sorts an array arr[low..high]
randQuickSort(arr[], low, high)

1. If low >= high, then EXIT.

2. While pivot 'x' is not a Central Pivot:

   (i)   Choose a number uniformly at random from [low..high].
         Let the randomly picked number be x.
   (ii)  Count the elements in arr[low..high] that are smaller
         than arr[x]. Let this count be sc.
   (iii) Count the elements in arr[low..high] that are greater
         than arr[x]. Let this count be gc.
   (iv)  Let n = (high-low+1). If sc >= n/4 and
         gc >= n/4, then x is a central pivot.

3. Partition arr[low..high] around the pivot x.

4. // Recur for smaller elements
   randQuickSort(arr, low, low+sc-1)

5. // Recur for greater elements
   randQuickSort(arr, high-gc+1, high)
The important thing in our analysis is that each iteration of step 2 takes O(n) time.
How many times does the while loop run before finding a central pivot?
An array of size n has n/2 central-pivot candidates (the elements whose rank lies between n/4 and 3n/4), so the probability that a randomly chosen element is a central pivot is 1/2.
Therefore, by the expected-trials fact above, the expected number of times the while loop runs is 2.
Thus, the expected time complexity of step 2 is O(n).
What is the overall Time Complexity in the Worst Case?
In the worst case, each partition divides the array such that one side has n/4 elements and the other side has 3n/4 elements. The worst-case height of the recursion tree is Log4/3(n), which is O(Log n).
T(n) < T(n/4) + T(3n/4) + O(n)
T(n) < 2T(3n/4) + O(n)

The solution of the above recurrence is O(n Log n).


Note that the above randomized algorithm is not the best way to implement randomized Quick Sort; the idea here is to keep the analysis simple. Typically, randomized Quick Sort is implemented by picking a random pivot directly (with no loop), or by shuffling the array elements. The expected worst-case time complexity of that version is also O(n Log n), but its analysis is more involved.
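
For concreteness, here is a minimal runnable Python sketch of the central-pivot pseudocode above. It is an illustration only: it assumes distinct elements (as the analysis does) and uses a simple rebuild-based partition rather than an in-place scheme.

import random

def rand_quick_sort(arr, low=0, high=None):
    # Sorts arr[low..high] in place using a randomly chosen central pivot.
    # Assumes distinct elements, as in the analysis above.
    if high is None:
        high = len(arr) - 1
    if low >= high:                       # Step 1
        return
    n = high - low + 1
    while True:                           # Step 2: expected 2 iterations
        pivot = arr[random.randint(low, high)]
        sc = sum(1 for i in range(low, high + 1) if arr[i] < pivot)
        gc = sum(1 for i in range(low, high + 1) if arr[i] > pivot)
        if sc >= n // 4 and gc >= n // 4:
            break                         # pivot is central
    segment = arr[low:high + 1]           # Step 3: partition around the pivot
    arr[low:high + 1] = ([v for v in segment if v < pivot]
                         + [v for v in segment if v == pivot]
                         + [v for v in segment if v > pivot])
    rand_quick_sort(arr, low, low + sc - 1)      # Step 4: smaller side
    rand_quick_sort(arr, high - gc + 1, high)    # Step 5: greater side

data = [9, 3, 7, 1, 5, 8, 2]
rand_quick_sort(data)
print(data)   # [1, 2, 3, 5, 7, 8, 9]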
Classification
Randomized algorithms are classified into two categories.
Las Vegas:
Las Vegas algorithms were introduced by László Babai in 1979.
A Las Vegas algorithm is an algorithm which uses randomness but guarantees that the solution obtained for the given problem is correct; it takes its risk with the resources used. Quick sort is a simple example of a Las Vegas algorithm. To sort a given array of n numbers quickly, we use quick sort: we select a pivot element and compare each element with it, and whether the sorting takes less or more time depends on how we select the pivot element. Picking the pivot element at random is what gives us a Las Vegas algorithm.
Definition:
A randomized algorithm that always produces a correct result, with the only variation from one run to another being its running time, is known as a Las Vegas algorithm.
OR
A randomized algorithm that always produces a correct result or reports failure is known as a Las Vegas algorithm.
OR
A Las Vegas algorithm takes a risk with the resources used for computation, but it does not take a risk with the result, i.e., it gives the correct and expected output for the given problem.
Consider the quick sort example above: the pivot element is chosen randomly, but the result is always a sorted array. A Las Vegas algorithm has one restriction: the solution for the given problem must be found in finite time. The number of possible solutions is limited, and while the actual solution may be complex or complicated to compute, it is easy to verify the correctness of a candidate solution.
These algorithms always produce a correct or optimum result. Their time complexity is based on a random value and is evaluated as an expected value. For example, Randomized Quick Sort always sorts the input array, and its expected worst-case time complexity is O(n Log n).
Relation with Monte Carlo algorithms:
• A Las Vegas algorithm can be contrasted with a Monte Carlo algorithm, in which the resources used to find the solution are bounded but there is no guarantee that the solution obtained is accurate.
• In some applications, a Las Vegas algorithm can be converted into a Monte Carlo algorithm by allowing early termination.
Complexity Analysis:
The complexity class of problems that a Las Vegas algorithm solves with zero error probability in expected polynomial time is called ZPP (zero-error probabilistic polynomial time). It is obtained as follows:
ZPP = RP ∩ co-RP
where RP = randomized polynomial time.
A randomized polynomial time (RP) algorithm always provides the correct output when the true answer is no; when the true answer is yes, it answers yes with probability bounded away from zero (conventionally at least 1/2). Decision problems with such algorithms make up the class RP.
That is how a Las Vegas algorithm solves a given problem in expected polynomial time. In general, there is no upper bound on the worst-case running time of a Las Vegas algorithm.

Monte Carlo:
Computational algorithms that rely on repeated random sampling to compute their results are called Monte Carlo algorithms.
OR
A randomized algorithm is a Monte Carlo algorithm if it can sometimes give a wrong answer.
Monte Carlo algorithms or methods are used whenever an existing deterministic algorithm fails or it is impractical to compute the solution for a given problem. Monte Carlo methods are built on repeated computation with random numbers, which is why these algorithms are used for simulating physical and mathematical systems.
Monte Carlo algorithms are especially useful for disordered materials, fluids, and cellular structures. In mathematics, these methods are used to evaluate definite integrals, in particular multidimensional integrals with complicated boundary conditions. Compared to other methods, this approach is a successful one for risk analysis.
There is no single Monte Carlo method; rather, the term describes a large and widely used class of approaches that follow this pattern:
1. Define a domain of possible inputs.
2. Generate inputs randomly from the domain using a specified probability distribution.
3. Perform a deterministic computation on these inputs.
4. Compute the final result by aggregating the results of the individual computations.
Monte Carlo algorithms produce a correct or optimum result only with some probability. They have deterministic running time, and it is generally easier to find their worst-case time complexity. For example, this implementation of Karger's algorithm produces a minimum cut with probability greater than or equal to 1/n^2 (where n is the number of vertices) and has worst-case time complexity O(E). Another example is the Fermat method for primality testing.
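
Here is a minimal Python sketch of the Fermat method (the function name and trial count are our own choices, for illustration):

import random

def fermat_is_probably_prime(n, k=20):
    # Fermat primality test, a Monte Carlo algorithm: a 'composite' answer
    # is always correct, but a composite n (e.g. a Carmichael number) can
    # be reported as 'probably prime'. k is an arbitrary number of trials.
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randint(2, n - 2)
        if pow(a, n - 1, n) != 1:   # violates Fermat's little theorem
            return False            # definitely composite
    return True                     # probably prime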
Example to Understand Classification:
Consider a binary array where exactly half of the elements are 0 and half are 1. The task is to find the index of any 1.
A Las Vegas algorithm for this task keeps picking a random element until it finds a 1. A Monte Carlo algorithm for the same task keeps picking a random element until it either finds a 1 or has tried a maximum allowed number of times, say k. The Las Vegas algorithm always finds an index of a 1, but its time complexity is determined as an expected value; the expected number of trials before success is 2, so the expected time complexity is O(1). The Monte Carlo algorithm finds a 1 with probability 1 − (1/2)^k, and its time complexity is O(k), which is deterministic.
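
A minimal Python sketch of the two variants just described (the function names are ours):

import random

def las_vegas_find_one(arr):
    # Las Vegas: keep sampling until a 1 is found. Always correct; only
    # the running time varies (expected 2 trials when half the entries
    # are 1).
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 1:
            return i

def monte_carlo_find_one(arr, k):
    # Monte Carlo: sample at most k times. Deterministic O(k) time, but
    # it fails (returns -1) with probability (1/2)**k.
    for _ in range(k):
        i = random.randrange(len(arr))
        if arr[i] == 1:
            return i
    return -1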
Optimization of Monte Carlo Algorithms:
• In general, Monte Carlo optimization techniques depend on random walks. The program moves around a marker in multidimensional space, wanting to move toward lower function values but sometimes moving against the gradient.
• In numerical optimization, numerical simulation is an effective, efficient, and popular application of random numbers. The travelling salesman problem is one of the best-known examples of such optimization problems.
• Various optimization techniques are available for Monte Carlo algorithms, such as evolution strategies, genetic algorithms, and parallel tempering; a toy sketch of the random-walk idea follows this list.
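
The sketch below is a toy simulated-annealing minimizer in Python; all parameters here (step size, temperature, cooling rate) are arbitrary choices of ours, not from the source:

import math
import random

def anneal(f, x0, steps=10_000, temp=1.0, cooling=0.999):
    # Propose a small random step; always accept improvements, and accept
    # worse moves with probability exp(-delta/temp), so the walk can
    # escape local minima (moving 'against the gradient').
    x, fx = x0, f(x0)
    for _ in range(steps):
        cand = x + random.uniform(-0.1, 0.1)   # random-walk proposal
        delta = f(cand) - fx
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x, fx = cand, f(cand)
        temp *= cooling                        # gradually cool down
    return x, fx

# Hypothetical usage: minimize (x - 2)^2 starting from x = 10.
print(anneal(lambda x: (x - 2.0) ** 2, 10.0))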
Applications and Scope:
Monte Carlo methods have a wide range of applications in areas such as physical science, design and visuals, finance and business, and telecommunications, and they are used throughout mathematics. By generating random numbers we can solve various problems, in particular problems that are complex in nature or difficult to solve exactly. Monte Carlo integration is the most common application of Monte Carlo algorithms.
A deterministic algorithm may provide a correct solution but take a long time to run. This run time can be improved by using Monte Carlo integration. Various Monte Carlo integration methods exist (a minimal sketch follows the list), such as:
i) Direct sampling methods, which include stratified sampling, recursive stratified sampling, and importance sampling.
ii) Random walk Monte Carlo algorithms, which are used to evaluate the integral for a given problem.
iii) Gibbs sampling.
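
A minimal Python sketch of plain Monte Carlo integration (the function name and sample count are our own choices):

import random

def mc_integrate(f, a, b, samples=100_000):
    # Average f at uniformly random points in [a, b] and scale by the
    # interval length (b - a) to estimate the definite integral.
    total = sum(f(random.uniform(a, b)) for _ in range(samples))
    return (b - a) * total / samples

# Hypothetical usage: integral of x^2 over [0, 1]; the exact value is 1/3.
print(mc_integrate(lambda x: x * x, 0.0, 1.0))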
• Consider a tool that does sorting. Suppose the tool is used by many users, and a few of those users always run it on already-sorted arrays. If the tool uses simple (non-randomized) QuickSort, those few users always face the worst case; if it uses Randomized QuickSort instead, no user always gets the worst case, and everybody gets expected O(n Log n) time.
• Randomized algorithms have huge applications in cryptography.
• Load balancing.
• Number-theoretic applications: primality testing.
• Data structures: hashing, sorting, searching, order statistics, and computational geometry.
• Algebraic identities: polynomial and matrix identity verification; interactive proof systems.
• Mathematical programming: faster algorithms for linear programming; rounding linear program solutions to integer program solutions.
• Graph algorithms: minimum spanning trees, shortest paths, minimum cuts.
• Counting and enumeration: matrix permanent; counting combinatorial structures.
• Parallel and distributed computing: deadlock avoidance; distributed consensus.
• Probabilistic existence proofs: show that a combinatorial object arises with non-zero probability among objects drawn from a suitable probability space.
• Derandomization: first devise a randomized algorithm, then argue that it can be derandomized to yield a deterministic algorithm.
Randomized algorithms are algorithms that use randomness as a key component in their
operation. They can be used to solve a wide variety of problems, including optimization, search,
and decision-making. Some examples of applications of randomized algorithms include:
1. Monte Carlo methods: These are a class of randomized algorithms that use random
sampling to solve problems that may be deterministic in principle, but are too complex to
solve exactly. Examples include estimating pi, simulating physical systems, and solving
optimization problems.
2. Randomized search algorithms: These are algorithms that use randomness to search for
solutions to problems. Examples include genetic algorithms and simulated annealing.
3. Randomized data structures: These are data structures that use randomness to improve
their performance. Examples include skip lists and hash tables.
4. Randomized load balancing: algorithms used to distribute load across a network of computers, using randomness to avoid overloading any one computer (a toy sketch appears after this list).
5. Randomized encryption: These are algorithms used to encrypt and decrypt data, using
randomness to make it difficult for an attacker to decrypt the data without the correct key.
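
A toy Python sketch of randomized load balancing (the function name and setup are ours, for illustration):

import random

def assign_jobs(jobs, servers):
    # Send each job to a uniformly random server, so no fixed request
    # pattern can consistently overload any one node.
    load = {s: [] for s in servers}
    for job in jobs:
        load[random.choice(servers)].append(job)
    return load

# Hypothetical usage with three servers and ten jobs.
print(assign_jobs(list(range(10)), ["s1", "s2", "s3"]))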
Consider a real-world application like image segmentation, where the object the camera focuses on needs to be separated from the rest of the image. Here, each pixel is treated as a node, with capacities on the connections between pixels, and the algorithm applied is the minimum cut algorithm. A minimum cut is the removal of the minimum number of edges in a graph (directed or undirected) such that the graph is divided into multiple separate components, i.e., disjoint sets of vertices.

Let us look at an example for a clearer understanding of the disjoint sets achieved.

In the example graph, edges {A, E} and {F, G} are the only ones loosely bonded enough to be removed easily, so the minimum cut of the graph is 2.

The resulting vertex sets after removing the edges A → E and F → G are {A, B, C, D, G} and {E, F}.

Karger's minimum cut algorithm is a randomized algorithm to find the minimum cut of a graph. It uses the Monte Carlo approach, so it is expected to run within a time constraint and to have a small probability of error in its output; if the algorithm is executed multiple times, the probability of error is reduced. The graph used in Karger's minimum cut algorithm is an undirected, unweighted graph.

Karger's Minimum Cut Algorithm


Karger's algorithm merges two nodes of the graph into a single node known as a supernode. The edge between the two nodes is contracted, and the remaining edges connecting the adjacent vertices are attached to the supernode.

Algorithm
Step 1 − Choose a random edge [u, v] from the graph G to contract.
Step 2 − Merge the vertices to form a supernode, and connect the edges of the other adjacent nodes of the two vertices to the supernode. Remove any self-loops.
Step 3 − Repeat the process until only two nodes are left in the contracted graph.
Step 4 − The edges connecting these two nodes are the minimum cut edges.

The algorithm does not always give the optimal output, so the process is repeated multiple times to decrease the probability of error.

Pseudocode
# Karger's algorithm with union-find (disjoint set union). Vertices are
# labeled 0..V-1, and edges is a list of (u, v) pairs.

import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def kargers_min_cut(V, edges):
    parent = list(range(V))            # each vertex starts as its own supernode
    v = V
    while v > 2:                       # contract until two supernodes remain
        u1, u2 = random.choice(edges)  # pick a random edge [u1, u2]
        s1 = find(parent, u1)
        s2 = find(parent, u2)
        if s1 != s2:                   # skip self-loops within a supernode
            parent[s2] = s1            # union: merge the two supernodes
            v = v - 1
    mincut = 0                         # count edges crossing the two supernodes
    for u1, u2 in edges:
        if find(parent, u1) != find(parent, u2):
            mincut = mincut + 1
    return mincut
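
A hypothetical usage sketch: since one trial can miss the minimum cut, run many independent trials and keep the smallest answer, which drives the error probability down (as noted above).

# Hypothetical example: a triangle 0-1-2 with a pendant vertex 3. The true
# minimum cut is 1 (removing edge (2, 3) isolates vertex 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(min(kargers_min_cut(4, edges) for _ in range(100)))   # almost surely 1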

Example
Applying the algorithm to an undirected, unweighted graph G(V, E), where V and E are the sets of vertices and edges present in the graph, let us find the minimum cut −

Step 1

Choose any edge, say A → B, and contract the edge by merging the two vertices into one supernode. Connect the edges of the adjacent vertices to the supernode. Remove any self-loops.

Step 2

Contract another edge, (A, B) → C, so the supernode becomes (A, B, C), and the adjacent edges are connected to the newly formed, bigger supernode.

Step 3

Node D has only one edge connected to the supernode and one adjacent edge, so it is easy to contract it and connect the adjacent edge to the newly formed supernode.

Step 4

Of the vertices F and E, F is more strongly bonded to the supernode, so the edges connecting F and (A, B, C, D) are contracted.

Step 5
Once only two nodes remain in the graph, the number of edges between them is the final minimum cut of the graph. In this case, the minimum cut of the given graph is 2.
The minimum cut of the original graph is 2 (E → D and E → F).
