
GB Unit3 Mutual Exclusion

Mutual exclusion is a condition where only one process can access a resource at a time, requiring algorithms to ensure safety, liveness, and fairness. Various mutual exclusion algorithms are classified into centralized and distributed types, with performance metrics including message complexity, synchronization delay, and system throughput. Key distributed algorithms include Lamport’s, Ricart-Agrawala, and Maekawa’s, each with unique mechanisms for ensuring mutual exclusion while addressing potential issues like deadlocks and message overhead.


MUTUAL EXCLUSION

WHAT IS MUTUAL
EXCLUSION?

A condition in which, among a set of
processes, only one is able
to access a given resource or
perform a given function at any time.
REQUIREMENTS OF MUTUAL
EXCLUSION ALGORITHMS
1. Safety property: At most one process may execute in
the critical region (CR) at a time.

2. Liveness property: A process requesting entry to the CR
is eventually granted it. There should be no deadlock or
starvation.

3. Fairness: Each process should get a fair chance to
execute the CR.
PERFORMANCE METRICS
1. Message complexity: Number of messages that are
required per CR execution by a process.

2. Synchronization delay: Time interval between critical
region (CR) exit and new entry by any process.
PERFORMANCE METRICS CONTD..
3. System throughput: Rate at which requests for the CR get
executed.
System throughput = 1/(SD + E)
where SD is the synchronization delay and E is the average critical section
execution time.

4. Response time: Time interval from when a request is sent until its CR
execution completes.
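As a quick worked example of the throughput formula above (the SD and E values are hypothetical):

```python
SD = 2    # synchronization delay, ms (hypothetical)
E = 8     # average critical-section execution time, ms (hypothetical)

throughput = 1000 / (SD + E)   # CR executions per second
print(throughput)              # 100.0
```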
CLASSIFICATION OF MUTUAL
EXCLUSION ALGORITHMS
• Centralized mutual exclusion algorithms

• Distributed mutual exclusion algorithms


• Non-token-based algorithm:
• Lamport’s Distributed Mutual Exclusion Algorithm
• Ricart–Agrawala Algorithm
• Maekawa’s Algorithm
• Token-based algorithm:
• Suzuki–Kasami’s Broadcast Algorithm
• Singhal’s Heuristic Algorithm
• Raymond’s Tree-Based Algorithm
CENTRALIZED ALGORITHM

(a) Process 1 asks the coordinator for permission to access a shared
resource (enter the critical region). Permission is granted.
(b) Process 2 then asks permission to access the same resource. The
coordinator does not reply.
(c) When process 1 releases the resource, it tells the coordinator,
which then replies to 2.
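The three steps above can be sketched as a single-process Python simulation (the Coordinator class and its method names are illustrative, not part of any standard API):

```python
from collections import deque

class Coordinator:
    """Grants the critical region to one process at a time; queues later requests."""
    def __init__(self):
        self.holder = None       # pid currently in the critical region
        self.queue = deque()     # waiting requesters, in arrival order
        self.granted = []        # grant order, for inspection

    def request(self, pid):
        if self.holder is None:
            self.holder = pid            # immediate "permission granted" reply
            self.granted.append(pid)
        else:
            self.queue.append(pid)       # no reply: requester simply blocks

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        if self.holder is not None:
            self.granted.append(self.holder)   # deferred reply goes out now

c = Coordinator()
c.request(1)       # (a) process 1 asks; permission granted
c.request(2)       # (b) process 2 asks; coordinator stays silent
c.release(1)       # (c) process 1 releases; coordinator replies to 2
print(c.granted)   # [1, 2]
```

Requests are granted strictly in arrival order, which is why the scheme is fair but also why the coordinator is a bottleneck.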
CENTRALIZED ALGORITHMS
CONTD..
Advantages
⮚ Fair algorithm, grants in the order of requests
⮚ The scheme is easy to implement
⮚ Scheme can be used for general resource
allocation

Critical Question: When there is no reply, does this


mean that the coordinator is “dead” or just busy?

Shortcomings
⮚ Single point of failure. No fault tolerance
⮚ Confusion between No-reply and permission
denied
⮚ Performance bottleneck of single coordinator in a
large system
PERFORMANCE PARAMETERS

1. The algorithm is simple and fair, as it handles requests in
sequential order.

2. It guarantees no starvation.

3. It uses three messages per CR execution: REQUEST, REPLY and RELEASE.

4. It has a single point of failure.

5. Coordinator re-election can increase synchronization delay,
especially at times of frequent failures.
DISTRIBUTED ALGORITHMS
DISTRIBUTED MUTUAL
EXCLUSION
• It does not use a coordinator but uses logical
clocks for event ordering.
• The processes coordinate amongst
themselves to guarantee mutual exclusion.
1.LAMPORT’S DISTRIBUTED
ALGORITHM
• Lamport’s algorithm uses Lamport’s logical clock
synchronization mechanism to establish priority
among the processes/nodes.
• The algorithm is fair in the sense that requests
for the CS are executed in the order of their
timestamps, and time is determined by logical
clocks.
• When a site processes a request for the CS, it updates
its local clock and assigns the request a timestamp.
• The algorithm executes CS requests in increasing
order of timestamps.
• Every site Si keeps a queue, request_queuei, which
contains mutual exclusion requests ordered by their
timestamps.
• This algorithm requires communication channels to
deliver messages in FIFO order.
LAMPORT’S…..
Requesting the critical section
• When a site Si wants to enter the CS, it
broadcasts a REQUEST(tsi, i) message to all other
sites and places the request on request_queuei.
((tsi, i) denotes the timestamp of the request.)
• When a site Sj receives the REQUEST(tsi, i)
message from site Si, it places site Si’s request
on request_queuej and returns a timestamped
REPLY message to Si.
LAMPORT’S…..
Executing the critical section
• Site Si enters the CS when the following two
conditions hold:
⚫ L1: Si has received a message with timestamp
larger than (tsi, i) from all other sites.
⚫ L2: Si’s request is at the top of
request_queuei.
LAMPORT’S…..
Releasing the critical section
• Site Si, upon exiting the CS, removes its
request from the top of its request queue and
broadcasts a timestamped RELEASE message
to all other sites.
• When a site Sj receives a RELEASE message
from site Si, it removes Si’s request from its
request queue.
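The requesting/executing/releasing phases can be condensed into a minimal single-threaded Python sketch, assuming instant FIFO delivery (direct method calls stand in for messages; helper names such as broadcast_request and can_enter are invented for this sketch):

```python
import heapq

class Site:
    def __init__(self, sid):
        self.sid = sid
        self.clock = 0
        self.queue = []        # request_queue: heap of (timestamp, site id)
        self.last_seen = {}    # highest timestamp received from each other site

    def tick(self):
        self.clock += 1
        return self.clock

    def on_request(self, ts, frm):
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, (ts, frm))   # enqueue Si's request
        return (self.clock, self.sid)           # timestamped REPLY

def broadcast_request(sites, si):
    ts = si.tick()
    heapq.heappush(si.queue, (ts, si.sid))      # place own request locally
    for sj in sites:
        if sj is not si:
            rts, rid = sj.on_request(ts, si.sid)
            si.last_seen[rid] = max(si.last_seen.get(rid, 0), rts)
    return ts

def can_enter(sites, si, ts):
    # L1: a message with a larger timestamp received from every other site
    l1 = all(si.last_seen.get(sj.sid, 0) > ts for sj in sites if sj is not si)
    # L2: own request at the top of the local request queue
    l2 = si.queue and si.queue[0] == (ts, si.sid)
    return l1 and bool(l2)

sites = [Site(i) for i in range(3)]
ts0 = broadcast_request(sites, sites[0])   # site 0 requests first
ts1 = broadcast_request(sites, sites[1])   # then site 1
print(can_enter(sites, sites[0], ts0))     # True  (earliest timestamp)
print(can_enter(sites, sites[1], ts1))     # False (site 0 is ahead in the queue)
```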
LAMPORT’S DISTRIBUTED ALGORITHM: EXAMPLE
PERFORMANCE
PARAMETERS
• Lamport’s algorithm has a message overhead of 3(N − 1)
messages per CR invocation: N − 1 REQUEST messages (to all
processes except itself), N − 1 REPLY messages, and N − 1
RELEASE messages.

• The synchronization delay is T. Throughput is 1/(T + E).

• The algorithm has been proven to be fair and correct. It can
also be optimized by reducing the number of RELEASE
messages sent.
2.RICART–AGRAWALA
ALGORITHM

(a) Two processes want to enter the same critical region (CS).
(b) Process 0 has the lowest timestamp, so it wins.
(c) When process 0 is done, it also sends an OK, so process 2 can now
enter the critical region.
(d) If a receiver is not in the CS and does not want to enter it, it
sends back an OK message to the sender.
2.RICART–AGRAWALA
ALGORITHM

Requesting the critical section


(a) When a site Si wants to enter the CS, it broadcasts a
timestamped REQUEST message to all other sites.
(b) When site Sj receives a REQUEST message from site Si,
it sends a REPLY message to site Si if site Sj is neither
requesting nor executing the CS, or if the site Sj is
requesting and Si’s request’s timestamp is smaller than
site Sj’s own request’s timestamp.
Otherwise, the reply is deferred and Sj sets RDj[i] = 1.
2.RICART–AGRAWALA
ALGORITHM
Executing the critical section
(c) Site Si enters the CS after it has received a REPLY
message from every site it sent a REQUEST
message to.

Releasing the critical section


(d) When site Si exits the CS, it sends all the deferred
REPLY messages: ∀j, if RDi[j] = 1, then Si sends a REPLY
message to Sj and sets RDi[j] = 0.
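A single-threaded sketch of Ricart–Agrawala’s deferred-REPLY rule (the class, names, and simplified state handling are illustrative, not the algorithm’s original presentation):

```python
class Site:
    def __init__(self, sid):
        self.sid = sid
        self.state = "IDLE"      # IDLE, REQUESTING, or IN_CS
        self.my_ts = None        # timestamp of own pending request
        self.deferred = []       # sites whose REPLY we hold back (the RD array)
        self.replies = set()     # REPLYs collected for the pending request

    def on_request(self, ts, frm):
        # REPLY at once unless we are in the CS or hold an older pending request
        if self.state == "IDLE" or (self.state == "REQUESTING"
                                    and (ts, frm) < (self.my_ts, self.sid)):
            return True          # immediate REPLY
        self.deferred.append(frm)
        return False             # REPLY deferred until we exit the CS

def request_cs(sites, si, ts):
    si.state, si.my_ts = "REQUESTING", ts
    for sj in sites:
        if sj is not si and sj.on_request(ts, si.sid):
            si.replies.add(sj.sid)
    if len(si.replies) == len(sites) - 1:
        si.state = "IN_CS"       # REPLY from every other site: enter

def release_cs(sites, si):
    si.state, si.replies = "IDLE", set()
    for j in si.deferred:        # send all deferred REPLYs
        sites[j].replies.add(si.sid)
        if len(sites[j].replies) == len(sites) - 1:
            sites[j].state = "IN_CS"
    si.deferred = []

sites = [Site(i) for i in range(3)]
request_cs(sites, sites[0], ts=1)      # lower timestamp: wins
request_cs(sites, sites[1], ts=2)      # site 0 defers its REPLY
print(sites[0].state, sites[1].state)  # IN_CS REQUESTING
release_cs(sites, sites[0])
print(sites[1].state)                  # IN_CS
```

Note there is no RELEASE message: the deferred REPLY sent on exit is what admits the next site.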
RICART–AGRAWALA ALGORITHM
PERFORMANCE
PARAMETERS
1. The algorithm does not use an explicit RELEASE message; the
dequeuing is done on receipt of the REPLY itself. Thus, the total
message overhead is 2(N − 1) messages: (N − 1) REQUESTs for
entering a CR and (N − 1) REPLYs on exiting.
(An improvement over Lamport’s algorithm.)

2. The failure of any process almost halts the algorithm
(recovery measures are needed), as it requires replies from all.

3. The single point of failure is replaced by multiple points of failure.


MAEKAWA’S ALGORITHM
• Maekawa’s algorithm is a quorum- or voting-based
mutual exclusion algorithm.
• It suggests that a process Pi does not need to send
requests to all processes, but only to a subset of
processes (the quorum or voting set) called Ri.
• Each process in a quorum set gives
permission to at most one process at a time.
• Data structures used by each process Pi:
1. The request-deferred queue, RDi
/* of processes REQUESTing and not REPLIED to */
2. A variable ‘Voted’, initially F(ALSE);
/* set T(RUE) when a reply is sent, indicating that Pi has
already granted permission to a process in its quorum */
MAEKAWA’S ALGORITHM
Properties of a Quorum Set
• Any two quorums have at least one common site: Ri ∩ Rj ≠ ∅.
• Every site belongs to its own quorum: Si ∈ Ri.
• All quorums are of equal size, and each site is contained in
the same number of quorums.
DIFFERENT VALID QUORUMS
EXAMPLES
MAEKAWA’S ALGORITHM.
Requesting the critical section:
(a) A site Si requests access to the CS by sending REQUEST(i) messages to all
sites in its request set Ri.
(b) When a site Sj receives the REQUEST(i) message, it sends a REPLY(j)
message to Si provided it hasn’t sent a REPLY message to a site since its
receipt of the last RELEASE message. Otherwise, it queues up the
REQUEST(i) for later consideration.

Executing the critical section:


(c) Site Si executes the CS only after it has received a REPLY message from
every site in Ri.

Releasing the critical section:


(d) After the execution of the CS is over, site Si sends a RELEASE(i) message to
every site in Ri.
(e) When a site Sj receives a RELEASE(i) message from site Si, it sends a REPLY
message to the next site waiting in the queue and deletes that entry from
the queue. If the queue is empty, then the site updates its state to reflect that
it has not sent out any REPLY message since the receipt of the last RELEASE
message.
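One simple way to build quorums with the required pairwise-intersection property is a grid arrangement; note this gives quorums of size 2√N − 1, whereas Maekawa’s optimal construction (based on finite projective planes) achieves roughly √N:

```python
import math

def grid_quorums(n):
    """Quorums from a k x k grid (n = k*k): site i's quorum is its row plus its column."""
    k = math.isqrt(n)
    assert k * k == n, "this sketch needs a perfect-square number of sites"
    quorums = []
    for i in range(n):
        r, c = divmod(i, k)
        row = {r * k + j for j in range(k)}   # all sites in i's row
        col = {j * k + c for j in range(k)}   # all sites in i's column
        quorums.append(row | col)
    return quorums

qs = grid_quorums(9)   # 9 sites in a 3x3 grid
print(sorted(qs[4]))   # [1, 3, 4, 5, 7]  (row {3,4,5} and column {1,4,7})
# Every pair of quorums intersects, so two sites can never both be fully granted:
print(all(qs[a] & qs[b] for a in range(9) for b in range(9)))  # True
```

Any two row–column crosses share at least one grid cell, and that shared site votes for at most one requester at a time, which is what enforces mutual exclusion.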
PROBLEM OF DEADLOCKS
• Though Maekawa’s algorithm has been
proven to be correct and safe, the
property of liveness is not satisfied by it,
because it can lead to deadlock.

• Maekawa’s algorithm can deadlock because
a site is exclusively locked by other sites and
requests are not prioritized by their
timestamps. Thus,
a site may send a REPLY message to a site
and later force a higher-priority request from
another site to wait.
PERFORMANCE PARAMETERS

Size of a request set (quorum size) is √N.

1. An execution of the CR requires √N REQUEST,
√N REPLY and √N RELEASE messages, thus
requiring 3√N messages in total per CR execution.

2. Synchronization delay is 2T.

3. M = K = √N works best.
TOKEN-BASED ALGORITHMS

• In software, a logical ring is constructed in
which each process is assigned a position in
the ring.
• The ring positions may be allocated in
numerical order of network addresses or by
some other means. It does not matter what
the ordering is. All that matters is that each
process knows who is next in line after itself.
TOKEN-BASED ALGORITHMS

• When the ring is initialized, process 0 is given a
token. The token circulates around the ring. It is
passed from process k to process k + 1 in point-
to-point messages.
• When a process acquires the token from its
neighbor, it checks to see if it needs to access
the shared resource. If so, the process goes
ahead, does all the work it needs to, and
releases the resource.
• After it has finished, it passes the token along
the ring.
• If a process is handed the token by its neighbor
and is not interested in the resource, it just
passes the token along.
• As a consequence, when no process needs the
resource, the token just circulates at high speed
around the ring.
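The circulating token can be sketched as a bounded loop (a single-process stand-in for the ring of point-to-point messages; the function name and bounded lap count are artifacts of the sketch):

```python
def token_ring(n, wants, laps=2):
    """Circulate the token around an n-process ring; processes in `wants`
    enter the critical region once when the token reaches them."""
    entered = []
    token = 0                       # process 0 holds the token initially
    pending = set(wants)
    for _ in range(laps * n):       # bounded circulation, for the sketch only
        if token in pending:
            entered.append(token)   # use the resource, then release it
            pending.discard(token)
        token = (token + 1) % n     # pass the token to the next process
        if not pending:
            break
    return entered

print(token_ring(5, wants={3, 1}))  # [1, 3] -- granted in ring order, one at a time
```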
TOKEN-BASED ALGORITHMS

• Correctness:
Only one process has the token at any instant, so
only one process can actually get to the
resource.

• Problems:
⚫ If the token is ever lost, it must be regenerated.
In fact, detecting that it is lost is difficult, since
the amount of time between successive
appearances of the token on the network is
unbounded.
⚫ The algorithm also runs into trouble if a process
crashes.
SUZUKI–KASAMI’S
BROADCAST ALGORITHM
• Broadcast a request for the token.
• The process with the token sends it to the
requestor if it does not need it.
• Issues:
– Current versus outdated requests
– Determining sites with pending requests
– Deciding which site to give the token to
• The request message REQUEST(i, n):
request message from node i for its
nth critical section execution.

• Process data structure:
⮚ Request array Ri[j]; j = 1…N
⮚ On receiving REQUEST(i, n):
– Set Ri[j] = max(Ri[j], n)
• Token data structure:
⮚ Token array Token[i]
⮚ Queue Qt
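A condensed single-process sketch of these data structures, using the common LN/Q names for the token’s array and queue (token passing is reduced to a variable assignment, and the idle-holder check is simplified):

```python
from collections import deque

N = 3
RN = [[0] * N for _ in range(N)]  # RN[i][j]: highest request number site i has seen from j
LN = [0] * N                      # carried on the token: LN[j] = j's last satisfied request
Q = deque()                       # carried on the token: queue of waiting sites
holder, in_cs = 0, False          # site 0 holds the (idle) token initially

def request_cs(i):
    """Site i broadcasts REQUEST(i, n); an idle token holder passes the token over."""
    global holder, in_cs
    RN[i][i] += 1
    for j in range(N):                        # broadcast: every site updates RN[.][i]
        RN[j][i] = max(RN[j][i], RN[i][i])
    if holder == i:
        in_cs = True                          # already holding the token
    elif not in_cs and RN[holder][i] == LN[i] + 1:
        holder, in_cs = i, True               # idle holder sends the token: current request
    # otherwise the request stays outstanding until the holder releases

def release_cs(i):
    global holder, in_cs
    LN[i] = RN[i][i]                          # mark i's request as satisfied
    for j in range(N):                        # enqueue sites with outstanding requests
        if j not in Q and RN[i][j] == LN[j] + 1:
            Q.append(j)
    in_cs = False
    if Q:
        holder, in_cs = Q.popleft(), True     # hand the token to the next waiting site

request_cs(1)    # the token at site 0 is idle, so site 1 gets it immediately
request_cs(2)    # site 1 is in its CS: site 2's request stays outstanding
print(holder)    # 1
release_cs(1)
print(holder)    # 2  (dequeued from the token's queue)
```

The comparison RN[·][j] == LN[j] + 1 is how current requests are distinguished from outdated ones.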
SUZUKI–KASAMI’S
BROADCAST ALGORITHM
PERFORMANCE PARAMETERS

1. It is simple and efficient.

2. The algorithm requires at most N messages to obtain
the token to enter the CR.

3. The synchronization delay in this algorithm is 0 or
T (one message delay): zero if the process
already holds the token, otherwise one message delay.
SINGHAL’S HEURISTIC
ALGORITHM
1. E = Executing and having the token

2. H = Holding the token and not executing

3. R = Requesting the token

4. N = Neutral, none of the above

DATA STRUCTURES
ARRAYS INITIALIZATION
PERFORMANCE PARAMETERS

1. The number of REQUEST messages can
vary from N/2 (average case) to N (worst case).
RAYMOND’S TREE-BASED ALGORITHM

• This algorithm uses a spanning tree to
reduce the number of messages exchanged
per critical section execution.
• The network is viewed as a graph; a
spanning tree of a network is a tree that
contains all N nodes.
• The algorithm assumes that the underlying
network guarantees message delivery.
• All nodes of the network are completely
reliable.
RAYMOND’S TREE-BASED ALGORITHM

• A node needs to hold information about, and
communicate with, only its immediate
neighboring nodes. Similar to the tokens
used in token-based algorithms, this
algorithm uses a concept of privilege.
• Only one node can be in possession of the
privilege (called the privileged node) at any
time, except when the privilege is in transit
from one node to another in the form of a
PRIVILEGE message.
• When there are no nodes requesting the
privilege, it remains in the possession of the
node that last used it.
THE HOLDER VARIABLES

• Each node maintains a HOLDER variable that
provides information about the placement of
the privilege in relation to the node itself.
• A node stores in its HOLDER variable the
identity of a node that it thinks has the privilege
or leads to the node having the privilege.
• For two nodes X and Y, if HOLDERX = Y, we
could redraw the undirected edge between the
nodes X and Y as a directed edge from X to Y.
• For instance, if node G holds the privilege,
Figure 4 can be redrawn with logically directed
edges.
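The HOLDER pointers can be illustrated with a small dictionary; the tree below is hypothetical, chosen so that node G starts out privileged, as in the Figure 4 example:

```python
# Hypothetical 7-node tree. Each node's HOLDER entry names the neighbor
# "toward" the privilege; the privileged node points to itself.
HOLDER = {"A": "B", "B": "D", "C": "B", "D": "G", "E": "D", "F": "G", "G": "G"}

def find_privileged(node):
    """Follow HOLDER pointers until a node points to itself."""
    while HOLDER[node] != node:
        node = HOLDER[node]
    return node

print(find_privileged("A"))   # G
print(find_privileged("F"))   # G

def move_privilege(dst):
    """Passing the PRIVILEGE along the path flips each HOLDER pointer on the way."""
    path = [dst]
    while HOLDER[path[-1]] != path[-1]:
        path.append(HOLDER[path[-1]])           # walk up toward the privileged node
    for child, parent in zip(path, path[1:]):
        HOLDER[parent] = child                  # reverse edges back toward dst
    HOLDER[dst] = dst

move_privilege("A")
print(find_privileged("C"))   # A
```

From any node, following HOLDER pointers always leads to the current privileged node, which is the invariant the algorithm maintains.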
RAYMOND’S TREE-BASED
ALGORITHM
PERFORMANCE PARAMETERS

The algorithm exchanges only O(log N) messages under
light load, and approximately four messages under heavy load,
to execute the CR, where N is the number of nodes in the
network.
REFERENCE
1. Ajay Kshemkalyani and Mukesh Singhal, Distributed Computing:
Principles, Algorithms, and Systems, Cambridge University Press.
