Distributed Computing UNIT-3

Unit III covers Distributed Mutex and Deadlock, focusing on distributed mutual exclusion algorithms, including Ricart-Agrawala's and Lamport's algorithms, as well as deadlock detection strategies in distributed systems. It discusses system models, requirements for mutual exclusion, performance metrics, and various algorithm classifications. The document emphasizes the importance of message passing and the challenges of ensuring fairness and deadlock freedom in distributed environments.


UNIT III

3 Distributed Mutex and Deadlock

Syllabus
Distributed Mutual Exclusion Algorithms : Introduction - Preliminaries - Lamport's Algorithm - Ricart-Agrawala's Algorithm - Token-Based Algorithms : Suzuki-Kasami's Broadcast Algorithm. Deadlock Detection in Distributed Systems : Introduction - System Model - Preliminaries - Models of Deadlocks - Chandy-Misra-Haas Algorithm for the AND Model and OR Model.

Contents
3.1 Distributed Mutual Exclusion Algorithms : Introduction
3.2 Preliminaries
3.3 Lamport's Algorithm
3.4 Ricart-Agrawala's Algorithm ... May-22, Dec.-22, Marks 15
3.5 Token-Based Algorithms ... Dec.-22, Marks 15
3.6 Deadlock Detection in Distributed Systems : Introduction ... May-22, Marks 13
3.7 System Model
3.8 Preliminaries : Deadlock Handling Strategies
3.9 Models of Deadlocks ... Dec.-22, Marks 13
3.10 Chandy-Misra-Haas Algorithm for the AND Model
3.11 Chandy-Misra-Haas Algorithm for the OR Model
3.12 Two Marks Questions with Answers

3.1 Distributed Mutual Exclusion Algorithms : Introduction

• Mutual exclusion ensures that concurrent processes make serialized access to shared resources or data. It requires that the actions performed by a user on a shared resource must be atomic.

• In a distributed system, neither shared variables nor a local kernel can be used to implement mutual exclusion. Thus, mutual exclusion has to be based exclusively on message passing, in the context of unpredictable message delays and without complete knowledge of the state of the system.

• Mutual exclusion : Makes sure that concurrent processes access shared resources or data in a serialized way. If a process, say Pi, is executing in its critical section, then no other process can be executing in its critical section.

• Example : Updating a database or sending control signals to an I/O device.

• The problem of mutual exclusion frequently arises in distributed systems whenever concurrent access to shared resources by several sites is involved.

• Mutual exclusion is a fundamental issue in the design of distributed systems.

• Entry section : The code executed in preparation for entering the critical section.

• Critical section : The code to be protected from concurrent execution.

• Exit section : The code executed upon leaving the critical section.

• Remainder section : The rest of the code.

• Each process cycles through these sections in the order : remainder, entry, critical, exit. A minimal sketch of this cycle is shown below.
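As an illustration only : a minimal, single-machine sketch of this cycle, in which a hypothetical lock stands in for the distributed entry and exit protocols described in the rest of this unit.

import threading

lock = threading.Lock()    # stands in for the distributed entry/exit protocol
shared_counter = 0         # the shared resource

def process(iterations):
    global shared_counter
    for _ in range(iterations):
        local_work = sum(range(100))   # remainder section : work that does not touch the resource
        lock.acquire()                 # entry section : obtain permission
        try:
            shared_counter += 1        # critical section : serialized access to the shared data
        finally:
            lock.release()             # exit section : give up permission

threads = [threading.Thread(target=process, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)                  # 4000 : every update was applied atomically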

3.2 Preliminaries

3.2.1 System Model

• Distributed mutual exclusion algorithms must deal with unpredictable message delays and incomplete knowledge of the system state.

System model
• The system consists of N sites, S1, S2, ..., SN. We assume that a single process is running on each site. The process at site Si is denoted by pi.

• At any instant, a site may have several requests for the critical section. A site queues up these requests and serves them one at a time.

• A site may be in one of three states :
1. Requesting CS
2. Executing CS
3. Neither requesting nor executing CS

Classification of mutual exclusion algorithms
• Different types of algorithms are used to solve the problem of mutual exclusion in distributed systems. These algorithms differ in their communication topology (for example ring, bus or star) and in the type of information they maintain.

• These algorithms are divided into two classes :
1. Non-token based : Sites exchange multiple rounds of messages for their local states to stabilize before one of them enters the critical section.
2. Token based : Permission (a unique token) is passed around from one site to another. A site is allowed to enter its critical section if it possesses the token, and it continues to hold the token until the execution of the critical section is over.

3.2.2 Requirements of Mutual Exclusion

1. Freedom from deadlocks : Two or more sites should not endlessly wait for messages that will never arrive.

2. Freedom from starvation : A site should not wait indefinitely while other sites repeatedly access the CS.

3. Strict fairness : Requests are served in the (logical) order in which they arrive.

4. Fault tolerance : An algorithm should be able to detect and recover from failures.

3.2.3 Performance Metrics

1. Message complexity : The number of messages required per CS execution by a site.

2. Synchronization delay : The time required, after a site leaves the CS, before the next site can enter the CS.

3. Response time : The time interval a request waits for its CS execution to be over after its request messages have been sent out.

4. System throughput : The rate at which the system executes requests for the CS.

   System throughput = 1 / (SD + E)

   where SD is the synchronization delay and E is the average critical section execution time. A small worked example follows this list.
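As a quick numeric check of the formula, with purely assumed values :

# Assumed, illustrative values : synchronization delay of 2 ms and an
# average critical section execution time of 8 ms.
SD = 0.002                 # seconds
E = 0.008                  # seconds
throughput = 1 / (SD + E)
print(throughput)          # 100.0 CS requests served per second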

• Fig. 3.2.1 shows the synchronization delay.

• Synchronization delay (SD) : the interval from the instant one site leaves the CS to the instant the next site enters the CS.

• Response time : the interval from the instant a site issues its request to the instant it leaves the CS.

• Throughput : CS requests handled per time unit = 1 / (SD + E), where E is the average execution time of the CS.


Fig. 3.2.1 Synchronization delay : the interval on the time axis between the instant the last site exits the CS and the instant the next site enters the CS

• The performance of a mutual exclusion algorithm depends upon the loading conditions of the system.

• Performance may depend on whether the load is low or high. Best case, worst case and average case are all of interest.

• If the load is high, then there are always pending requests for mutual exclusion.

• Fig. 3.2.2 shows the response time.

Fig. 3.2.2 Response time : the interval on the time axis from the instant the CS request arrives and its request messages are sent out until the site exits the CS; the CS execution time forms the final part of this interval


3.3 Lamport's Algorithm

• Each process freely and equally competes for the right to use the shared resource; requests are arbitrated purely by distributed agreement, and a requesting site needs permission from all other processes.

• Examples of non-token based algorithms are Lamport's algorithm, the Ricart-Agrawala algorithm and the Roucairol-Carvalho algorithm.

• Let Ri be the request set of site Si, i.e. the set of sites from which Si needs permission when it wants to enter the CS.

• Each site Si keeps a request queue of requests ordered by logical timestamp.

Requesting the critical section
• Si sends a REQUEST(Ci, i) message to every other site and places the request in its own request queue.

• When Sj receives the REQUEST(Ci, i) message, it returns a timestamped REPLY to Si and places the REQUEST in its request queue.

Conditions for entering the CS
• L1 : Si has received a message with a timestamp larger than (Ci, i) from all other sites.

• L2 : Si's request is at the top of its request queue.

Releasing the CS
• Si removes (Ci, i) from its request queue and sends a RELEASE message to all sites in its request set.

• When Sj receives the RELEASE message, it removes Si's request from its request queue. A minimal sketch of these three steps is given below.
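The following is a minimal, single-site sketch of these rules; message transport is assumed to be handled elsewhere, and the class and method names are illustrative, not part of Lamport's original description.

from dataclasses import dataclass, field
import heapq

@dataclass
class LamportSite:
    """One site in Lamport's mutual exclusion algorithm (sketch)."""
    site_id: int
    n_sites: int
    clock: int = 0
    request_queue: list = field(default_factory=list)   # min-heap of (timestamp, site_id)
    replies: set = field(default_factory=set)           # sites whose REPLY we have received
    my_request: tuple = None

    def request_cs(self):
        """Requesting the CS : timestamp the request, queue it, and broadcast REQUEST."""
        self.clock += 1
        self.my_request = (self.clock, self.site_id)
        heapq.heappush(self.request_queue, self.my_request)
        self.replies.clear()
        return ("REQUEST", self.my_request)              # to be sent to every other site

    def on_request(self, ts, sender):
        """On REQUEST(ts, sender) : queue it and answer with a timestamped REPLY."""
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.request_queue, (ts, sender))
        return ("REPLY", self.clock, self.site_id)

    def on_reply(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        self.replies.add(sender)

    def can_enter_cs(self):
        """L1 : a larger-timestamped REPLY from all other sites; L2 : own request at queue head."""
        l1 = len(self.replies) == self.n_sites - 1
        l2 = bool(self.request_queue) and self.request_queue[0] == self.my_request
        return l1 and l2

    def release_cs(self):
        """Releasing the CS : remove the own request and broadcast RELEASE."""
        self.request_queue.remove(self.my_request)
        heapq.heapify(self.request_queue)
        self.my_request = None
        return ("RELEASE", self.site_id)

    def on_release(self, sender):
        self.request_queue = [(t, s) for (t, s) in self.request_queue if s != sender]
        heapq.heapify(self.request_queue)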

Correctness
• Assume two sites S1 and S2 are in the CS simultaneously. Then [L1, L2] hold at both sites, i.e. both have received messages with larger timestamps from all other sites and both requests are at the top of their own queues.

• Without loss of generality, assume S1's request has the smaller timestamp. For S2 to satisfy L1, it must have received a message from S1 with a timestamp larger than its own request; since channels are FIFO, S1's earlier request must already be in S2's queue and ahead of S2's request. But S2's request is at the top of its queue, which is a contradiction.

Optimization

• In Lamport's algorithm, REPLY messages can be omitted in certain situations.

• For example, if site Sj receives a REQUEST message from site Si after it has sent its own REQUEST message with a timestamp higher than the timestamp of site Si's request, then site Sj need not send a REPLY message to site Si.

• This is because, when site Si receives site Sj's request with a timestamp higher than its own, it can conclude that site Sj does not have any smaller-timestamp request which is still pending.

• With this optimization, Lamport's algorithm requires between 2(N - 1) and 3(N - 1) messages per CS execution.

• Fig. 3.3.1 shows the operation of Lamport's algorithm.

Step 1 : Sites S1 and S2 are making requests for the critical section.


Fig. 3.3.1 (a) Sites S1 (request timestamped (2, 1)) and S2 (request timestamped (1, 2)) broadcast their REQUEST messages; every site places both requests in its request queue

Step 2 : Site S2 enters the critical section.

Fig. 3.3.1 (b) Site S2's request (1, 2) has the smallest timestamp and is at the head of every request queue, so S2 enters the critical section


Step 3 : Site S2 exits the critical section and sends RELEASE messages.

Fig. 3.3.1 (c) Site S2 leaves the critical section and sends RELEASE messages; every site removes S2's request (1, 2) from its request queue

Step 4 : Site S1 enters the critical section.

Fig. 3.3.1 (d) Site S1's request (2, 1) is now at the head of every request queue, so S1 enters the critical section

Lamport evaluation
• Deadlock freedom : processes do not wait forever.

• Fairness : CS requests are granted in the order of their logical clocks, which is fair.

• Fault tolerance :

a. If a process fails before making a request, nothing is affected.

b. If a process fails after issuing a few requests, other sites may block waiting for messages that never arrive.


c. One method : use a probabilistic failure detector, or, if a process times out, remove it from all request queues.

d. If a process fails inside the critical section, the CS remains blocked until the failure is detected and the failed process's requests are cleaned up.

Performance
a. 3(N - 1) messages per CS execution.

b. Synchronization delay is T (the average message latency).

3.4 Ricart-Agrawala's Algorithm AU: May-22, Dec.-22

• The Ricart-Agrawala algorithm is an optimization of Lamport's algorithm.

• The Ricart-Agrawala algorithm uses only two types of messages : REQUEST and REPLY.

• It is assumed that all processes keep a logical clock which is updated according to the clock rules.

• The algorithm requires a total ordering of requests. Requests are ordered according to their global logical timestamps; if timestamps are equal, process identifiers are compared to order them.

• The process that requires entry to a CS multicasts the request message to all other processes competing for the same resource. A process is allowed to enter the CS when all processes have replied to this message. The request message consists of the requesting process's timestamp (logical clock) and its identifier.

• Each process keeps its state with respect to the CS : released, requested, or held.
Algorithm :

Requesting the critical section :

1. When site Si wants to enter the critical section, it broadcasts a timestamped REQUEST message to all sites.

2. When a process receives a REQUEST message, it may be in one of three states :

Case 1 : The receiver is not interested in the critical section; it sends a REPLY (OK) to the sender.

Case 2 : The receiver is in the critical section; it does not reply, and adds the request to a local queue of requests.

Case 3 : The receiver also wants to enter the critical section and has already sent its own request. In this case, the receiver compares the timestamp in the received message with the one it has sent out; the earliest timestamp wins. If the receiver is the loser, it sends a REPLY (OK) to the sender. If the receiver has the earlier timestamp, then it is the winner and does not reply; instead, it adds the request to its queue.

Executing the critical section :

3. Site Si enters the critical section after it has received a REPLY message from all the sites in its request set.


Releasing the critical section :

4. When site Si exits the CS, it sends a deferred REPLY message to every site whose request is waiting in its local queue and deletes those entries from the queue.

5. If the queue is empty, there is nothing to send; the site simply updates its state to released, reflecting that it has no deferred REPLY messages outstanding.

Optimization :
• Once site Si has received a REPLY message from a site Sj, the authorization implicit in this message remains valid until Si sends a REPLY message to Sj.

• Fig. 3.4.1 shows the operation of the Ricart-Agrawala algorithm. A compact sketch of the reply/defer logic at a single site is given below.
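The following is a compact sketch of this reply/defer logic at one site, assuming a send(destination, message) callable is supplied by the surrounding messaging layer; the names used here are illustrative, not taken from the original description.

from dataclasses import dataclass, field

RELEASED, REQUESTED, HELD = "released", "requested", "held"

@dataclass
class RicartAgrawalaSite:
    site_id: int
    n_sites: int
    clock: int = 0
    state: str = RELEASED
    my_ts: tuple = None                             # (timestamp, site_id) of our own request
    deferred: list = field(default_factory=list)    # senders we have not yet replied to
    replies_received: int = 0

    def request_cs(self, send):
        """Broadcast a timestamped REQUEST to all other sites."""
        self.clock += 1
        self.state = REQUESTED
        self.my_ts = (self.clock, self.site_id)
        self.replies_received = 0
        for j in range(self.n_sites):
            if j != self.site_id:
                send(j, ("REQUEST", self.my_ts))

    def on_request(self, ts, sender, send):
        """Reply immediately, or defer, depending on our own state and timestamp."""
        self.clock = max(self.clock, ts[0]) + 1
        defer = (self.state == HELD or
                 (self.state == REQUESTED and self.my_ts < ts))
        if defer:
            self.deferred.append(sender)            # reply later, when we leave the CS
        else:
            send(sender, ("REPLY", self.site_id))

    def on_reply(self):
        self.replies_received += 1
        if self.replies_received == self.n_sites - 1:
            self.state = HELD                       # all permissions collected : enter the CS

    def release_cs(self, send):
        """On exit, answer every deferred request with a REPLY."""
        self.state = RELEASED
        self.my_ts = None
        for j in self.deferred:
            send(j, ("REPLY", self.site_id))
        self.deferred.clear()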

Step 1 : Sites S1 and S2 are making requests for the critical section.

Fig. 3.4.1 (a) Sites S1 (request (2, 1)) and S2 (request (1, 2)) broadcast REQUEST messages to all other sites

Step 2 : Site S2 enters the critical section.

Fig. 3.4.1 (b) Site S2's request has the earlier timestamp, so it receives REPLY messages from all sites and enters the critical section, while S1's request (2, 1) is deferred


Step 3 : Site S2 exits the CS and sends the deferred REPLY message to S1.

Fig. 3.4.1 (c) Site S2 leaves the critical section and sends its deferred REPLY to S1

Step 4 : Site S1 enters the critical section.

Fig. 3.4.1 (d) Having received REPLY messages from all other sites, S1 enters the critical section


University Questions

1. Show that in the Ricart-Agrawala algorithm the critical section is accessed in increasing order of timestamp. AU : May-22, Marks 15

2. Explain the Ricart-Agrawala algorithm with an example. AU : Dec.-22, Marks 13
3.5 Token-Based Algorithms AU : Dec.-22

• In a token-based algorithm, a unique token is shared among all the sites in the distributed computing system. In a non-token based algorithm, there is no token, and no concept of sharing a token for access.


• Examples of token-based algorithms are the Suzuki-Kasami algorithm and Raymond's tree algorithm.

• A site can access the lock (critical section) only if it possesses the token. Every process maintains a sequence number (request id).

3.5.1 Suzuki-Kasami's Broadcast Algorithm

• A logical token representing the access right to the shared resource is passed in a regulated fashion among the processes; whoever holds the token is allowed to enter the critical section.

• A unique token is shared among all sites. A sequence number is used instead of a timestamp.

• A site increments its sequence number every time it requests the token. This means the issues of liveness (deadlock freedom, starvation) are the more interesting part of the algorithm.

• To enter the CS, a site broadcasts a REQUEST message to all other sites.

• Upon receiving a REQUEST message, the site that holds the token sends the token to the requesting site only if it is not itself in the CS. If it is in the CS, it sends the token only after it has exited the CS.

• A site can repeatedly enter the CS as long as it holds the token and there are no pending requests from other sites.

Major design issues
• There are no RELEASE/REPLY messages as in assertion (permission) based protocols.

• Each site needs to be able to distinguish outdated REQUEST messages from current REQUEST messages (outstanding requests).

• The site with the token needs to know which site to send the token to next.

Important data structures
• The token consists of Q, a queue of requesting sites, and an array of integers LN[1..N], where N is the number of sites. LN[j] is the sequence number of the request that site Sj executed most recently, so LN keeps track of the latest request that each site has executed.

• A site Si keeps an array of integers RNi[1..N], where RNi[j] is the largest sequence number received so far from Sj. A REQUEST(j, n) message indicates that site Sj is requesting the CS with n as its sequence number.

Algorithm :

Requesting the CS :
1. Site Si increments its sequence number RNi[i] and sends a REQUEST(i, sn) message to all other sites, where sn is the incremented sequence number.

2. When site Sj receives this message, it sets RNj[i] = max(RNj[i], sn). If Sj holds an idle token, it sends the token to Si if RNj[i] = LN[i] + 1.

Executing the CS :
3. Site Si executes the CS when it has received the token.

Releasing the CS :
4. Set LN[i] <- RNi[i].

5. For each site Sj not in the token queue, append Sj's ID to the queue if RNi[j] = LN[j] + 1.

6. If the token queue is nonempty after this update, delete the top ID from the queue and send the token to the site indicated by that ID. A minimal sketch of these steps is given below.
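The following is a minimal sketch of the data structures and steps at a single site; Token, broadcast and send_token are assumed helper names for this illustration, not part of the original algorithm description.

from dataclasses import dataclass, field
from collections import deque

@dataclass
class Token:
    Q: deque = field(default_factory=deque)    # queue of requesting site IDs
    LN: list = field(default_factory=list)     # LN[j] : last request of Sj that was served

@dataclass
class SuzukiKasamiSite:
    site_id: int
    n_sites: int
    RN: list = field(init=False)               # RN[j] : largest sequence number seen from Sj
    token: Token = None                        # non-None only at the current token holder
    in_cs: bool = False

    def __post_init__(self):
        self.RN = [0] * self.n_sites

    def request_cs(self, broadcast):
        """Step 1 : bump own sequence number and broadcast REQUEST(i, sn)."""
        self.RN[self.site_id] += 1
        broadcast(("REQUEST", self.site_id, self.RN[self.site_id]))

    def on_request(self, j, sn, send_token):
        """Step 2 : record the request; pass an idle token if this is the next unserved request."""
        self.RN[j] = max(self.RN[j], sn)
        if self.token is not None and not self.in_cs and self.RN[j] == self.token.LN[j] + 1:
            tok, self.token = self.token, None
            send_token(j, tok)

    def on_token(self, token):
        """Step 3 : execute the CS once the token arrives."""
        self.token = token
        self.in_cs = True

    def release_cs(self, send_token):
        """Steps 4-6 : update LN, enqueue newly outstanding requests, forward the token."""
        self.in_cs = False
        self.token.LN[self.site_id] = self.RN[self.site_id]
        for j in range(self.n_sites):
            if j not in self.token.Q and self.RN[j] == self.token.LN[j] + 1:
                self.token.Q.append(j)
        if self.token.Q:
            nxt = self.token.Q.popleft()
            tok, self.token = self.token, None
            send_token(nxt, tok)

# One site starts out holding the token, e.g. site0.token = Token(LN=[0] * n_sites).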

Theorem : A requesting site enters the CS in finite time.

Proof : A token request reaches the other sites in finite time, and this request will be placed in the token queue in finite time. Since there can be at most N - 1 requests ahead of it in the queue, the requesting site will enter the CS in finite time.

Performance :

1. The algorithm is simple and efficient. It requires either 0 or N messages per CS invocation.

2. Synchronization delay : 0 or T. No message is needed and the synchronization delay is zero if the site holds the idle token at the time of its request.

University Question

1. Analyse Suzuki-Kasami's broadcast algorithm for mutual exclusion in a distributed system. AU : Dec.-22, Marks 15

3.6 Deadlock Detection in Distributed Systems : Introduction AU : May-22

• A distributed system consists of a number of sites connected by a network. Each site maintains some of the resources of the system.

• Processes with globally unique identifiers run on the distributed system. They make resource requests to a controller. There is one controller per site.

• If the resource is local, the process makes a request to the local controller.


• If the desired resource is at a remote site, the process sends a request message to that remote site. After a process makes a request, but before it is granted, the process is blocked and is said to be dependent on the process that holds the desired resource.

• The controller at each site could maintain a WFG on the process requests that it knows about. This is the local WFG.

• However, each site's WFG could be cycle free and yet the distributed system could be deadlocked. This is called global deadlock.

• This would occur in the following situation :

1. Process A at site 1 holds a lock on resource X.

2. Process A has requested, but has not been granted, resource Y at site 2.

3. Process B at site 2 holds a lock on resource Y.

4. Process B has requested, but has not been granted, resource X at site 1.

• Both processes are blocked by the other one. There is a global deadlock.

• However, the deadlock will not be discovered at either site unless they exchange information via some detection scheme.

3.6.1 Necessary Conditions

• A process can be in one of two states : running or blocked.

• In the running state (also called the active state), a process has all the needed resources and is either executing or is ready for execution.

• In the blocked state, a process is waiting to acquire some resource.

• Deadlock is a situation in which a set of processes is blocked, each waiting for another process in the set to release a resource.

• The following conditions should hold simultaneously for a deadlock to occur :
1. Mutual exclusion
2. Hold and wait
3. Circular wait
4. No pre-emption

1. Mutual exclusion : Only one process may use a resource at a time. Once a process has been allocated a particular resource, it has exclusive use of the resource; no other process can use a resource while it is allocated to a process.

2. Hold and wait : A process may hold a resource at the same time it requests another one.

3. Circular wait : A situation can arise in which one process holds a resource while it requests another, and a second process holds that resource while it requests the first one. Each process holds at least one resource needed by the next process in the chain, and more than two processes may be involved in a circular wait (e.g. Fig. 3.6.1).

Fig. 3.6.1 Three deadlocked processes, each holding one resource while requesting the resource held by the next process in the cycle

4. No pre-emption : No resource can be forcibly removed from a process holding it (resources can be released only by the explicit action of the process, rather than by the action of an external authority).

• A deadlock is possible only if all four of these conditions hold simultaneously in the community of processes. These conditions are necessary for a deadlock to exist.

University Question

1. Discuss with a suitable example to show that a deadlock cannot occur if any one of the four conditions is absent. AU : May-22, Marks 13

3.7 System Model

• Resource types R1, R2, ..., Rm (resources : CPU cycles, memory space, I/O devices).

• Each resource type Ri has Wi instances.

• Each process utilizes a resource as follows : 1. Request  2. Use  3. Release

3.7.1 Wait-For Graph

• The state of process-resource interaction in distributed systems can be modeled by a bipartite directed graph called a resource allocation graph. The nodes of this graph are the processes and resources of a system, and the edges of the graph depict assignments or pending requests.

• A pending request is represented by a request edge directed from the node of the requesting process to the node of the requested resource. A resource assignment is represented by an assignment edge directed from the node of an assigned resource to the node of the assigned process.

• A system is deadlocked if its resource allocation graph contains a directed cycle or a knot. Fig. 3.7.1 (a) shows a resource allocation graph.


Fig. 3.7.1 (a) Resource allocation graph : request edges point from processes to resources, assignment edges point from resources to processes

• In distributed systems, the system state can also be modeled by a directed graph called a Wait-For Graph (WFG). In a WFG, nodes are processes, and there is a directed edge from node P1 to node P2 if P1 is blocked and is waiting for P2 to release some resource.

• A system is deadlocked if and only if there is a directed cycle or a knot (depending upon the underlying model) in the WFG. The wait-for graph resulting from the resource allocation graph is shown in Fig. 3.7.1 (b).

Fig. 3.7.1 (b) Wait-for graph derived from the resource allocation graph of Fig. 3.7.1 (a)

• A minimal sketch of cycle detection in such a WFG is given below.
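As an illustration of how a controller might test a WFG for a cycle (single-resource/AND model), here is a small sketch; the graph representation, a dict of adjacency lists, is an assumption made only for this example.

def wfg_has_cycle(wfg):
    """Detect a directed cycle in a wait-for graph given as {process: [processes it waits for]}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wfg}

    def visit(p):
        color[p] = GREY
        for q in wfg.get(p, []):
            if color.get(q, WHITE) == GREY:      # back edge : a cycle exists
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wfg)

# The global-deadlock example from above : A waits for B, B waits for A.
print(wfg_has_cycle({"A": ["B"], "B": ["A"]}))   # True
print(wfg_has_cycle({"A": ["B"], "B": []}))      # False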

3.8 Preliminaries: Deadlock Handling Strategies


• There are three strategies for handling deadlocks :
1. Deadlock prevention 2. Deadlock avoidance 3. Deadlock detection.

3.8.1 Deadlock Prevention

First method :
• Prevent the circular-wait condition by defining a linear ordering of resource types. A process can be assigned resources only according to the linear ordering.

• Disadvantages :
1. Resources cannot be requested in the order in which they are actually needed.
2. Resources may be held longer than necessary.

Second method :
• Prevent the hold-and-wait condition by requiring a process to acquire all needed resources before starting execution.

• Disadvantages :
1. Inefficient use of resources.
2. Reduced concurrency.
3. A process can become deadlocked during the initial resource acquisition.
4. Future needs of a process cannot always be predicted.


Third method : Use of timestamps

• Example : Use timestamps for transactions to a database; each transaction carries the timestamp of its creation.

• The circular wait condition is avoided by comparing timestamps : a strict ordering of transactions is obtained, and the transaction with the earlier timestamp always wins.

1. "Wait-die" method : A non-preemptive approach. If a younger process is using the resource, then the older process (that wants the resource) waits. If an older process is holding the resource, the younger process (that wants the resource) kills itself. This forces the resource utilization graph to be directed from older to younger processes, making cycles impossible. This algorithm is known as the wait-die algorithm. Fig. 3.8.1 shows the wait-die method.

   if (e(T2) < e(T1))
       halt T2 ('wait');
   else
       kill T2 ('die');

Fig. 3.8.1 Wait-die method : an older process A (t = 7) that wants a resource held by the younger B (t = 11) waits; a younger B that wants a resource held by the older A dies

2. "Wound-wait" method : A pre-emptive approach. An alternative method by which resource-request cycles may be avoided is to have an old process preempt (kill) the younger process that holds a resource. If a younger process wants a resource that an older one is using, then it waits until the old process is done. In this case the graph flows from young to old, and cycles are again impossible. This variant is called the wound-wait algorithm. Fig. 3.8.2 shows the wound-wait method; both rules are also summarized in a small sketch after Fig. 3.8.2.

   if (e(T2) < e(T1))
       kill T1 ('wound');
   else
       halt T2 ('wait');

Fig. 3.8.2 Wound-wait method : an older process A (t = 7) that wants a resource held by the younger B (t = 11) preempts (wounds) B; a younger B that wants a resource held by the older A waits
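The two rules can be summarized in a small decision function; the function names and return values below are illustrative only.

def wait_die(requester_ts, holder_ts):
    """Wait-die (non-preemptive) : older requesters wait, younger requesters die."""
    # Smaller timestamp = older transaction.
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Wound-wait (preemptive) : older requesters wound (preempt) the holder, younger ones wait."""
    return "wound holder" if requester_ts < holder_ts else "wait"

# The example from the figures : A has timestamp 7 (older), B has timestamp 11 (younger).
print(wait_die(requester_ts=7, holder_ts=11))    # 'wait'  : older A waits for younger B
print(wait_die(requester_ts=11, holder_ts=7))    # 'die'   : younger B kills itself
print(wound_wait(requester_ts=7, holder_ts=11))  # 'wound holder' : older A preempts B
print(wound_wait(requester_ts=11, holder_ts=7))  # 'wait'  : younger B waits for older A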

3.8.2 Deadlock Avoidance

• The decision is made dynamically : before allocating a resource, the resulting global system state is checked, and the allocation is made only if that state is safe.

• Because of the following drawbacks, deadlock avoidance can be impractical in distributed systems.

• Disadvantages :
1. Every site has to maintain the global state of the system (extensive overhead in storage and communication).
2. Different sites may determine (concurrently) that the state is safe, but the global state may be unsafe : verification of a safe global state by different sites must be mutually exclusive.
3. Checking every allocation imposes a large overhead (a distributed system may have a large number of processes and resources).

3.8.3 Deadlock Detection

• Principle of operation : Detection of a cycle in the WFG proceeds concurrently with normal operation.

• Requirements for deadlock detection and resolution algorithms :

Detection :
1. The algorithm must detect all existing deadlocks in finite time.
2. The algorithm should not report non-existent (phantom) deadlocks.

Resolution (recovery) : All existing wait-for dependencies in the WFG must be removed, i.e. roll back one or more of the deadlocked processes and give their resources to other blocked processes.

• Observation : Deadlock detection is the most popular strategy for handling deadlocks in distributed systems.

3.9 Models of Deadlocks AU : Dec.-22

• A distributed system allows different kinds of resource requests, and these are represented by different deadlock models : a process might require a single resource or a combination of resources for its execution.

3.9.1 The Single Resource Model

• In the single resource model, a process can have at most one outstanding request, for only one unit of a resource.

• Since the maximum out-degree of a node in a WFG for the single resource model is 1, the presence of a cycle in the WFG indicates that there is a deadlock.

3.9.2 The AND Model

• A set of processes is deadlocked when each process waits for a resource held by another process in the set (e.g. a data object in a database, an I/O resource on a server).

• Uses the AND condition.

• AND condition : A process that requires resources for execution can proceed only when it has acquired all of those resources.

• The condition for deadlock in a system using the AND condition is the existence of a cycle.

• Since in the single-resource model a process can have at most one outstanding request, the AND model is more general than the single-resource model.

3.9.3 The OR Model

• A set of processes is deadlocked when each process waits to receive messages (communication) from other processes in the set.

• Uses the OR condition.

• OR condition : A process that requires resources for execution can proceed when it has acquired at least one of those resources.

• The condition for deadlock in a system using the OR condition is the existence of a knot. A knot K consists of a set of nodes such that, for every node a in K, all nodes in K and only the nodes in K are reachable from node a.

• In the OR model, the presence of a knot indicates a deadlock. A small sketch of a knot test is given below.
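As an illustration of the knot condition (not part of the original text), the following sketch tests whether a blocked node belongs to a knot of the WFG, using the reachability definition above; the dict-based graph representation is an assumption for the example.

def reachable(wfg, start):
    """All nodes reachable from start (start itself is included only if it lies on a cycle)."""
    seen, stack = set(), list(wfg.get(start, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(wfg.get(node, []))
    return seen

def in_knot(wfg, p):
    """p lies in a knot iff every node reachable from p has exactly the same reachable set,
    and that set contains p itself (so no path leads out of the set)."""
    R = reachable(wfg, p)
    return p in R and all(reachable(wfg, q) == R for q in R)

# OR-model examples : P1 waits for P2 or P3, and so on.
deadlocked = {"P1": ["P2", "P3"], "P2": ["P1"], "P3": ["P1"]}
escapable  = {"P1": ["P2", "P3"], "P2": ["P1"], "P3": []}    # P3 is active, so no knot
print(in_knot(deadlocked, "P1"))  # True  : deadlock under the OR model
print(in_knot(escapable, "P1"))   # False : P1 may still be granted a reply by P3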

3.9.4 The AND-OR Model


•A generalization of the previous two models (OR model and AND model) is the
AND-OR model.

• In the AND-OR model, a request may specify any combination of and and or in the resource request. For example, in the AND-OR model, a request for multiple resources can be of the form x and (y or z).

• To detect the presence of deadlocks in such a model, there is no familiar construct of graph theory using the WFG.

• Since deadlock is a stable property, a deadlock in the AND-OR model can be detected by repeated application of the test for OR-model deadlock.

University Question

1. Name and explain the different types of deadlock models in a distributed system with the commonly used strategies to handle deadlocks, with a neat diagram. AU : Dec.-22, Marks 13

3.10 Chandy-Misra-Haas Algorithm for the AND Model

• This is considered an edge-chasing, probe-based algorithm. It is also considered one of the best deadlock detection algorithms for distributed systems.

• If a process makes a request for a resource which fails or times out, the process generates a probe message and sends it to each of the processes holding one or more of its requested resources.

• Each probe message contains the following information :
1. The id of the process that is blocked (the one that initiates the probe message);
2. The id of the process sending this particular version of the probe message; and
3. The id of the process that should receive this probe message.

• When a process receives a probe message, it checks to see whether it too is waiting for resources. If not, it is currently using the needed resource and will eventually finish and release the resource.

• If it is waiting for resources, it passes the probe message on to all processes it knows to be holding resources that it has itself requested.

• Before forwarding, the process modifies the probe message, changing the sender and receiver ids. If a process receives a probe message that it recognizes as one it initiated, it knows there is a cycle in the system and thus a deadlock.

• Fig. 3.10.1 shows a deadlock example.

• In this case P1 initiates the probe message, so all the messages shown carry P1 as the initiator. When a blocked process receives the probe message, it modifies the sender and receiver ids and forwards it to the processes holding the resources it has requested. Eventually, the probe message returns to process P1. Deadlock! A minimal sketch of this probe propagation is given after Fig. 3.10.1.

Fig. 3.10.1 Deadlock example : probe messages initiated by P1 chase the wait-for edges across site 1 and site 2 and eventually return to P1
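A minimal sketch of the probe mechanism, assuming each site knows its local wait-for edges; the waiting_on map, the network list and the function names are illustrative, not taken from the original text.

from collections import defaultdict

# waiting_on[p] = set of processes that p is waiting on (the AND-model WFG edges).
waiting_on = defaultdict(set)

def is_blocked(pid):
    return bool(waiting_on[pid])

def initiate_probe(initiator, network):
    """A blocked process sends probe(initiator, sender, receiver) along each wait-for edge."""
    for holder in waiting_on[initiator]:
        network.append((initiator, initiator, holder))

def handle_probe(probe, network):
    """Return True if the probe has come back to its initiator (deadlock detected)."""
    initiator, sender, receiver = probe
    if receiver == initiator:
        return True                        # the probe completed a cycle : deadlock
    if is_blocked(receiver):
        for holder in waiting_on[receiver]:
            network.append((initiator, receiver, holder))   # forward with new sender/receiver ids
    return False                           # an active process simply discards the probe

# Example : P1 -> P2 -> P3 -> P1, where each arrow means "waits on".
waiting_on["P1"], waiting_on["P2"], waiting_on["P3"] = {"P2"}, {"P3"}, {"P1"}
network = []
initiate_probe("P1", network)
deadlock = False
while network and not deadlock:
    deadlock = handle_probe(network.pop(0), network)
print(deadlock)                            # True : the probe returned to P1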

Advantages :
a) It is easy to implement.
b) Each probe message is of fixed length.

c) There is very little computation.


d) There is very little overhead.

e) There is no need to construct a graph, nor to pass graph information to other sites.

f) This algorithm does not report false (phantom) deadlocks.

g) There is no need for special data structures.

3.11 Chandy-Misra-Haas Algorithm for the OR Model

• The Chandy-Misra-Haas distributed deadlock detection algorithm for the OR model is based on the approach of diffusion computation. A blocked process determines whether it is deadlocked by initiating a diffusion computation.

• Two types of messages are used in a diffusion computation : query(i, j, k) and reply(i, j, k), denoting that they belong to a diffusion computation initiated by process Pi and are being sent from process Pj to process Pk.

• A blocked process initiates deadlock detection by sending query messages to all processes in its dependent set. If an active process receives a query or reply message, it discards it.

• When a blocked process Pk receives a query(i, j, k) message, it takes the following actions :

1) If this is the first query message received by Pk for the deadlock detection initiated by Pi (called the engaging query), then it propagates the query to all the processes in its dependent set and sets a local variable numk(i) to the number of query messages sent.
2) If this is not the engaging query, then Pk immediately returns a reply message, provided Pk has been continuously blocked since it received the corresponding engaging query. Otherwise, it discards the query.

• Process Pk maintains a boolean variable waitk(i) that denotes the fact that it has been continuously blocked since it received the last engaging query from process Pi. When a blocked process Pk receives a reply(i, j, k) message, it decrements numk(i) only if waitk(i) holds.

• A process sends a reply message in response to an engaging query only after it has received a reply to every query message it had sent out for this engaging query. The initiator process detects a deadlock when it receives reply messages to all the query messages it had sent out. A compact simulation of this diffusion computation is sketched below.
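The following is a compact, single-machine simulation of this diffusion computation under assumed OR-model wait-for sets; the class name ORProcess and the in-memory message queue are illustrative conveniences, not part of the original algorithm description.

from collections import deque

class ORProcess:
    """One process in the OR-model diffusion computation (illustrative sketch)."""
    def __init__(self, pid, dependent_set):
        self.pid = pid
        self.dependent = set(dependent_set)   # processes this one is waiting on (OR request)
        self.blocked = bool(dependent_set)
        self.num = {}        # num[i]  : outstanding queries for the computation initiated by Pi
        self.wait = {}       # wait[i] : continuously blocked since the engaging query from Pi
        self.engager = {}    # engager[i] : who sent us the engaging query of computation i

    def start_detection(self, net):
        """The blocked initiator sends query(i, i, k) to every process in its dependent set."""
        self.num[self.pid] = len(self.dependent)
        self.wait[self.pid] = True
        for k in self.dependent:
            net.append(("query", self.pid, self.pid, k))

    def on_query(self, i, j, net):
        if not self.blocked:
            return                            # an active process discards queries
        if not self.wait.get(i):
            # engaging query : remember the engager, propagate, count the queries sent
            self.wait[i] = True
            self.engager[i] = j
            self.num[i] = len(self.dependent)
            for k in self.dependent:
                net.append(("query", i, self.pid, k))
        else:
            net.append(("reply", i, self.pid, j))   # non-engaging query : reply at once

    def on_reply(self, i, j, net):
        if self.wait.get(i):
            self.num[i] -= 1
            if self.num[i] == 0:
                if i == self.pid:
                    print(self.pid, ": deadlock detected")
                else:
                    net.append(("reply", i, self.pid, self.engager[i]))

# Example : P1 waits on {P2, P3}, P2 and P3 each wait on {P1} - an OR-model knot.
procs = {p: ORProcess(p, d) for p, d in
         {"P1": {"P2", "P3"}, "P2": {"P1"}, "P3": {"P1"}}.items()}
net = deque()
procs["P1"].start_detection(net)
while net:
    kind, i, j, k = net.popleft()
    if kind == "query":
        procs[k].on_query(i, j, net)
    else:
        procs[k].on_reply(i, j, net)          # eventually prints "P1 : deadlock detected"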

3.12 Two Marks Questions with Answers

Q.1 Explain the term mutual exclusion. AU : Dec.-22

Ans. : Mutual exclusion ensures that concurrent processes make serialized access to a shared resource or data, so that only one process executes its critical section at any given time.

Q.2 What is deadlock ? AU : Dec.-22

Ans. : Deadlock is a problem of multiprogramming systems. Deadlock can be defined as the permanent blocking of a set of processes that compete for system resources.

Q.3 Name the two types of messages used in Ricart-Agrawala's algorithm. AU : May-22

Ans. : The two types of messages used by the Ricart-Agrawala algorithm are REQUEST and REPLY, and the communication channels are assumed to follow FIFO order. A site sends a REQUEST message to all other sites to get their permission to enter the critical section. A site sends a REPLY message to another site to give its permission to enter the critical section.

Q.4 What are the conditions for deadlock ? AU : May-22

Ans. : The conditions that should hold simultaneously for deadlock to occur are :
a) Mutual exclusion b) No preemption
c) Hold and wait d) Circular wait.

Q.5 What is mutual exclusion ?

Ans. : Mutual exclusion in a distributed system states that only one process is allowed to execute the critical section (CS) at any given time. In a distributed system, shared variables or a local kernel cannot be used to implement mutual exclusion.

Q.6 Which are the three basic approaches for implementing distributed mutual exclusion ?

Ans. : There are three basic approaches for implementing distributed mutual exclusion :
1. Token based approach
2. Non-token based approach
3. Quorum based approach

Q.7 What are the requirements of mutual exclusion algorithms ?

Ans. : Requirements of mutual exclusion algorithms are :
a. Freedom from deadlocks b. Freedom from starvation
c. Strict fairness d. Fault tolerance

Q.8 What are the performance metrics of a mutual exclusion algorithm ?

Ans. : The performance metrics are message complexity, synchronization delay, response time and system throughput.

Q.9 What is response time ?

Ans. : The time interval a request waits for its CS execution to be over after its request messages have been sent out.

Q.10 Which are the criteria for evaluating the performance of algorithms for mutual exclusion ?

Ans. : Criteria for evaluating the performance of algorithms for mutual exclusion are :
a. Bandwidth consumed, which is proportional to the number of messages sent in each entry and exit operation.
b. Client delay incurred by a process at each entry and exit operation.
c. Throughput of the system.

Q.11 What is the advantage if your server-side processing uses threads instead of a single process ?

Ans. : An important property of threads is that they can provide a convenient means of allowing blocking system calls without blocking the entire process in which the thread is running. This property makes threads particularly attractive to use in distributed systems, as it makes it much easier to express communication in the form of maintaining multiple logical connections at the same time.
Q.12 What is a phantom deadlock ?

Ans. : A deadlock that is detected but is not really a deadlock is called a phantom deadlock.

Q.13 What is a wait-for graph ?

Ans. : The state of process-resource interaction in distributed systems can be modeled by a bipartite directed graph called a resource allocation graph. The nodes of this graph are processes and resources of a system, and the edges of the graph depict assignments or pending requests. A pending request is represented by a request edge directed from the node of a requesting process to the node of the requested resource. Collapsing the resource nodes gives the wait-for graph (WFG), in which the nodes are processes and a directed edge from P1 to P2 means that P1 is blocked and waiting for P2 to release a resource.
Q.14 Explain the "wait-die" method.

Ans. : A non-preemptive approach. If a younger process is using the resource, then the older process waits. If an older process is holding the resource, the younger process kills itself. This forces the resource utilization graph to be directed from older to younger processes, making cycles impossible. This algorithm is known as the wait-die algorithm.

Q.15 List the deadlock handling strategies in distributed systems.

Ans. : There are three strategies for handling deadlocks, viz., deadlock prevention, deadlock avoidance, and deadlock detection.

Q.16 What do you mean by deadlock avoidance ?

Ans. : Deadlock avoidance depends on additional information about the long-term resource needs of each process. The system must be able to decide whether granting a resource is safe or not, and only make the allocation when it is safe. When a process is created, it must declare its maximum claim, i.e. the maximum number of units of each resource it may need. The resource manager can grant the request if the resources are available.
Q.17 Define deadlock detection in distributed systems.

Ans. : Deadlock detection requires examination of process-resource interactions for the presence of a cyclic wait.
