Concurrency Control in DBMS

Concurrency control protocols in DBMS are necessary to maintain data consistency when transactions run concurrently. The main problems that can occur are dirty reads, unrepeatable reads, phantom reads, and lost updates. Lock-based protocols use shared and exclusive locks to control access to data. The two-phase locking protocol requires transactions to acquire all locks in a growing phase and release locks in a shrinking phase. Strict two-phase locking retains exclusive locks until transaction commit or abort to avoid cascaded rollbacks. Validation-based protocols use timestamps and validation rules to detect conflicts after transaction completion rather than locking data during execution.


Concurrency Control in DBMS

When several transactions execute concurrently without any rules or protocols, problems can arise that harm the integrity of the data. These are known as concurrency control problems. To maintain consistency while transactions execute concurrently, a set of rules, known as concurrency control protocols, is designed.
What is concurrency control in DBMS?
A transaction is a single logical unit of work that can retrieve or modify the data of a database. Executing each transaction serially increases the waiting time of the other transactions and delays overall execution. Hence, to increase throughput and reduce waiting time, transactions are executed concurrently.
Example: Suppose 5 trains have to travel between two railway stations, A and B. If the trains are set in a row and only one train is allowed to move from A to B while the others wait for it to reach its destination, it will take a long time for all the trains to complete the journey. To reduce this time, all the trains should be allowed to move concurrently from A to B, while ensuring there is no risk of collision between them.
When several transactions execute simultaneously, there is a risk that data integrity is violated. Concurrency control in DBMS is the procedure of managing simultaneous transactions while ensuring their atomicity, isolation, consistency and serializability.
Concurrency Control Problems
The problems that arise when numerous transactions execute simultaneously in an uncontrolled manner are referred to as concurrency control problems.
Dirty Read Problem
The dirty read problem in DBMS occurs when a transaction reads the data that has been updated by another transaction
that is still uncommitted. It arises due to multiple uncommitted transactions executing simultaneously.
Example: Consider two transactions A and B performing read/write operations on a data DT in the database DB. The
current value of DT is 1000: The following table shows the read/write operations in A and B transactions

Transaction A reads the value of DT as 1000 and modifies it to 1500, which is stored in a temporary buffer. Transaction B reads DT as 1500 and commits, so the value of DT is permanently changed to 1500 in the database DB. Then a server error occurs in transaction A and it rolls back to its initial value, 1000. At this point the dirty read problem has occurred: B committed a value that was never valid.
Unrepeatable Read Problem
The unrepeatable read problem occurs when two or more different values of the same data are read during the read
operations in the same transaction.
Example: Consider two transactions A and B performing read/write operations on a data DT in the database DB. The
current value of DT is 1000: The following table shows the read/write operations in A and B transactions.

Transaction A and B initially read the value of DT as 1000. Transaction A modifies the value of DT from 1000 to 1500
and then again transaction B reads the value and finds it to be 1500. Transaction B finds two different values of DT in
its two different read operations.

Phantom Read Problem


In the phantom read problem, data is read through two different read operations in the same transaction. In the first
read operation, a value of the data is obtained but in the second operation, an error is obtained saying the data does not
exist.
Example: Consider two transactions A and B performing read/write operations on a data DT in the database DB. The
current value of DT is 1000: The following table shows the read/write operations in A and B transactions.

Transaction B initially reads the value of DT as 1000. Transaction A deletes the data DT from the database DB and
then again transaction B reads the value and finds an error saying the data DT does not exist in the database DB.

Lost Update Problem


The lost update problem arises when two different transactions update the same data item and one update overwrites the other, so one of the updates is lost.
Example: Consider two transactions A and B performing read/write operations on a data DT in the database DB. The
current value of DT is 1000: The following table shows the read/write operations in A and B transactions.
Transaction A initially reads the value of DT as 1000 and modifies it to 1500; then transaction B modifies the value to 1800. When transaction A reads DT again, it finds 1800, so the update made by transaction A has been lost.
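The lost-update interleaving above can be sketched in Python, using the same starting value as the example (DT = 1000). The `Database` class and `lost_update` function are illustrative names, not a real DBMS API: both transactions read DT before either writes, so the second write silently discards the first.

```python
# Hypothetical sketch of the lost-update anomaly; names are illustrative.

class Database:
    def __init__(self, value):
        self.DT = value

def lost_update(db):
    # Both transactions read DT before either writes it back.
    a_local = db.DT          # Transaction A reads 1000
    b_local = db.DT          # Transaction B also reads 1000 (now stale)
    db.DT = a_local + 500    # A writes 1500
    db.DT = b_local + 800    # B writes 1800 over it -- A's update is lost
    return db.DT

db = Database(1000)
print(lost_update(db))       # 1800: the +500 from A never survives
```

A locking or timestamp protocol would force one of the two writes to wait or abort instead of silently overwriting.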
Concurrency Control Protocols
To avoid concurrency control problems and to maintain consistency and serializability during the execution of
concurrent transactions some rules are made. These rules are known as Concurrency Control Protocols.
Introduction to Lock-Based Protocol
We can define a lock-based protocol in DBMS as a mechanism that prevents a transaction from reading or writing data until the necessary lock is obtained. The concurrency problem is solved by locking a data item for a particular transaction. A lock is a variable associated with a data item that specifies which operations are allowed on that item.

Types of Locks in DBMS


In DBMS lock-based protocols, there are two modes for locking data items: Shared Lock (lock-S) and Exclusive Lock (lock-X). Let's go through the two types of locks in detail:
Shared Lock (S): A shared lock allows read operations but disables write operations on a data item. Shared locks are also known as read-only locks, and several transactions may hold a shared lock on the same item at the same time. They are represented by 'S'.
Exclusive Lock (X): An exclusive lock allows both read and write operations on a data item. Only one transaction at a time can hold an exclusive lock on a given data item. They are represented by 'X'.
Lock Compatibility Matrix
A requested lock is granted only if it is compatible with every lock already held on the item: a shared lock is compatible with other shared locks, while an exclusive lock is compatible with nothing.

              Held: S      Held: X
Request S     granted      wait
Request X     wait         wait

The two methods outlined below can be used to convert between the locks:
1. Conversion from a read (shared) lock to a write (exclusive) lock is an upgrade.
2. Conversion from a write (exclusive) lock to a read (shared) lock is a downgrade.
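As a rough sketch, the compatibility rules can be encoded as a lookup table. `can_grant` is a hypothetical helper, not a real DBMS API; an actual lock manager would also track wait queues and upgrades.

```python
# Sketch of the shared/exclusive lock compatibility matrix.
# Keys are (held mode, requested mode) pairs.
COMPATIBLE = {
    ("S", "S"): True,    # two readers may share the item
    ("S", "X"): False,   # a writer must wait for readers
    ("X", "S"): False,   # a reader must wait for the writer
    ("X", "X"): False,   # writers are mutually exclusive
}

def can_grant(requested, held_modes):
    """A requested lock is granted only if compatible with every held lock."""
    return all(COMPATIBLE[(held, requested)] for held in held_modes)

print(can_grant("S", ["S", "S"]))  # True: shared locks coexist
print(can_grant("X", ["S"]))       # False: must wait for the reader
```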
Types of Lock-Based Protocols
There are basically four lock based protocols in DBMS namely Simplistic Lock Protocol, Pre-claiming Lock
Protocol, Two-phase Locking Protocol, and Strict Two-Phase Locking Protocol. Let's go through each of these
lock-based protocols in detail.
Simplistic Lock Protocol
The simplistic method is defined as the most fundamental method of securing data during a transaction. Simple
lock-based protocols allow all transactions to lock the data before inserting, deleting, or updating it. After the
transaction is completed, the data item will be unlocked.
Pre-Claiming Lock Protocol
Pre-claiming Lock Protocols are known to assess transactions to determine which data elements require locks. Prior
to actually starting the transaction, it asks the Database management system for all of the locks on all of the data
items. The pre-claiming protocol permits the transaction to commence only if all of the locks are obtained. Whenever the transaction finishes, the locks are released. If any lock is not granted, the transaction rolls back and waits until all of the locks can be granted.

Two-phase Locking Protocol


A transaction is considered to follow the two-phase locking (2PL) protocol if all of its locking and unlocking operations occur in two phases, known as the growing and shrinking phases.
Growing Phase: In this phase, the transaction can acquire new locks on data items, but none of its locks can be released.
Shrinking Phase: In this phase, existing locks can be released, but no new locks can be obtained.
Two-phase locking guarantees conflict serializability, but it has drawbacks too: it limits concurrency, raises transaction processing costs, and may have unintended consequences. One such consequence is the likelihood of deadlocks.
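The two-phase rule itself is simple to state in code. The sketch below is an illustration, not a real lock manager: once the transaction releases any lock (entering the shrinking phase), any further lock request is rejected as a protocol violation.

```python
# Minimal sketch of the two-phase rule for a single transaction.

class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False   # becomes True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after unlock")
        self.locks.add(item)

    def unlock(self, item):
        self.shrinking = True    # the growing phase is over for good
        self.locks.discard(item)

t = TwoPhaseTxn()
t.lock("A"); t.lock("B")   # growing phase
t.unlock("A")              # shrinking phase begins
try:
    t.lock("C")
except RuntimeError as e:
    print(e)               # prints "2PL violation: lock after unlock"
```

Strict 2PL would additionally keep every exclusive lock in `self.locks` until commit or abort.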

Strict Two-Phase Locking Protocol


In DBMS, cascading rollbacks are avoided with the strict two-phase locking protocol. This protocol requires not only two-phase locking but also the retention of all exclusive locks until the transaction commits or aborts. Note that strict two-phase locking can still suffer from deadlock.
It ensures that if one transaction modifies data, no other transaction can read it until the first transaction commits. The majority of database systems use the strict two-phase locking protocol.
Starvation and Deadlock
When a transaction must wait an unlimited period for a lock, it is referred to as starvation. The following are the causes of starvation:
When the scheme for transactions waiting on a locked item is not correctly controlled.
When a resource leak occurs.
When the same transaction is repeatedly chosen as a victim.
Starvation can be prevented as follows. Random selection of processes for resource or processor allocation should be avoided, since it encourages starvation. The resource allocation priority scheme should include ideas like aging, in which a process's priority rises the longer it waits. This prevents starvation.
Deadlock: A deadlock occurs when two or more processes form a circular chain in which each process waits for a resource held by the next process in the chain.
Introduction to Validation Based Protocol in DBMS
Validation based protocol in DBMS is a concurrency control technique that works on validation rules and timestamps. It is also known as the optimistic concurrency control technique. The protocol consists of three phases for managing concurrent transactions: a read phase, a validation phase, and a write phase. The optimistic approach assumes little interference among concurrent transactions, so no checking takes place while the transactions are executing. This protocol is preferable for short transactions. It uses a local copy of the data for the rollback mechanism, which handles the rare conflict scenarios and avoids cascading rollbacks.
How Validation Based Protocol works in DBMS?
The validation based protocol works based upon the following three phases:
 Read and Execution Phase: In the read phase, transaction T1 reads the values of multiple data items and executes its operations, writing results to temporary local variables instead of the database.
 Validation Phase: The validation phase is the key phase of the protocol. It validates the temporary values against the actual values in the database and checks the serializability condition.
 Write Phase: The write phase writes the data validated in the validation phase to the database. The protocol performs a rollback if the validation phase fails.
Various Timestamps Associated
Next, we will discuss the timestamps associated with each phase of the validation protocol. Three timestamps control the serializability of the validation based protocol in the database management system:
Start(T1): The timestamp at which T1 begins its read phase.
Validation(T1): The timestamp at which T1 completes the read phase and starts the validation phase.
Finish(T1): The timestamp at which T1 completes the write phase.
To manage concurrency between transactions T1 and T2 with TS(T1) < TS(T2), where TS is the timestamp, the validation test for T2 requires that one of the following conditions holds:
Finish(T1) < Start(T2)
 T1 completes all of its execution before T2 starts any operation.
 This trivially maintains serializability.
Start(T2) < Finish(T1) < Validation(T2)
 T1 finishes after T2 has started but before T2 validates. This scenario permits serializable concurrent execution.
 The transactions may access the database concurrently while the protocol conditions are validated.
The validation based protocol relies on these timestamps to achieve serializability. The validation phase is the deciding phase in which the transaction is committed or rolled back. The timestamp used in the test is the validation timestamp, i.e., TS(T1) = Validation(T1).
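The two validation conditions can be written as a small predicate. In this hedged sketch, each transaction is represented by its (start, validation, finish) timestamps as plain integers, and the read/write-set intersection check that accompanies the second condition is omitted for brevity.

```python
# Illustrative sketch of the validation test; not a complete OCC validator.

def validation_test(t1, t2):
    """Return True if T2 may commit given already-validated T1, TS(T1) < TS(T2)."""
    start1, validate1, finish1 = t1
    start2, validate2, finish2 = t2
    # Condition 1: T1 finishes before T2 starts (effectively serial).
    if finish1 < start2:
        return True
    # Condition 2: T1 finishes between T2's start and T2's validation.
    # (A full validator would also require T1's write set to be disjoint
    # from T2's read set -- omitted in this sketch.)
    if start2 < finish1 < validate2:
        return True
    return False

print(validation_test((1, 3, 4), (5, 7, 8)))  # True: Finish(T1) < Start(T2)
print(validation_test((1, 3, 6), (2, 7, 8)))  # True: Start(T2) < Finish(T1) < Validation(T2)
print(validation_test((1, 3, 9), (2, 7, 8)))  # False: neither condition holds
```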
How does the Validation Protocol Work?
We will discuss an example scenario to demonstrate how the validation protocol works

The transaction table in the example shows transactions T1 and T2 and represents the schedule produced using the validation protocol.
The concurrent execution starts with T1 performing Read(A), where A is a numeric data item in the database. Shortly after, transaction T2 also reads the same data item A. Transaction T2 then performs an arithmetic operation, subtracting the constant 40 from A, represented as A = A - 40 in the transaction table. Next, T2 reads another data item B with Read(B) and immediately performs an arithmetic operation on it, adding the constant 80, represented as B = B + 80.
In the next step of the concurrent execution, T1 reads the variable B with Read(B). Now the validation based protocol comes into action in transaction T1: it checks that the start timestamp of T2 is less than the finish timestamp of T1, and that the finish timestamp of T1 is less than the validation timestamp of T2.
Similarly, the protocol validates the timestamps for transaction T2. In the example shown in the table, both validations succeed based on the timestamp condition, and as the concluding operations, transaction T2 performs its writes using the Write(A) and Write(B) statements.

Timestamp Ordering Protocol
What is Timestamp Ordering Protocol?
 The timestamp ordering protocol maintains the order of transactions based on their timestamps.
 A timestamp is a unique identifier created by the DBMS when a transaction enters the system. It can be based on the system clock or on a logical counter maintained in the system.
 Timestamps identify the older transactions and give them higher priority than newer transactions. This ensures that no transaction is left pending for a long period of time.
 The protocol also maintains a timestamp for the last read and the last write on each data item.
 For example, say an old transaction T1 has timestamp TS(T1), and a new transaction T2 enters the system and is assigned timestamp TS(T2). Here TS(T1) < TS(T2), so T1 has the higher priority because its timestamp is smaller. This is how the timestamp based protocol maintains the serializability order.
How does a timestamp ordering protocol work?
Let's see how a timestamp ordering protocol works in a DBMS. Say there is a data item A in the database.
W_TS(A) is the largest timestamp of any transaction that executed Write(A) successfully.
R_TS(A) is the largest timestamp of any transaction that executed Read(A) successfully.
1. Whenever a transaction Tn issues a Write(A) operation, the protocol checks the following conditions:
 If R_TS(A) > TS(Tn) or W_TS(A) > TS(Tn), then abort and roll back transaction Tn and reject the Write(A) operation.
 Otherwise (R_TS(A) <= TS(Tn) and W_TS(A) <= TS(Tn)), execute the Write(A) operation of Tn and set W_TS(A) to TS(Tn).
2. Whenever a transaction Tn issues a Read(A) operation, the protocol checks the following conditions:
 If W_TS(A) > TS(Tn), then abort and roll back transaction Tn and reject the Read(A) operation.
 If W_TS(A) <= TS(Tn), then execute the Read(A) operation of Tn and update R_TS(A) to max(R_TS(A), TS(Tn)).
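The read and write rules above translate almost directly into code. In this sketch, `Item` holds R_TS and W_TS for one data item, and the functions return False where the protocol would abort and roll back the transaction; the names are illustrative.

```python
# Sketch of basic timestamp ordering for a single data item.

class Item:
    def __init__(self):
        self.r_ts = 0   # largest timestamp that read the item
        self.w_ts = 0   # largest timestamp that wrote the item

def write(item, ts):
    """Apply the Write(A) rule for a transaction with timestamp ts."""
    if item.r_ts > ts or item.w_ts > ts:
        return False            # abort and roll back the transaction
    item.w_ts = ts
    return True

def read(item, ts):
    """Apply the Read(A) rule for a transaction with timestamp ts."""
    if item.w_ts > ts:
        return False            # abort: a younger transaction already wrote
    item.r_ts = max(item.r_ts, ts)
    return True

a = Item()
print(write(a, 5))   # True: W_TS(A) becomes 5
print(read(a, 3))    # False: transaction 3 is older than the writer (5)
print(read(a, 7))    # True: R_TS(A) becomes 7
print(write(a, 6))   # False: transaction 7 has already read A
```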
Advantages of Timestamp based protocol
 Schedules managed using the timestamp based protocol are serializable, just as with the two-phase locking protocol.
 Since older transactions are given priority, no transaction waits indefinitely, which makes this protocol free from deadlock.

Explain Log based recovery method.


Log based recovery
The most widely used structure for recording database modifications is the log.
The log is a sequence of log records, recording all the update activities in the database.
In short, the transaction log is a journal, or simply a data file, which contains the history of all transactions performed, and is maintained on stable storage.
Since the log contains a complete record of all database activity, the volume of data stored in the log may become unreasonably large.
For log records to be useful for recovery from system and disk failures, the log must reside on stable storage.
A log record contains:
1. Start of transaction
2. Transaction-id
3. Record-id
4. Type of operation (insert, update, delete)
5. Old value, new value
6. End of transaction, that is, committed or aborted.
All such files are maintained by DBMS itself. Normally these are sequential files.
Recovery has two factors Rollback (Undo) and Roll forward (Redo).
When transaction Ti starts, it registers itself by writing a <Ti start> log record.
Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X before the write, and V2 is the value to be written to X.
 The log record notes that Ti has performed a write on data item Xj.
 Xj had value V1 before the write, and will have value V2 after the write.
When Ti finishes its last statement, the log record <Ti commit> is written.
Two approaches are used in log based recovery
1. Deferred database modification
2. Immediate database modification
Log based Recovery Techniques
Once a failure occurs, DBMS retrieves the database using the back-up of database and
transaction log. Various log based recovery techniques used by DBMS are as per below:
1. Deferred Database Modification
2. Immediate Database Modification
Both of the techniques use transaction logs. These techniques are explained in following
sub-sections.
Explain Deferred Database Modification log based recovery method.
Concept
Updates (changes) to the database are deferred (or postponed) until the transaction
commits.
During the execution of transaction, updates are recorded only in the transaction log and
in buffers. After the transaction commits, these updates are recorded in the database.
When failure occurs
If the transaction has not committed, it has not affected the database, so no undo operations are needed; just restart the transaction.
If the transaction has committed, its updates may not yet have been applied to the database, so redo the updates of the transaction.
Transaction Log
In this technique, the transaction log is used in the following ways:
Transaction T starts by writing <T start> to the log.
Any update is recorded as <T, X, V>, where V indicates the new value for data item X. There is no need to preserve the old value of the changed data item. Also, V is not written to X in the database immediately; the write is deferred.
Transaction T commits by writing <T commit> to the log. Once this is entered in the log, the actual updates are applied to the database.
If a transaction T aborts, its log records are ignored, and no updates are applied to the database.
Example
Consider the following two transactions, T0 and T1 given in figure, where T0 executes
before T1. Also consider that initial values for A, B and C are 500, 600 and 700
respectively.
Transaction T0          Transaction T1
Read (A)                Read (C)
A = A - 100             C = C - 200
Write (A)               Write (C)
Read (B)
B = B + 100
Write (B)
The following figure shows the transaction log for above two transactions at three
different instances of time.
Time Instance (a)    Time Instance (b)    Time Instance (c)
<T0 start>           <T0 start>           <T0 start>
<T0, A, 400>         <T0, A, 400>         <T0, A, 400>
<T0, B, 700>         <T0, B, 700>         <T0, B, 700>
                     <T0 commit>          <T0 commit>
                     <T1 start>           <T1 start>
                     <T1, C, 500>         <T1, C, 500>
                                          <T1 commit>
If failure occurs at:
1. Time instance (a): no redo actions are required, as no transaction has committed.
2. Time instance (b): as transaction T0 has already committed, it must be redone.
3. Time instance (c): as transactions T0 and T1 have both committed, they must be redone.
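The redo-only recovery of deferred modification can be sketched over the log above. Log records here are plain Python tuples in the <T, X, V> form; this is an illustration, not a production recovery manager.

```python
# Sketch of deferred-modification recovery: only committed transactions
# are redone; uncommitted transactions are simply ignored.

def recover_deferred(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            txn, item, new_value = rec[1], rec[2], rec[3]
            db[item] = new_value        # REDO the deferred update
    return db

# The log at time instance (b): T0 committed, T1 still active.
log = [
    ("start", "T0"),
    ("write", "T0", "A", 400),
    ("write", "T0", "B", 700),
    ("commit", "T0"),
    ("start", "T1"),
    ("write", "T1", "C", 500),   # T1 never committed: ignored
]
db = {"A": 500, "B": 600, "C": 700}   # initial values from the example
print(recover_deferred(log, db))      # {'A': 400, 'B': 700, 'C': 700}
```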
Explain Immediate Database Modification log based recovery method.
Concept
Updates (changes) to the database are applied immediately as they occur, without waiting for the transaction to reach its commit point.
These updates are also recorded in the transaction log.
It is possible here that updates of an uncommitted transaction are written to the database, and other transactions can access these updated values.
When failure occurs
If the transaction has not committed, it may have modified the database, so undo the updates of the transaction.
If the transaction has committed, its updates may still not all have reached the database, so redo the updates of the transaction.
Transaction Log
In this technique, the transaction log is used in the following ways:
Transaction T starts by writing <T start> to the log.
Any update is recorded as <T, X, Vold, Vnew>, where Vold indicates the original value of data item X and Vnew indicates the new value for X. Since an undo operation may be required, the old value of the changed data item must be preserved.
Transaction T commits by writing <T commit> to the log.
If a transaction T aborts, the transaction log is consulted, and the required undo operations are performed.
Example
Again, consider the two transactions, T0 and T1, given in figure, where T0 executes
before T1.
Also consider that initial values for A, B and C are 500, 600 and 700 respectively.
The following figure shows the transaction log for above two transactions at three different
instances of time. Note that, here, transaction log contains original values also along with
new updated values for data items.
If failure occurs at:
1. Time instance (a): undo transaction T0, as it has not committed, and restore A and B to 500 and 600 respectively.
2. Time instance (b): undo transaction T1, restoring C to 700, and redo transaction T0, setting A and B to 400 and 700 respectively.
3. Time instance (c): redo transactions T0 and T1, setting A and B to 400 and 700 respectively, and C to 500.
Time Instance (a)       Time Instance (b)       Time Instance (c)
<T0 start>              <T0 start>              <T0 start>
<T0, A, 500, 400>       <T0, A, 500, 400>       <T0, A, 500, 400>
<T0, B, 600, 700>       <T0, B, 600, 700>       <T0, B, 600, 700>
                        <T0 commit>             <T0 commit>
                        <T1 start>              <T1 start>
                        <T1, C, 700, 500>       <T1, C, 700, 500>
                                                <T1 commit>
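Immediate-modification recovery needs both passes: an undo pass (scanning backwards, restoring old values of uncommitted transactions) and a redo pass (scanning forwards, applying new values of committed transactions). The sketch below uses <T, X, Vold, Vnew> records as tuples; the failure state corresponds to time instance (b), and the names are illustrative.

```python
# Sketch of immediate-modification recovery: undo then redo.

def recover_immediate(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # Undo pass: scan backwards, restore old values of uncommitted txns.
    for rec in reversed(log):
        if rec[0] == "write" and rec[1] not in committed:
            txn, item, old, new = rec[1], rec[2], rec[3], rec[4]
            db[item] = old
    # Redo pass: scan forwards, apply new values of committed txns.
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            txn, item, old, new = rec[1], rec[2], rec[3], rec[4]
            db[item] = new
    return db

log = [
    ("start", "T0"),
    ("write", "T0", "A", 500, 400),
    ("write", "T0", "B", 600, 700),
    ("commit", "T0"),
    ("start", "T1"),
    ("write", "T1", "C", 700, 500),   # T1 uncommitted at failure
]
db = {"A": 400, "B": 700, "C": 500}   # database state on disk at failure
print(recover_immediate(log, db))     # {'A': 400, 'B': 700, 'C': 700}
```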

Explain system recovery procedure with Checkpoint record concept.

Problems with Deferred & Immediate Updates


Searching the entire log is time-consuming.
It is possible to redo transactions whose updates have already been stored in the database.
Checkpoint
 A point of synchronization between the database and the transaction log file.
 Specifies that all operations executed before this point are done correctly and stored safely.
 At this point, all the buffers are forcefully written to secondary storage.
Checkpoints are scheduled at predetermined time intervals.
 They are used to limit:
1. The size of the transaction log file,
2. The amount of searching, and
3. The subsequent processing required on the transaction log file.
When failure occurs
 Find the nearest checkpoint.
 If a transaction committed before this checkpoint, ignore it.
 If a transaction is active at or after this point and has committed before the failure, redo that transaction.
 If a transaction is active at or after this point and has not committed, undo that transaction.
Example
Consider the transactions given in following figure. Here, Tc indicates checkpoint, while
Tf indicates failure time.
Here, at failure time -
1. Ignore the transaction T1 as it has already been committed before checkpoint.
2. Redo transaction T2 and T3 as they are active at/after checkpoint, but have
committed before failure.
3. Undo transaction T4 as it is active after checkpoint and has not committed.

[Figure: timeline with checkpoint Tc and failure Tf. T1 commits before Tc; T2 and T3 are active at Tc and commit before Tf; T4 is active after Tc and still uncommitted at Tf.]
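The classification rule can be sketched as follows, using the T1-T4 example. Each transaction is a (name, start time, commit time or None) tuple, and the integer times are illustrative.

```python
# Sketch of how a checkpoint narrows recovery work after a failure.

def classify(txns, checkpoint, failure):
    """Split transactions into (ignore, redo, undo) lists."""
    ignore, redo, undo = [], [], []
    for name, start, commit in txns:
        if commit is not None and commit < checkpoint:
            ignore.append(name)        # safely on disk before the checkpoint
        elif commit is not None and commit < failure:
            redo.append(name)          # committed after the checkpoint
        else:
            undo.append(name)          # still active at the failure
    return ignore, redo, undo

# (name, start, commit); T4 has not committed when the failure happens.
txns = [("T1", 1, 3), ("T2", 2, 7), ("T3", 6, 8), ("T4", 6, None)]
print(classify(txns, checkpoint=5, failure=9))
# (['T1'], ['T2', 'T3'], ['T4'])
```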

Explain Shadow Paging Technique.


Concept
Shadow paging is an alternative to transaction-log based recovery techniques.
Here, database is considered as made up of fixed size disk blocks, called pages. These
pages are mapped to physical storage using a table, called page table.
The page table is indexed by a page number of the database. The information about
physical pages, in which database pages are stored, is kept in this page table.
This technique is similar to paging technique used by Operating Systems to allocate
memory, particularly to manage virtual memory.
The following figure depicts the concept of shadow paging.
Execution of Transaction
During the execution of the transaction, two page tables are maintained.
1. Current Page Table: Used to access data items during transaction execution.
2. Shadow Page Table: Original page table, and will not get modified during
transaction execution.
Whenever any page is about to be written for the first time:
1. A copy of the page is made onto a free page,
2. The current page table is made to point to the copy,
3. The update is made on this copy.
At the start of the transaction, both tables are identical and point to the same pages.
The shadow page table is never changed, and is used to restore the database in case any failure occurs. The current page table entries may change during transaction execution, as it records all updates made to the database.
When the transaction completes, the current page table becomes the shadow page table; at this moment the transaction is considered to have committed.
The following figure explains working of this technique.
As shown in this figure, two pages - page 2 & 5 - are affected by a transaction and copied
to new physical pages. The current page table points to these pages.
The shadow page table continues to point to old pages which are not changed by the
transaction. So, this table and pages are used for undoing the transaction.
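The copy-on-write behaviour can be sketched with two dictionaries acting as page tables, mirroring the page 2 / page 5 example; all names here are illustrative, not a real storage engine.

```python
# Sketch of shadow paging: the shadow table is frozen, and the current
# table is repointed to a fresh copy of each page on its first write.

pages = {1: "P1", 2: "P2-old", 3: "P3", 4: "P4", 5: "P5-old", 6: "P6"}
shadow_table = {i: i for i in range(1, 7)}   # page number -> physical page
current_table = dict(shadow_table)           # starts identical to shadow
next_free = 7                                # next free physical page

def write_page(page_no, data):
    global next_free
    if current_table[page_no] == shadow_table[page_no]:
        # First write: copy the page and repoint the current table.
        pages[next_free] = pages[current_table[page_no]]
        current_table[page_no] = next_free
        next_free += 1
    pages[current_table[page_no]] = data     # update only the copy

write_page(2, "P2-new")
write_page(5, "P5-new")
print(pages[shadow_table[2]], pages[current_table[2]])   # P2-old P2-new
# Commit = install current_table as the new shadow table;
# abort/failure = discard current_table and keep the shadow table.
```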
[Figure: before the transaction, the current and shadow page tables both point to pages 1-6. After the transaction updates pages 2 and 5, the current page table points to new copies, Page 2 (new) and Page 5 (new), while the shadow page table still points to Page 2 (old), Page 5 (old), and the unchanged pages.]

Advantages
No overhead of maintaining a transaction log.
Recovery is quite fast, as no redo or undo operations are required.
Disadvantages
Copying the entire page table is very expensive.
Data become scattered or fragmented.
After each transaction, free pages need to be collected by a garbage collector.
It is difficult to extend this technique to allow concurrent transactions.
What is deadlock? Explain wait-for-graph. When it occurs?
OR
Define deadlock. Explain wait-for-graph. Explain different conditions that lead
to deadlock.
Deadlock: A deadlock is a condition in which two or more transactions are executing and each is waiting for another to finish, but none of them ever finishes. All the transactions wait indefinitely and not a single transaction completes.

Wait-for-graph

[Figure: Transaction 1 holds Table 1 and waits for Table 2; Transaction 2 holds Table 2 and waits for Table 1.]
In the figure there are two transactions, 1 and 2, and two tables, Table 1 and Table 2. Transaction 1 holds Table 1 and waits for Table 2; Transaction 2 holds Table 2 and waits for Table 1. Table 1 is wanted by Transaction 2 but held by Transaction 1, and in the same way Table 2 is wanted by Transaction 1 but held by Transaction 2. Until each gets the table it is waiting for, neither can proceed. A graph drawn with an edge from each waiting transaction to the transaction holding the resource it wants is called a wait-for graph; a cycle in this graph means the transactions are deadlocked.
When deadlock occurs
A deadlock occurs when two separate processes compete for resources held by one another.
Deadlocks can occur in any concurrent system where processes wait for each other and a cyclic chain can arise, with each process waiting for the next one in the chain.
Deadlock can occur in any system that satisfies the following four conditions:
1. Mutual Exclusion: Only one process at a time can use a resource; each resource is either assigned to one process or is available.
2. Hold and Wait: Processes already holding resources may request new resources.
3. No Preemption: Only the process holding a resource can release it, voluntarily, after it has completed its task; previously granted resources cannot be forcibly taken away from a process.
4. Circular Wait: Two or more processes form a circular chain in which each process requests a resource held by the next process in the chain.
Explain deadlock detection and recovery.
Resource-Allocation Graph
A resource-allocation graph consists of a set of vertices V and a set of edges E.
 V is partitioned into two types:
1. P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
2. R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge: a directed edge Pi → Rj means process Pi requests resource Rj.
Assignment edge: a directed edge Rj → Pi means resource Rj is held by process Pi.
If the graph contains no cycle, there is no deadlock. If the graph contains a cycle, there is a deadlock.

[Figure: two resource-allocation graphs over processes P1, P2, P3 and resources R1-R4. In the first, there is no cycle (circular chain), so no deadlock. In the second, the cycle (P2, R3, P3, R4, P2) is created, so there is a deadlock.]
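Deadlock detection is just cycle detection in the wait-for graph. In this sketch the graph is a dictionary mapping each transaction to the transactions it is waiting for, and a depth-first search looks for a back edge; the names are illustrative.

```python
# Sketch: deadlock detection as cycle detection in a wait-for graph.
# An edge runs from a waiting transaction to the one holding the lock.

def has_cycle(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True                 # back edge -> cycle -> deadlock
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

print(has_cycle({"T1": ["T2"], "T2": []}))        # False: no cycle
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))    # True: deadlock
```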

When a deadlock is detected, the system must recover from it.
The most common solution is to roll back one or more transactions to break the deadlock. Choosing which transaction to abort is known as victim selection.
In the graph above, the transactions P2 and P3 involved in the cycle are deadlocked. To remove the deadlock, one of these two transactions must be rolled back. We should roll back the transaction that incurs the minimum cost.
When a deadlock is detected, the choice of which transaction to abort can be made using the following criteria: the transaction that holds the fewest locks, the transaction that has done the least work, or the transaction that is farthest from completion.
Explain deadlock prevention methods.
OR
Explain methods to prevent deadlock.
Deadlock prevention
A deadlock prevention protocol ensures that the system will never enter a deadlock state. Some prevention strategies:
 Require that each transaction locks all its data items before it begins execution (predeclaration).
 Impose a partial ordering on all data items and require that a transaction lock data items only in the order specified by the partial ordering.
 The following schemes use transaction timestamps for the sake of deadlock prevention alone.
1. Wait-die scheme — non-preemptive
• If an older transaction requests a resource held by a younger transaction, the older transaction is allowed to wait until the resource is available.
• If a younger transaction requests a resource held by an older transaction, the younger transaction is killed (it dies).
2. Wound-wait scheme — preemptive
• If an older transaction requests a resource held by a younger transaction, the older transaction wounds the younger one: the younger transaction is aborted and releases the resource.
• If a younger transaction requests a resource held by an older transaction, the younger transaction is allowed to wait until the older transaction releases it.
3. Timeout-based schemes
• A transaction waits for a lock only for a specified amount of time. After that, the wait times out and the transaction is rolled back, so deadlocks never occur.
• Simple to implement, but it is difficult to determine a good value for the timeout interval.

Summary (O = older transaction, Y = younger transaction):
                                  Wait-die    Wound-wait
O needs a resource held by Y      O waits     Y dies
Y needs a resource held by O      Y dies      Y waits
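The decision table above can be sketched as two small functions. A smaller timestamp means an older transaction; the return strings are illustrative labels, not a real DBMS API.

```python
# Sketch of the two timestamp-based deadlock prevention schemes.

def wait_die(requester_ts, holder_ts):
    """Non-preemptive: older waits, younger dies."""
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Preemptive: older wounds (aborts) the younger holder, younger waits."""
    return "wound holder" if requester_ts < holder_ts else "wait"

# Older transaction (ts=1) requests a lock held by a younger one (ts=5):
print(wait_die(1, 5), "/", wound_wait(1, 5))   # wait / wound holder
# Younger transaction (ts=5) requests a lock held by an older one (ts=1):
print(wait_die(5, 1), "/", wound_wait(5, 1))   # die / wait
```

Both schemes avoid cycles because all waiting happens in one direction of the timestamp order.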
Lock Based Protocol in DBMS

The database management system (DBMS) stores data items that can be related to one another and can be altered at any point. There are instances where more than one user may attempt to access the same data item simultaneously, resulting in concurrency.
As a result, concurrency must be handled in order to manage the concurrent processing of transactions across the databases in the picture. The lock-based protocol in DBMS is an example of such an approach.
