OS by Arifur Refat
Chapter 1
In summary, operating systems act as the bridge between hardware and software, allowing us
to navigate our devices and access essential programs
Types of Operating Systems: There are several types of operating systems, which are —
⸰ Batch Operating System
⸰ Multi-Programming System
⸰ Multi-Processing System
⸰ Multi-Tasking Operating System
⸰ Time-Sharing Operating System
⸰ Distributed Operating System
⸰ Network Operating System
⸰ Real-Time Operating System
Q. Write the advantages and disadvantages of Time Sharing Systems and Distributed systems? (18-19) 4
Q. Write the advantages and disadvantages of Multiprocessor Systems and Distributed systems? (17-18) 3
1. Batch Operating System: This type of operating system does not interact with the computer directly. An operator takes similar jobs having the same requirements and groups them into batches. It is the operator's responsibility to sort jobs with similar needs.
Advantages:
• Multiple users can share the batch system.
• The idle time for the batch system is very low.
• It is easy to manage large, repetitive work in batch systems.
Disadvantages:
• Batch systems are hard to debug.
• The other jobs will have to wait for an unknown time if any job fails.
• It is sometimes costly.
2. Multi-Programming Operating System:
Advantages:
• Multiprogramming increases the throughput of the system.
• It helps in reducing the response time.
Disadvantages:
• There is no facility for user interaction with the system during execution.
3. Multi-Processing Operating System:
Advantages:
• It increases the throughput of the system.
• As it has several processors, if one processor fails, we can proceed with another processor.
Disadvantages:
• Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
4. Multi-Tasking Operating System:
Advantages:
• Multiple programs can be executed simultaneously in a multi-tasking operating system.
• It comes with proper memory management.
Disadvantages:
• The system gets heated when heavy programs are run multiple times.
5. Time-Sharing Operating Systems: Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time, as they all use a single system. These systems are also known as multitasking systems. The tasks can come from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches to the next task.
Advantages:
• Each task gets an equal opportunity.
• CPU idle time can be reduced.
• Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU, memory, and peripherals, reducing the cost of hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the waiting time for their turn to use the computer. This increased productivity translates to more work getting done in less time.
Disadvantages:
• High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for scheduling and context switching.
• Complexity: Time-sharing systems are complex and require advanced software to manage multiple users simultaneously. This complexity increases the chance of bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time-sharing systems require careful management of user access, authentication, and authorization to ensure the security of data and software.
6. Distributed Operating System:
Advantages:
• Failure of one system will not affect the other network communication, as all systems are independent of each other.
• Since resources are shared, computation is highly fast and durable.
• Load on the host computer reduces.
• These systems are easily scalable, as many systems can easily be added to the network.
• Delay in data processing reduces.
Disadvantages:
• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not well-defined yet.
• These types of systems are not readily available, as they are very expensive. Not only that, the underlying software is highly complex and not well understood yet.
7. Network Operating System: These systems run on a server and provide the
capability to manage data, users, groups, security, applications, and other networking
functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network.
Advantages:
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware upgrades are easily integrated into the system.
• Server access is possible remotely from different locations and types of systems.
Disadvantages:
• Servers are costly.
• Maintenance and updates are required regularly.
• Users have to depend on a central location for most operations.
8. Real-Time Operating System: These types of OSs serve real-time systems. The time
interval required to process and respond to inputs is very small. This time interval is called
response time.
Real-time systems are used when there are very strict time requirements, such as in missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems:
• Hard Real-Time Systems: Hard Real-Time OSs are meant for applications where
time constraints are very strict and even the shortest possible delay is not acceptable.
These systems are built for life-saving tasks, like automatic parachutes or airbags, which are required to be readily available in case of an accident. Virtual memory is rarely found in these systems.
• Soft Real-Time Systems: These OSs are for applications where time-constraint is
less strict.
Advantages:
• Maximum utilization of devices and the system, thus more output from all the resources.
• These types of systems are error-free.
• Focus is on running applications, with less importance given to applications waiting in the queue.
• The time assigned for shifting tasks in these systems is very small.
• Memory allocation is best managed in these types of systems.
Disadvantages:
• Very few tasks run at the same time.
• The system resources are expensive.
• The algorithms are very complex and difficult for the designer to write.
• Sometimes the system resources are not so good.
• It is not good to set thread priority, as these systems are very rarely prone to switching tasks.
Chapter 2
Chapter 3
Q. What is process? Explain process state with proper diagram. (17-18 & 18-19) 5
Process: A program loaded into memory and executing is called a process. A process is more
than the program code, which is sometimes known as the text section. It also includes the
current activity, as represented by the value of the program counter and the contents of the
processor's registers. A process generally also includes the process stack, which contains
temporary data (such as function parameters, return addresses, and local variables), and a data
section, which contains global variables. A process may also include a heap, which is
memory that is dynamically allocated during process run time. The structure of a process in
memory is shown in the following Figure.
Thread: A thread is a small unit of execution within a process that enables concurrent tasks.
Threads share resources with other threads in the same process and can execute tasks
simultaneously. They're lightweight and efficient but require careful management to prevent
issues like race conditions.
Q. Show the process state with a diagram. (17-18) 4
Process State: As a process executes, it changes state. The state of a process is defined in
part by the current activity of that process.
Process Control Block: Each process is represented in the operating system by a process
control block (PCB)—also called a task control block. A PCB is shown in the following
Figure.
execution on the CPU. For a single-processor system, there will never be more than one
running process. If there are more processes, the rest will have to wait until the CPU is free
and can be rescheduled.
Q. Draw the diagram of a context switch and describe how it works. (17-18) 5
Context switching in an OS is the process of storing the state of a running process or thread so that it can be restored and resume execution at a later point, and then loading the saved context of another process or thread and running it.
Steps Of Context Switching in OS
The steps involved in context switching in OS are as follows:
1. Save the state of the current process: This includes saving the contents of the CPU
registers, the memory map, and the stack.
2. Load the state of the new process: This includes loading the contents of the CPU
registers, the memory map, and the stack.
3. Update the process scheduler: The process scheduler needs to be updated to reflect
the new state of the system.
4. Switch to the new process. This involves transferring control to the new process's
instruction pointer.
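The steps above can be sketched conceptually in Python. This is only an illustration, not a real OS mechanism: each generator stands in for a process context, `next()` plays the role of restoring a saved state and running it, and yielding plays the role of saving it again. The process names and step counts are hypothetical.

```python
# Conceptual sketch only: Python generators as stand-ins for process
# contexts. Calling next() "loads" a saved context and resumes execution;
# yielding "saves" it again.

def process(name, steps):
    """A fake process that runs for `steps` units, yielding after each."""
    for i in range(1, steps + 1):
        yield f"{name} ran step {i}"   # state (locals, position) is saved here

def scheduler(procs):
    """Round-robin over processes, context-switching after every step."""
    trace = []
    while procs:
        p = procs.pop(0)
        try:
            trace.append(next(p))      # restore context, run one step, save again
            procs.append(p)            # back of the ready queue
        except StopIteration:
            pass                       # process terminated
    return trace

trace = scheduler([process("P1", 2), process("P2", 2)])
# Execution interleaves: P1, P2, P1, P2
```

Note how the scheduler never sees the "registers" of a process directly; the generator object holds the saved state, much as a PCB holds the saved context of a real process.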
State Diagram of Context Switching:
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is changed, its
PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
Job Queue: The job queue stores all processes in the system. It represents processes that are
waiting to be admitted into the system. New processes enter the job queue when they are
created. The long-term scheduler (also known as the job scheduler) manages this queue.
Ready Queue: The ready queue contains processes residing in main memory, ready and
waiting for execution. Processes in the ready state compete for CPU time. When a process is
ready to run, it enters the ready queue. The short-term scheduler (also called the CPU
scheduler) selects processes from this queue for execution on the CPU.
Device Queues or Waiting Queues: Also known as I/O queues, device queues contain the processes that are waiting for the completion of an I/O request. Each device has its own device queue.
Chapter 5
5.1.1 CPU–I/O Burst Cycle:
Long-Term Scheduler (Job Scheduler): The long-term scheduler brings new processes into
the ‘Ready State’. It controls the degree of multi-programming, which refers to the number of
processes present in the ready state at any given time.
Short-Term Scheduler (CPU Scheduler): The short-term scheduler selects one process
from the ready state for execution on the CPU. It ensures that no process suffers from
starvation due to long burst times. It only selects the process; the actual loading onto the CPU
is handled by the dispatcher.
Medium-Term Scheduler: The medium-term scheduler is less common and operates
between the long-term and short-term schedulers. It is responsible for swapping processes in
and out of main memory.
In summary, these three types of schedulers work together to manage processes efficiently in
an operating system, ensuring optimal resource utilization and responsiveness.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (as the result of an I/O request)
2. When a process switches from the running state to the ready state (when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (at completion of I/O)
4. When a process terminates.
Nonpreemptive or cooperative: Under nonpreemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases the CPU either by
terminating or switching to the waiting state.
Preemptive: Preemptive scheduling is used when a process switches from the running state
to the ready state or from the waiting state to the ready state.
The resources (mainly CPU cycles) are allocated to the process for a limited amount of time
and then taken away, and the process is again placed back in the ready queue if that process
still has CPU burst time remaining. That process stays in the ready queue till it gets its next
chance to execute.
Q. Distinguish between preemptive and Non-preemptive scheduling. (18-19) 2
Preemptive Scheduling:
• The CPU is allocated to a process for a limited amount of time.
• The executing process can be interrupted in the middle of execution.
• It is costly, as it has to maintain the integrity of shared data.
• It leads to more context switches.
• It affects the design of the operating system kernel.
• It is more complex. Example: Round Robin.
Non-Preemptive Scheduling:
• The CPU is allocated to the process until it finishes its execution or switches to the waiting state.
• The executing process is not interrupted in the middle of execution.
• It is not costly.
• It leads to fewer context switches compared to preemptive scheduling.
• It does not affect the design of the OS kernel.
• It is simple, but can be very inefficient. Example: First Come, First Served.
5.1.4 Dispatcher:
Q. Write a short note on dispatcher. (17-18) 2
The dispatcher is the module that gives control of the CPU to the process selected by the CPU
scheduler. The dispatcher performs context switching, switches to user mode, and jumps to
the proper location in the newly loaded program.
The dispatcher should be as fast as possible, since it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency and is illustrated in Figure 5.3
Many criteria have been suggested for comparing CPU-scheduling algorithms. The criteria
include the following:
(i) CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
(ii) Throughput: Throughput is the total amount of work done in a given time.
(iii) Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time.
(iv) Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
(v) Response time: Response time is the amount of time it takes for the CPU to respond to a
request made by a process. It is the duration between the arrival of a process and the first
time it runs.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
5.3 Scheduling Algorithms:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue
is to be allocated the CPU. There are many different CPU-scheduling algorithms.
5.3.1 FCFS Scheduling Algorithm:
The CPU scheduling algorithm First Come, First Served (FCFS), also known as First In, First
Out (FIFO), allocates the CPU to the processes in the order they are queued in the ready
queue.
FCFS uses non-preemptive scheduling: once the CPU has been assigned to a process, it stays assigned to that process until the process either terminates or blocks for I/O.
Example:
Process Arrival Time Burst Time
P1 0 8
P2 1 7
P3 2 10
Gantt Chart:
P1 P2 P3
0 8 15 25
Waiting time = Completion time – Burst Time – Arrival Time
P1 waiting time: 0
P2 waiting time: 8-1=7
P3 waiting time: 15-2=13
Response time = Time at which the process gets the CPU for the first time - Arrival time
P1 response time: 0
P2 response time: 8-1=7
P3 response time: 15-2=13
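The FCFS calculation above can be sketched in a few lines of Python. This is a minimal sketch, not a full scheduler; the function name and output format are my own, and the process data is taken from the example above.

```python
# FCFS scheduling sketch for the example above.
# Waiting time = Completion time - Burst time - Arrival time.

def fcfs(processes):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)          # CPU may idle until arrival
        completion = start + burst
        results[name] = {
            "waiting": completion - burst - arrival,
            "response": start - arrival,    # in FCFS, response == waiting
        }
        time = completion
    return results

r = fcfs([("P1", 0, 8), ("P2", 1, 7), ("P3", 2, 10)])
# P1 waits 0, P2 waits 7, P3 waits 13 -- matching the hand calculation
```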
Non-Preemptive SJF (Shortest Job First):
Step-1: At time = 1, process P3 arrives. But P4 still needs 2 execution units to complete, so it will continue execution.
Step-2: At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.
Step-3: At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.
Step-4: At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.
Step-5: At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.
Step-6: At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.
Step-8: At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.
Step-11: Let’s calculate the average waiting time for above example.
Waiting time = Completion time – Burst Time – Arrival Time
Turnaround Time = Completion Time – Arrival Time
Process Waiting Time Turnaround Time
P1 9-6-2=1 9-2=7
P2 11-2-5=4 11-5=6
P3 23-8-1=14 23-1=22
P4 3-3-0=0 3-0=3
P5 15-4-4=7 15-4=11
Average Waiting Time = (1 + 4 + 14 + 0 + 7) / 5 = 26 / 5 = 5.2
Average Turnaround Time = (7 + 6 + 22 + 3 + 11) / 5 = 49 / 5 = 9.8
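The walkthrough above can be checked with a short non-preemptive SJF sketch. The process data is inferred from the steps (P4 arrives at 0 with burst 3, P3 at 1 with burst 8, P1 at 2 with burst 6, P5 at 4 with burst 4, P2 at 5 with burst 2); the function name is my own.

```python
# Non-preemptive SJF sketch: among the jobs that have arrived, pick the
# shortest burst; once started, a job runs to completion.

def sjf(processes):
    """processes: dict name -> (arrival, burst). Returns completion times."""
    remaining = dict(processes)
    time, completion = 0, {}
    while remaining:
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= time}
        if not ready:                       # CPU idle until next arrival
            time = min(ab[0] for ab in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])  # shortest burst first
        time += remaining.pop(name)[1]      # run to completion (no preemption)
        completion[name] = time
    return completion

c = sjf({"P1": (2, 6), "P2": (5, 2), "P3": (1, 8), "P4": (0, 3), "P5": (4, 4)})
# Completion times: P4=3, P1=9, P2=11, P5=15, P3=23 -- matching the table
```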
Preemptive SJF:
In preemptive SJF scheduling, jobs are put into the ready queue as they arrive. The process with the shortest burst time begins execution. If a process with an even shorter burst time arrives, the currently running process is preempted, and the shorter job is allocated the CPU. Consider the following five processes:
Process Burst time Arrival time
P1 6 2
P2 2 5
P3 8 1
P4 3 0
P5 4 4
Step-1: At time = 1, Process P3 arrives. But, P4 has a shorter burst time. It will continue
execution.
Step-2: At time = 2, process P1 arrives with burst time = 6. The burst time is more than that
of P4. Hence, P4 will continue execution.
Step-3: At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is lower.
Step-4: At time = 4, process P5 will arrive. The burst time of P3, P5, and P1 is compared.
Process P5 is executed because its burst time is lowest. Process P1 is preempted.
Step-5: At time = 5, process P2 will arrive. The burst time of P1, P2, P3, and P5 is compared.
Process P2 is executed because its burst time is least. Process P5 is preempted.
Step-7: At time = 7, P2 finishes its execution. The remaining burst time of P1, P3, and P5 is compared. Process P5 is executed because its remaining burst time is the lowest.
Step-8: At time = 10, P5 finishes its execution. The remaining burst time of P1 and P3 is compared. Process P1 is executed because its remaining burst time is lower.
Step-9: At time =15, P1 finishes its execution. P3 is the only process left. It will start
execution.
Step-11: Let’s calculate the average waiting time for above example.
Waiting time = Completion time – Burst Time – Arrival Time
Turnaround Time = Completion Time – Arrival Time
Process Waiting Time Turnaround Time
P1 15-6-2=7 15-2=13
P2 7-2-5=0 7-5=2
P3 23-8-1=14 23-1=22
P4 3-3-0=0 3-0=3
P5 10-4-4=2 10-4=6
Average Waiting Time = (7 + 0 + 14 + 0 + 2) / 5 = 23 / 5 = 4.6
Average Turnaround Time = (13 + 2 + 22 + 3 + 6) / 5 = 46 / 5 = 9.2
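The preemptive walkthrough can be reproduced by simulating one time unit at a time and always picking the job with the shortest remaining time. A minimal sketch (function and variable names are my own; data is from the table above):

```python
# Preemptive SJF (shortest-remaining-time-first) sketch, simulated
# one time unit at a time.

def srtf(processes):
    """processes: dict name -> (arrival, burst). Returns completion times."""
    remaining = {n: b for n, (a, b) in processes.items()}
    arrival = {n: a for n, (a, b) in processes.items()}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[name] -= 1                           # run for one time unit
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time
    return completion

data = {"P1": (2, 6), "P2": (5, 2), "P3": (1, 8), "P4": (0, 3), "P5": (4, 4)}
c = srtf(data)
waiting = {n: c[n] - b - a for n, (a, b) in data.items()}
# Completion: P4=3, P2=7, P5=10, P1=15, P3=23; average waiting time 4.6
```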
Gantt chart (Round Robin, time quantum = 5):
P1 P2 P3 P4 P5 P6 P1 P3 P4 P5 P6 P3 P4 P5
0 5 9 14 19 24 29 31 36 41 46 50 55 56 66
Waiting time = Completion time – Burst Time – Arrival Time
Turnaround Time = Completion Time – Arrival Time
Process  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time
P1 0 7 31 31 24
P2 1 4 9 8 4
P3 2 15 55 53 38
P4 3 11 56 53 42
P5 4 20 66 62 42
P6 4 9 50 46 37
Average Completion Time = (31 + 9 + 55 + 56 + 66 + 50) / 6 = 267 / 6 = 44.5
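The Gantt chart above can be reproduced with a short Round Robin simulation. The time quantum is inferred from the chart (the first slice runs from 0 to 5, so quantum = 5); function names are my own.

```python
# Round Robin sketch: each process runs for at most one quantum, then goes
# to the back of the ready queue; arrivals during a slice queue up first.
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    pending = deque(processes)
    queue, completion, time = deque(), {}, 0
    remaining = {n: b for n, a, b in processes}

    def admit(now):
        while pending and pending[0][1] <= now:
            queue.append(pending.popleft()[0])

    admit(0)
    while queue or pending:
        if not queue:                       # CPU idle until next arrival
            time = pending[0][1]
            admit(time)
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)         # new arrivals enter before the preempted process
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append(name)
    return completion

c = round_robin([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 15),
                 ("P4", 3, 11), ("P5", 4, 20), ("P6", 4, 9)], quantum=5)
# Matches the Gantt chart: P1=31, P2=9, P3=55, P4=56, P5=66, P6=50
```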
We can prepare the Gantt chart according to non-preemptive priority scheduling.
Process P1 arrives at time 0 with a burst time of 3 units and priority number 2. Since no other process has arrived yet, the OS schedules it immediately.
During the execution of P1, two more processes, P2 and P3, arrive. Since the priority number of P3 is 3, the CPU will execute P3 before P2.
During the execution of P3, all the processes become available in the ready queue. The process with the lowest priority number is given preference. Since P6 has priority number 4, it is executed just after P3.
After P6, P4 has the lowest priority number among the available processes; it gets executed for its whole burst time.
Since all the jobs are now available in the ready queue, they are executed according to their priorities. If two jobs have the same priority number, the one with the earlier arrival time is executed first.
P1 P3 P6 P4 P2 P5 P7
0 3 7 11 13 18 27 37
From the GANTT Chart prepared, we can determine the completion time of every process.
The turnaround time, waiting time and response time will be determined.
P1
0 1
At time 1, P2 arrives. P1 has completed its execution and no other process is available at this time, so the operating system schedules P2 regardless of the priority assigned to it.
P1 P2
0 1 2
The next process, P3, arrives at time unit 2. The priority of P3 is higher than that of P2, so the execution of P2 is stopped and P3 is scheduled on the CPU.
P1 P2 P3
0 1 2 5
During the execution of P3, three more processes, P4, P5, and P6, become available. Since all three have lower priority than the process in execution, P3 completes its execution, and then P5 is scheduled, having the highest priority among the available processes.
P1 P2 P3 P5
0 1 2 5 10
During the execution of P5, all the processes become available in the ready queue. At this point the algorithm starts behaving like non-preemptive priority scheduling: the OS simply takes the process with the highest priority and executes it to completion. In this case, P4 is scheduled and executed until completion.
P1 P2 P3 P5 P4
0 1 2 5 10 16
Once P4 has completed, the process with the highest priority available in the ready queue is P2, so P2 is scheduled next.
P1 P2 P3 P5 P4 P2
0 1 2 5 10 16 22
P2 is given the CPU until completion. Its remaining burst time is 6 units, so it finishes at time 22, and P7 is scheduled next.
P1 P2 P3 P5 P4 P2 P7
0 1 2 5 10 16 22 30
The only remaining process is P6, with the lowest priority; the operating system has no choice but to execute it. It is executed last.
P1 P2 P3 P5 P4 P2 P7 P6
0 1 2 5 10 16 22 30 45
The completion time of each process is determined with the help of the Gantt chart. The turnaround time, waiting time, and response time can be calculated using the following formulas.
Waiting Time = Completion Time – Burst Time – Arrival Time
Response time = Time at which the process gets the CPU for the first time - Arrival time
Turnaround Time = Completion Time – Arrival Time
Process  Priority  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time  Response Time
P1 2 0 1 1 1-0=1 1-1-0=0 0-0=0
P2 6 1 7 22 22-1=21 22-7-1=14 1-1=0
P3 3 2 3 5 5-2=3 5-3-2=0 2-2=0
P4 5 3 6 16 16-3=13 16-6-3=7 10-3=7
P5 4 4 5 10 10-4=6 10-5-4=1 5-4=1
P6 10 5 15 45 45-5=40 45-15-5=25 30-5=25
P7 9 6 8 30 30-6=24 30-8-6=16 22-6=16
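The table above can be reproduced with a unit-time simulation of preemptive priority scheduling (lower priority number = higher priority, as in this example). A minimal sketch; the function name is my own, and the data is taken from the table.

```python
# Preemptive priority scheduling sketch, simulated one time unit at a time.
# At each tick the arrived process with the lowest priority number runs.

def preemptive_priority(processes):
    """processes: dict name -> (priority, arrival, burst)."""
    remaining = {n: b for n, (p, a, b) in processes.items()}
    time, completion, first_run = 0, {}, {}
    while remaining:
        ready = [n for n in remaining if processes[n][1] <= time]
        if not ready:
            time += 1
            continue
        name = min(ready, key=lambda n: processes[n][0])  # lowest number wins
        first_run.setdefault(name, time)   # needed for response time
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time
    return completion, first_run

data = {"P1": (2, 0, 1), "P2": (6, 1, 7), "P3": (3, 2, 3), "P4": (5, 3, 6),
        "P5": (4, 4, 5), "P6": (10, 5, 15), "P7": (9, 6, 8)}
completion, first_run = preemptive_priority(data)
# Completion times match the Gantt chart:
# P1=1, P2=22, P3=5, P4=16, P5=10, P6=45, P7=30
```

Response time falls out of `first_run`: for example, P4 first gets the CPU at time 10 and arrived at 3, giving the response time 7 shown in the table.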
Chapter 6
On the basis of synchronization, processes are categorized as one of the following two types:
• Independent Process: The execution of one process does not affect the execution of
other processes.
• Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises in the case of cooperative processes, because resources are shared among them.
When more than one process tries to access the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.
Q. What is Critical Section problem? What criteria have to satisfy to solve the Critical
Section problem? (18-19) 5
The Critical-Section Problem: The critical section problem refers to the problem of how to
ensure that at most one process is executing its critical section at a given time.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: When one process is executing in its critical section, no other
process is allowed to execute in its critical section.
2. Progress: When no process is executing in its critical section, and there exists a
process that wishes to enter its critical section, it should not have to wait indefinitely to enter
it.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted.
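A lock is the simplest practical way to get mutual exclusion. The sketch below uses Python's `threading.Lock`: the read-modify-write on the shared counter is the critical section, and the lock guarantees only one thread executes it at a time. Variable names and the iteration count are illustrative.

```python
# Mutual exclusion with a lock: without it, the counter increments from
# different threads could interleave and updates could be lost.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:             # entry section: acquire the lock
            counter += 1       # critical section: shared variable access
        # exit section: lock released automatically by the `with` block

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 400000 every run; unsynchronized, it could be anything less
```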
Chapter 7
Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
A directed edge from process Pi to resource type Rj, denoted Pi → Rj, is called a request edge; a directed edge from resource type Rj to process Pi, denoted Rj → Pi, is called an assignment edge.
instance of R3.
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is
held by process P3. Process P3 is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Resource-allocation graph with a cycle but no deadlock:
P1 → R1 → P3 → R2 → P1
There is no deadlock, because process P4 may release its instance of resource type R2; that resource can then be allocated to P3, breaking the cycle. Alternatively, process P2 may release its instance of resource type R1, which can then be allocated to P1 to break the cycle.
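Finding a cycle in the resource-allocation graph is just directed-graph cycle detection (a cycle is always necessary for deadlock, and with single-instance resources it is also sufficient). A DFS sketch, with the deadlocked example above encoded as an edge list; the function name and colour scheme are my own.

```python
# Cycle detection in a directed graph via DFS. A "back edge" to a node
# still on the DFS stack (GRAY) means there is a cycle.

def has_cycle(graph):
    """graph: dict node -> list of successor nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge -> cycle
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Deadlocked graph from the text: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"],
              "R3": ["P3"], "P3": ["R2"], "R2": ["P1", "P2"]}
# has_cycle(deadlocked) -> True
```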
Methods for Handling Deadlocks: We can deal with the deadlock problem in one of three
ways—
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlocked state.
• We can allow the system to enter a deadlocked state, detect it, and recover.
• We can ignore the problem altogether and pretend that deadlocks never occur in the
system.
The third solution is the one used by most operating systems, including Linux and Windows.
It is then up to the application developer to write programs that handle deadlocks.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a
deadlock-avoidance scheme.
Q. What are the ways for deadlock prevention. (18-19) 4
Deadlock prevention provides a set of methods to ensure that at least one of the necessary
conditions (Mutual exclusion, Hold and wait, No preemption and Circular wait) cannot hold.
These methods prevent deadlocks by constraining how requests for resources can be made.
Q. How deadlock can be avoided in a system? (18-19) 3
Deadlock avoidance requires that the operating system be given additional information in
advance concerning which resources a process will request and use during its lifetime. With
this additional knowledge, the operating system can decide for each request whether or not
the process should wait. To decide whether the current request can be satisfied or must be
delayed, the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process.
For example, in a system with one tape drive and one printer, the system might need to know
that process P will request first the tape drive and then the printer before releasing both
resources, whereas process Q will request first the printer and then the tape drive. With this
knowledge of the complete sequence of requests and releases for each process, the system can
decide for each request whether or not the process should wait in order to avoid a possible
future deadlock.
Q. Write down the data structure for the banker’s algorithm. (18-19) 3
The following data structures are used to implement the Banker's Algorithm, where 𝑛 is the number of processes in the system and 𝑚 is the number of resource types:
• Available: A vector of length 𝑚 indicates the number of available resources of each type. If Available[j] equals k, then k instances of resource type Rj are available.
• Max: An n × m matrix defines the maximum demand of each process. If Max[i][j] equals k, then process Pi may request at most k instances of resource type Rj.
• Allocation: An n × m matrix defines the number of resources of each type currently allocated to each process.
• Need: An n × m matrix indicates the remaining resource need of each process: Need[i][j] = Max[i][j] − Allocation[i][j].
The Banker's Algorithm is the combination of the safety algorithm and the resource-request algorithm, used to control the processes and avoid deadlock in a system.
Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe state.
This algorithm can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both
a. Finish[i] == false
b. 𝑵𝒆𝒆𝒅𝑖 ≤ Work
If no such i exists, go to step 4.
3. Work = Work + 𝑨𝒍𝒍𝒐𝒄𝒂𝒕𝒊𝒐𝒏𝑖 Finish[i] = true Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require on the order of m × n² operations to determine whether a state is safe.
Q. Write and explain the Resource-Request Algorithm. (17-18) 4
Resource-Request Algorithm
Next, the algorithm for determining whether requests can be safely granted.
Let 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 be the request vector for process 𝑃𝑖 . If 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 [j] == k, then process 𝑃𝑖
wants k instances of resource type 𝑅𝑗 . When a request for resources is made by process 𝑃𝑖 ,
the following actions are taken:
1. If 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 ≤ 𝑵𝒆𝒆𝒅𝑖 , go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2. If 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 ≤ Available, go to step 3. Otherwise, 𝑃𝑖 must wait, since the resources are
not available.
3. Have the system pretend to have allocated the requested resources to process 𝑃𝑖 by
modifying the state as follows:
Available = Available – 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 ;
𝑨𝒍𝒍𝒐𝒄𝒂𝒕𝒊𝒐𝒏𝑖 = 𝑨𝒍𝒍𝒐𝒄𝒂𝒕𝒊𝒐𝒏𝑖 + 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 ;
𝑵𝒆𝒆𝒅𝑖 = 𝑵𝒆𝒆𝒅𝑖–𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 ;
If the resulting resource-allocation state is safe, the transaction is completed, and
process 𝑃𝑖 is allocated its resources. However, if the new state is unsafe, then 𝑃𝑖 must
wait for 𝑹𝒆𝒒𝒖𝒆𝒔𝒕𝑖 , and the old resource-allocation state is restored.
Example: Consider a system that contains five processes P0, P1, P2, P3, P4 and three resource types A, B, and C. Resource type A has 10 instances, B has 5 instances, and C has 7 instances.
Process  Allocation  Max    Available
         A B C       A B C  A B C
P0       0 1 0       7 5 3  3 3 2
P1       2 0 0       3 2 2
P2       3 0 2       9 0 2
P3       2 1 1       2 2 2
P4       0 0 2       4 3 3
Answer-1: The content of the matrix Need is defined to be Max − Allocation and is as
follows:
Process Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
Answer-3: To grant Request1 = (1, 0, 2), we first check that Request1 ≤ Need1, that is, (1, 0, 2) ≤ (1, 2, 2), and that Request1 ≤ Available, that is, (1, 0, 2) ≤ (3, 3, 2). Both conditions hold, and the state that results from pretending to allocate the resources is safe, so process P1 can be granted the request immediately.
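Both the safety algorithm and the resource-request algorithm can be sketched directly from the steps above, using the example data. Function names are my own; the logic follows the two algorithms as stated.

```python
# Banker's Algorithm sketch: safety check plus resource-request check,
# using the 5-process, 3-resource-type example above.

def is_safe(available, allocation, need):
    """Safety algorithm: repeatedly find a process whose Need fits in Work."""
    work = list(available)
    finish = [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi releases
                finish[i] = True
                break
        else:
            return all(finish)   # no candidate left: safe iff everyone finished

def request(i, req, available, allocation, need):
    """Resource-request algorithm: grant req for Pi only if the result is safe."""
    if any(r > n for r, n in zip(req, need[i])):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False                       # Pi must wait: resources unavailable
    # Pretend to allocate, then run the safety algorithm on the new state.
    new_avail = [a - r for a, r in zip(available, req)]
    new_alloc = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_alloc[i] = [a + r for a, r in zip(new_alloc[i], req)]
    new_need[i] = [n - r for n, r in zip(new_need[i], req)]
    return is_safe(new_avail, new_alloc, new_need)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available = [3, 3, 2]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
# is_safe(...) -> True; request(1, [1, 0, 2], ...) -> True, as in Answer-3
```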
Chapter 8
In the above diagram, process P1 is swapped out, so a process with a larger memory requirement or higher priority can be executed, increasing the overall efficiency of the operating system. Meanwhile, process P2 is swapped in from secondary memory to main memory (RAM) for its execution.
Let's understand this with the help of an example. The size of the user's process is 4096 KB,
and the transfer rate is 1 MBps (1024 KBps). Now we'll find out how long it takes to move
the process from main memory to secondary memory.
Here,
User process size = 4096 KB and Data transfer rate = 1024 KBps
Time = User process size / Data transfer rate = 4096 / 1024 = 4 seconds = 4000 milliseconds
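The calculation above can be expressed as a small sketch (the function name is my own; 1 MB is taken as 1024 KB):

```python
# Sketch: swap-transfer time = process size / transfer rate
def swap_time_seconds(process_size_kb, transfer_rate_kbps):
    return process_size_kb / transfer_rate_kbps

t = swap_time_seconds(4096, 1024)  # 4096 KB at 1024 KB/s
print(t, "seconds =", t * 1000, "milliseconds")  # 4.0 seconds = 4000.0 milliseconds
```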
Q. Assume that the size of the user process = 4096 KB and data transfer rate of a hard
disk = 1 MB/s. How much time will take to complete the swapping process? (17-18) 7
Q. What is fragmentation? (17-18) 1
The problem due to which free memory space cannot be utilized is commonly known as
fragmentation in the operating system.
Fragmentation is further divided into two types:
(i) Internal Fragmentation
(ii) External Fragmentation
External Fragmentation: When the total memory space in the system can easily satisfy the
requirement of a process, but this available memory space is non-contiguous and so cannot
be utilized, the problem is referred to as external fragmentation.
Q. Describe the general dynamic storage allocation strategies (first-fit, best-fit and
worst fit). (18-19) 3
Q. Show the compaction process with a diagram. (17-18) 3
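Although the worked answer is not reproduced here, the three strategies named in the first question can be sketched as follows (the `choose_hole` helper and the hole sizes are illustrative, not from the text):

```python
# Sketch of first-fit, best-fit and worst-fit over a list of free-hole sizes.
def choose_hole(holes, request, strategy):
    fits = [i for i, h in enumerate(holes) if h >= request]
    if not fits:
        return None                                # no hole is large enough
    if strategy == "first-fit":
        return fits[0]                             # first hole big enough
    if strategy == "best-fit":
        return min(fits, key=lambda i: holes[i])   # smallest hole big enough
    if strategy == "worst-fit":
        return max(fits, key=lambda i: holes[i])   # largest hole

holes = [100, 500, 200, 300, 600]
print(choose_hole(holes, 212, "first-fit"))  # 1 (the 500-byte hole)
print(choose_hole(holes, 212, "best-fit"))   # 3 (the 300-byte hole)
print(choose_hole(holes, 212, "worst-fit"))  # 4 (the 600-byte hole)
```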
Let’s take an example of how non-contiguous memory allocation works. Suppose we have three
processes: P1 with size 8 KB, P2 with size 16 KB, and P3 with size 8 KB. Here the main memory
size is 32 KB. Every process is divided into pages of the same size, 2 KB; so the size of every
page is 2 KB, and likewise each memory frame size is 2 KB.
Assume that initially the entire memory is free and the three processes P1, P2 and P3 request
memory. Since the memory is entirely free, there is no need to allocate non-contiguously, so
P1, P2 and P3 are allocated in a contiguous manner.
Now assume that processes P1 and P3 have finished, so they are swapped out of the memory,
creating a chance for other processes. Suppose a process P4 arrives with size 14 KB.
P1 individually frees 8 KB (4 frames) and P3 individually frees 8 KB (4 frames), but as the
memory picture above shows, the 8 KB + 8 KB = 16 KB (8 frames) is not free contiguously or
consecutively. So, because there is not enough space to allocate P4 in one contiguous place,
it is allocated in two places of memory in a non-contiguous manner.
There are three fundamental approaches to implementing non-contiguous memory allocation:
(i) paging, (ii) segmentation, and (iii) segmentation with paging.
3. For every frame in the main memory, some information is kept about whether it is free or
allocated to a page.
Q. Define the term Logical Address Space. (18-19) 1.5
Logical Address Space (LAS): The set of all logical addresses generated by a CPU for a
process is a logical address space. The logical addresses in a process constitute the logical
address space of the process.
Logical address space = Size of the process
Number of pages = Process size (LAS) / Page size
Physical Address Space (PAS): The set of all physical addresses corresponding to these logical
addresses is a physical address space. The set of physical addresses in the system constitutes the
physical address space of the system.
Physical address space = Size of the memory
Number of frames = Memory size (PAS) / Frame size
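As a quick numeric sketch of the two formulas (the sizes here are assumed for illustration):

```python
# Sketch: applying the formulas above with assumed sizes
process_size_kb = 16   # logical address space (size of the process)
page_size_kb = 2
memory_size_kb = 32    # physical address space (size of the memory)
frame_size_kb = 2

number_of_pages = process_size_kb // page_size_kb    # pages = LAS / page size
number_of_frames = memory_size_kb // frame_size_kb   # frames = PAS / frame size
print(number_of_pages, number_of_frames)  # 8 16
```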
Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d). The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory. This base address is combined with the page
offset to define the physical memory address that is sent to the memory unit. The paging model
of memory is shown in Figure 8.11.
Q. What are the differences between logical and physical address spaces? (17-18) 2
Address Translation: When the CPU tries to fetch a word or data from main memory, it generates
a logical address (relocatable address) for that word or data, but this address is not the actual
location in memory. The Memory Management Unit (MMU) converts the logical address
(relocatable address) to the corresponding physical address (absolute address) to map the actual
location of the word or data in main memory.
The page size (like the frame size) is defined by the hardware. The size of a page is a power of
2, varying between 512 bytes and 1 GB per page, depending on the computer architecture.
The selection of a power of 2 as a page size makes the translation of a logical address into a
page number and page offset particularly easy. If the size of the logical address space is 2𝑚,
and a page size is 2𝑛 bytes, then the high-order 𝑚 − 𝑛 bits of a logical address designate the
page number, and the 𝑛 low-order bits designate the page offset. Thus, the logical address is
as follows:
where 𝑝 is an index into the page table and 𝑑 is the displacement within the page. As an example,
consider the memory in Figure 8.12.
Here, in the logical address, 𝑛 = 2 and 𝑚 = 4. Using a page size of 4 bytes and a physical
memory of 32 bytes (8 pages), we show how the programmer’s view of memory can be
mapped into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page
table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 [=
(5 × 4) + 0]. Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame
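The translation just described can be sketched in Python. Page 0 → frame 5 is stated in the text; the remaining page-table entries are assumptions based on Figure 8.12:

```python
# Sketch of the translation in Figure 8.11/8.12: page size 4 bytes (n = 2),
# logical address space 16 bytes (m = 4).
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page 0 -> frame 5 from the text; rest assumed

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)       # page number and page offset
    return page_table[p] * PAGE_SIZE + d    # frame base + offset

print(translate(0))  # 20  [= (5 x 4) + 0]
print(translate(3))  # 23  [= (5 x 4) + 3]
```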
• Simple Segmentation
With the help of this type, each process is divided into n segments; all of them are
loaded into memory together at run time, but they can be non-contiguous (that
is, they may be scattered in the memory).
Segmentation Hardware: Each entry in the segment table has a segment base and a segment limit.
The segment base contains the starting physical address where the segment resides in memory, and
the segment limit specifies the length of the segment.
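A minimal sketch of this base/limit check (the segment-table values below are assumptions, not from the text):

```python
# Sketch of the segmentation hardware: each entry holds (base, limit).
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # assumed values

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # offsets must be less than the segment limit
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset                    # physical address = base + offset

print(translate(2, 53))  # 4353 (= 4300 + 53)
```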
An Example:
Advantages: The segment table occupies less space compared to the page table.
Disadvantages: Maintaining a segment table for each process leads to overhead.
Functions of TLB:
(i) The TLB improves the performance of the virtual memory by reducing the average
time required to translate a logical address to a physical address.
(ii) The TLB also reduces the number of memory or disk accesses, which can save
energy and bandwidth.
(iii) The TLB also enables faster context switches, which are the operations that switch
the execution of one process to another.
(iv) The TLB can store the mappings of different processes, so that when a context switch
occurs, the CPU does not have to reload the entire page table.
Q. Write down the formula for effective access time (EAT). (17-18) 4
Effective Access Time (EAT) refers to the total time it takes to complete a memory access
operation, taking into account the possibility of both TLB hits and TLB misses.
EAT = h × (c + m) + (1 − h) × (c + 2m)
where, h = hit ratio of TLB
m = memory access time
c = TLB access time
Q. If TLB search takes 𝟐𝟎𝒏𝒔; memory access takes 𝟏𝟎𝟎𝒏𝒔 and hit ratio is 𝟗𝟎%.
Calculate the effective access time. (17-18) 4
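The question above can be worked with the formula, assuming a TLB miss costs the TLB search plus two memory accesses (one for the page table, one for the data):

```python
# Sketch: EAT = h*(c + m) + (1 - h)*(c + 2m)
def effective_access_time(h, m, c):
    return h * (c + m) + (1 - h) * (c + 2 * m)

# TLB search 20 ns, memory access 100 ns, hit ratio 90%
print(round(effective_access_time(0.90, 100, 20), 1))  # 130.0 ns
```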
Q. In the demand paging memory, a page table is held in registers. If it takes 𝟏𝟎𝟎𝟎𝒎𝒔 to
service a page fault and if the memory access time is 𝟏𝟎𝒎𝒔, what is the effective access
time for a page fault rate of 𝟎. 𝟎𝟏?
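A sketch of that calculation, assuming EAT = (1 − p) × memory access time + p × page-fault service time (the page table is in registers, so no extra lookup cost is added):

```python
# Sketch: effective access time with page faults; p is the page-fault rate
def eat_ms(p, mem_ms, fault_ms):
    return (1 - p) * mem_ms + p * fault_ms

print(round(eat_ms(0.01, 10, 1000), 2))  # 19.9 ms
```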
Chapter 9
(iii) Using Optimal: (Replace the page that will not be used in the near future) ⟶
Reference string:  1  2  3  4  2  1  5  2  1  2  3  3  2  1  2  3
Frame 1:           1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
Frame 2:              2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
Frame 3:                 3  4  4  4  5  5  5  5  3  3  3  3  3  3
Hit/Miss:          M  M  M  M  H  H  M  H  H  H  M  H  H  H  H  H
Number of page faults (M) = 6    Number of hits (H) = 10
Q. Suppose that the Memory Size = 900 bytes, Page Size = 300 bytes and Reference
String is 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0. Find out the page fault rate for Optimal Page
Replacement and Least Recently Used (LRU) algorithms. (17-18) 8
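One way to check answers to questions like this is to simulate both algorithms. Below is a sketch, assuming 900 bytes / 300 bytes = 3 frames (function names are my own):

```python
# Sketch: page-fault simulation for the question above (3 frames assumed).
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0]
FRAMES = 900 // 300  # 3

def faults_lru(refs, frames):
    mem, faults = [], 0          # mem ordered least -> most recently used
    for r in refs:
        if r in mem:
            mem.remove(r)        # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)       # evict the least recently used page
        mem.append(r)
    return faults

def faults_optimal(refs, frames):
    mem, faults = [], 0
    for i, r in enumerate(refs):
        if r in mem:
            continue
        faults += 1
        if len(mem) < frames:
            mem.append(r)
        else:
            # evict the page whose next use is farthest in the future
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            mem[mem.index(max(mem, key=next_use))] = r
    return faults

print(faults_optimal(refs, FRAMES), faults_lru(refs, FRAMES))  # 8 11
```

Under these assumptions the simulation gives page-fault rates of 8/16 for Optimal and 11/16 for LRU.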
File Attributes: A file’s attributes vary from one operating system to another but typically consist
of these:
• Name: The symbolic file name is the only information kept in human-readable form.
• Identifier: This unique tag, usually a number, identifies the file within the file
system; it is the non-human-readable name for the file.
• Type: This information is needed for systems that support different types of files.
• Location: This information is a pointer to a device and to the location of the file on
that device.
• Size: The current size of the file (in bytes, words, or blocks) and possibly the
maximum allowed size are included in this attribute.
• Protection: Access-control information determines who can do reading, writing,
executing, and so on.
• Time, date, and user identification: This information may be kept for creation, last
modification, and last use. These data can be useful for protection, security,
and usage monitoring.
• Repositioning within a file: The directory is searched for the appropriate entry, and
the current-file-position pointer is repositioned to a new given value.
• Deleting a file: To delete a file, first search the directory for the named file, then
free up the file space and delete the directory entry.
• Truncating a file: To truncate a file, delete only the contents of the file, but don’t
delete its file structure.
There are also other file operations like appending a file, creating a duplicate of the file, and
renaming a file.
Q. Briefly describe the sequential and random file access mechanisms. (17-18) 4
Sequential access is a file access method in which data is accessed in a linear or sequential order.
This means that data can only be accessed in the order in which it is stored in the file.
Random access is a file access method in which data can be accessed from any location within
the file. It provides the ability to directly access any record or data element in the file.
Direct access is a file access method that allows data to be accessed directly by using the data's
physical location within the file. It does not use an index or address like random access, and
instead relies on the physical location of the data within the file.
Indexed access method involves accessing files through an index or directory that contains a list
of file names and their corresponding locations on the disk.
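The sequential and random mechanisms can be contrasted with a short sketch using fixed-size records (the file name and the 8-byte record size are illustrative):

```python
# Sketch: sequential vs. random (direct) access on a fixed-record file
import os, tempfile

RECORD = 8  # assumed fixed record size in bytes
path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(bytes([i]) * RECORD)    # record i is filled with byte value i

with open(path, "rb") as f:
    first = f.read(RECORD)              # sequential: records come in stored order
    f.seek(3 * RECORD)                  # random access: jump straight to record 3
    rec3 = f.read(RECORD)
print(first[0], rec3[0])  # 0 3
```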
Q. Write the advantages and disadvantages of different file access methods. (18-19) 5
Sequential access —
Advantages: Simple and easy to implement; suitable for storing large amounts of data;
requires less memory.
Disadvantages: Not efficient for accessing specific data or making changes to the data; slow
for reading or writing data in the middle of the file.

Random access —
Advantages: Provides fast and efficient access to specific data within the file; efficient for
editing and updating data; suitable for devices that require fast access to specific data.
Disadvantages: Requires more memory to store index or address information; file size can be
larger than with sequential access; data can become inaccessible if index or address
information becomes corrupted.

Direct access —
Advantages: Provides fast and efficient access to specific data within the file; suitable for
devices that require fast access to specific data; file size is smaller than with random access.
Disadvantages: Requires knowledge of the physical layout of the data within the file; may
require special hardware or software to access the data directly; gaps can be left in the file,
which can impact performance.

Indexed access method —
Advantages: Provides fast and efficient access to files by name or attributes, making it
suitable for applications that require searching and retrieving specific files quickly.
Disadvantages: The index must be maintained, which can require additional disk space and
processing time.