Operating System: CPU Scheduling
CPU Scheduling
CPU Scheduler
Whenever the CPU becomes idle, the O/S must
select one of the processes in the ready queue to be
executed.
The selection process is carried out by the CPU
scheduler.
The scheduler selects a process from the processes
in memory that are ready to execute and allocates
the CPU to that process.
The ready queue is not necessarily a first-in first-out
queue.
CPU Scheduling
Dispatcher
The dispatcher is the module that gives control of
the CPU to the process selected by the short-term
scheduler.
The dispatcher should be as fast as possible,
since it is invoked during every process switch.
The time it takes for the dispatcher to stop one
process and start another running is known as the
dispatch latency.
It involves the following functions:
Switching context
Switching to user mode
Jumping to the proper location in the user program
CPU Scheduling
Scheduling Criteria
Different CPU scheduling algorithms have different
properties.
The choice of a particular algorithm may favour one class
of processes.
To evaluate a scheduling algorithm, there are many
possible criteria including
CPU Utilization: keep CPU utilization as high as possible.
Throughput: number of processes completed per unit time.
Turnaround Time: mean time from submission to completion of
process.
Waiting Time: amount of time spent ready to run but not
running.
Response Time: time between submission of a request and the first
response to it.
CPU Scheduling
Difference between Batch and Interactive system
scheduling
Batch systems typically want good throughput or
turnaround time.
In interactive systems, both of these are still
important, but response time is usually the primary
consideration.
Difference between long and short term scheduling
The long-term scheduler is given a set of processes and
decides which ones should start to run.
Once they start running, they may suspend because of
I/O or because of pre-emption.
The short-term scheduler decides which of the runnable jobs
(those the long-term scheduler has admitted) actually
runs next.
CPU Scheduling
Scheduling Goals
Fairness: A scheduler makes sure that each
process gets its fair share of the CPU and no
process can suffer indefinite postponement.
Policy Enforcement: The scheduler has to make
sure that the system's policy is enforced.
Efficiency: Scheduler should keep the CPU busy
100% of the time when possible. If the CPU and
all the I/O devices can be kept running all the
time, more work gets done per second than if
some components are idle.
Response Time: A scheduler should minimize the
response time for interactive users.
CPU Scheduling
Turnaround: A scheduler should minimize the
time batch users must wait for output.
Throughput: A scheduler should maximize the
number of jobs processed per unit time.
Characterization of Scheduling Policies
The selection function determines which
process in the ready queue is selected next for
execution.
The decision mode specifies the instants in time
at which the selection function is exercised.
CPU Scheduling
Nonpreemptive
Once a process is in the running state, it will
continue until it terminates or blocks itself for
I/O.
Preemptive
Currently running process may be interrupted
and moved to the Ready state by the OS.
Allows for better service since any one process
cannot monopolize the processor for very long.
CPU Scheduling
Types of scheduling
The aim of scheduling is to assign processes
to be executed by the processor over time,
in a way that meets system objectives.
This scheduling activity is broken down
into three separate functions.
Long-Term Scheduling
Medium-Term Scheduling
Short-Term Scheduling
CPU Scheduling
Long-Term Scheduling
Determines which programs are admitted to the
system for processing.
Controls the degree of multiprogramming.
Once admitted, a job or user program becomes a
process and is added to the queue for the short-term
scheduler.
If more processes are admitted:
It is less likely that all processes will be blocked.
Each process gets a smaller fraction of the CPU.
The long term scheduler may attempt to keep a
mix of processor-bound and I/O-bound processes.
CPU Scheduling
Medium-Term Scheduling
Medium-term scheduling is part of the swapping
function.
The swapping-in decision is based on the need to
manage the degree of multiprogramming.
CPU Scheduling
Short-Term Scheduling
Determines which process is going to execute
next.
The short term scheduler is also known as the
dispatcher.
The short-term scheduler is invoked on an event
that may lead to choosing another process for
execution:
Clock interrupts
I/O interrupts
Operating system calls and traps
Signals
CPU Scheduling
Scheduling Algorithms
Scheduling algorithms are based on a set of
criteria.
The commonly used criteria can be characterized
along two dimensions.
User-oriented
Response Time: Elapsed time from the submission of a
request to the beginning of the response.
Turnaround Time: Elapsed time from the submission of a
process to its completion.
System-oriented
Processor utilization
Fairness
Throughput: Number of processes completed per unit time
CPU Scheduling
Scheduling Metrics
arrival time ta = time the process became “Ready” (again)
wait time Tw = time spent waiting for the CPU
service time Ts = time spent executing in the CPU
turnaround time Tr = total time spent waiting and executing = Tw + Ts
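A tiny worked illustration of these definitions, with assumed numbers (not taken from the slides):

```python
# Hypothetical process: becomes ready at ta = 0, waits Tw = 5 time units
# for the CPU, then executes for Ts = 3 time units.
ta, Tw, Ts = 0, 5, 3
Tr = Tw + Ts      # turnaround time = time waiting + time executing
print(Tr)         # 8
```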
CPU Scheduling
First Come First Served (FCFS)
The simplest scheduling policy is FCFS, also
known as first-in-first-out (FIFO).
Sometimes referred to as a strict queuing scheme.
As each process becomes ready, it joins the ready
queue.
A process runs until it terminates or blocks itself; the
process at the head of the ready queue is then dispatched.
Drawbacks
A process that does not perform any I/O will monopolize the
processor.
It favors CPU-bound processes over I/O-bound processes.
CPU Scheduling
I/O-bound processes have to wait until the CPU-
bound process completes.
They may have to wait even when their I/O
has completed (poor device utilization).
FCFS is not an attractive alternative on its
own for a single-processor system.
It is often combined with a priority scheme to
provide an effective scheduler.
The scheduler maintains a number of queues,
one for each priority level, and dispatches within
each queue on an FCFS basis.
CPU Scheduling
FCFS seems a fair scheme but can be very inefficient.
Problem: a CPU-bound process runs for 1 second, then
reads 1 disk block.
Several I/O-bound processes each use little CPU but must
read 1000 disk blocks; under FCFS each of them gets only
one disk read per second behind the CPU-bound process,
so they take roughly 1000 seconds to finish.
CPU Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time (0 + 24 + 27)/3 = 17
CPU Scheduling
Suppose that the processes arrive in the order
P2 , P3 , P1
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time (6 + 0 + 3)/3 = 3
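The two orderings above can be checked with a minimal FCFS sketch (a hedged illustration: all processes are assumed to arrive at time 0, and the function name is made up):

```python
# FCFS: each process waits for the total burst time of everything ahead of it.
def fcfs_waiting_times(bursts):
    """bursts: CPU burst times listed in arrival order (all arrive at t = 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)     # time spent ready but not running
        elapsed += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17
print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6], average 3
```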
CPU Scheduling
Round Robin (RR)
The RR algorithm is designed especially for time
sharing systems.
It is similar to FCFS scheduling but pre-emption is
added to switch between processes.
A small unit of time, called time quantum or time
slice is defined.
A time quantum is generally from 10 to 100
milliseconds.
To implement RR scheduling, the system keeps the
ready queue as a FIFO queue of processes.
New processes are added to the tail of the ready
queue.
CPU Scheduling
The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after one time
quantum, and dispatches the process.
After this time has elapsed, the process is
preempted and added to the end of the ready
queue.
Design issue: the length of the time quantum.
If the quantum is very short, then short processes
will move through the system relatively quickly,
but q must be large with respect to the context-switch
time, otherwise the overhead is too high.
The average waiting time under RR policy is often
long.
CPU Scheduling
RR still favors CPU-bound processes:
An I/O-bound process uses the CPU for less than the
time quantum and is then blocked waiting for I/O.
A CPU-bound process runs for its entire time slice
and is put straight back into the ready queue.
Solution: Virtual Round Robin (VRR)
When an I/O completes, the blocked process is moved
to an auxiliary queue, which gets preference over the main
ready queue.
A process dispatched from the auxiliary queue runs no
longer than the basic time quantum minus the time it has
already run since it was last selected from the ready
queue.
CPU Scheduling
Queuing for VRR
CPU Scheduling
Process Burst Time
P1 53
P2 17
P3 68
P4 24
q = 20
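A hedged RR sketch of this example (all four processes assumed to arrive at t = 0; the function name is made up). With q = 20 it reproduces the usual result: completion times P2 = 37, P4 = 121, P1 = 134, P3 = 162.

```python
from collections import deque

# Round Robin: run the head of the FIFO queue for at most one quantum,
# then either record its finish time or move it to the tail with what remains.
def round_robin(bursts, q):
    queue = deque(enumerate(bursts))        # (pid, remaining burst time)
    finish, t = [0] * len(bursts), 0
    while queue:
        pid, rem = queue.popleft()
        run = min(q, rem)
        t += run
        if rem - run > 0:
            queue.append((pid, rem - run))  # quantum expired: back to the tail
        else:
            finish[pid] = t                 # finished within this slice
    return finish

print(round_robin([53, 17, 68, 24], 20))    # [134, 37, 162, 121] for P1..P4
```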
CPU Scheduling
Shortest Process Next (SPN)
This is a non-preemptive policy in which the
process with the shortest expected processing time
is selected next.
A short process will jump to the head of the queue
past longer jobs.
Overall performance can be improved in terms of
response time.
One difficulty with SPN is the need to know, or at
least estimate, the required processing time of each
process.
Decision mode: non-preemptive; once the CPU is given to
a process, it cannot be preempted until it completes
its CPU burst.
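A minimal non-preemptive SPN sketch (a hedged illustration; the process names, arrival times and bursts below are made up). At each completion it picks, among the processes that have already arrived, the one with the shortest burst:

```python
# SPN: non-preemptive; the chosen process runs its whole burst to completion.
def spn(procs):
    """procs: list of (name, arrival, burst). Returns completion time per name."""
    pending = sorted(procs, key=lambda p: p[1])     # order by arrival time
    finish, t = {}, 0
    while pending:
        ready = [p for p in pending if p[1] <= t] or [pending[0]]  # idle until next arrival
        name, arrival, burst = min(ready, key=lambda p: p[2])      # shortest burst first
        t = max(t, arrival) + burst
        finish[name] = t
        pending.remove((name, arrival, burst))
    return finish

print(spn([("A", 0, 7), ("B", 2, 4), ("C", 4, 1), ("D", 5, 4)]))   # A:7, C:8, B:12, D:16
```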
CPU Scheduling
It picks I/O bound processes first.
Possibility of starvation for longer processes
as long as there is a steady supply of shorter
processes.
If the estimated time for a process is not correct, the
operating system may abort it.
The lack of preemption makes it unsuited to a time-
sharing environment.
A CPU-bound process gets lower priority (as it
should), but a process doing no I/O could still
monopolize the CPU if it is the first one to
enter the system.
CPU Scheduling
Shortest Remaining Time (SRT)
The SRT policy is a pre-emptive version of SPN.
If a new process arrives with a CPU burst length less than
the remaining time of the currently executing process,
the latter is preempted.
Scheduler selects the process that has the shortest
expected remaining processing time.
When a new process joins the ready queue, it may have
a shorter remaining time than the currently running
process.
Unlike RR, no additional timer interrupts are generated,
which reduces overhead.
SRT should give superior turnaround-time performance to
SPN, because a short job is given immediate preference
over a longer running job.
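A hedged SRT sketch (the same made-up processes as in the SPN sketch above). The decision is re-made at every time unit, so a newly arrived short job preempts a longer running one:

```python
# SRT: preemptive SPN; always run the arrived process with the least remaining time.
def srt(procs):
    """procs: list of (name, arrival, burst). Returns completion time per name."""
    arrival = {name: arr for name, arr, _ in procs}
    remaining = {name: burst for name, _, burst in procs}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:                                # CPU idle until the next arrival
            t = min(arrival[n] for n in remaining)
            continue
        n = min(ready, key=lambda x: remaining[x])   # least remaining time runs next
        remaining[n] -= 1                            # run the chosen process for one tick
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    return finish

print(srt([("A", 0, 7), ("B", 2, 4), ("C", 4, 1), ("D", 5, 4)]))   # C:5, B:7, D:11, A:16
```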
CPU Scheduling
Multilevel Queue Scheduling
Ready queue is partitioned into several separate
queues.
The processes are permanently assigned to one
queue, based on some property of the process
(e.g., foreground or background).
Each queue has its own scheduling algorithm
The foreground queue might be scheduled by RR
algorithm.
The background queue is scheduled by an FCFS
algorithm.
Another possibility is to time-slice among the
queues.
CPU Scheduling
Scheduling must be done between the
queues.
Fixed priority scheduling: (i.e. serve all
from foreground then from background).
Possibility of starvation.
Time slice: each queue gets a certain
amount of CPU time which it can schedule
amongst its processes.
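A minimal sketch of fixed-priority scheduling between two queues (a hedged illustration: a foreground queue served RR and a background queue served FCFS only while the foreground queue is empty; names and numbers are made up):

```python
from collections import deque

# Fixed priority between queues: foreground (RR, quantum q) always wins;
# the background queue (FCFS) is served only while the foreground queue is empty.
def multilevel(foreground, background, q=4):
    fg, bg = deque(foreground), deque(background)
    t, order = 0, []
    while fg or bg:
        if fg:
            name, rem = fg.popleft()
            run = min(q, rem)
            if rem - run > 0:
                fg.append((name, rem - run))   # unfinished: back to the foreground tail
        else:
            name, run = bg.popleft()           # background job runs to completion
        t += run
        order.append((name, t))                # record who ran and when it stopped
    return order

print(multilevel([("fg1", 6), ("fg2", 3)], [("bg1", 10)]))
```

With a steady stream of foreground work the background job would never be reached, which is the starvation risk noted above.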
CPU Scheduling
Multilevel Feedback scheduling
A process can move between the various queues.
Aging can be implemented this way.
A multilevel feedback queue scheduler is defined
by the following parameters:
Number of queues
Scheduling algorithms for each queue
Method used to determine when to upgrade a process
Method used to determine when to demote a process
Method used to determine which queue a process will enter
when that process needs service
CPU Scheduling
For example, suppose we have three queues:
Q0 : RR with time quantum 8 milliseconds
Q1 : RR time quantum 16 milliseconds
Q2 : FCFS
Scheduling procedure for the above mentioned
queues will be
A new job enters queue Q0, which is served FCFS. When it
gains the CPU, the job receives 8 milliseconds. If it does not
finish in 8 milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted
and moved to queue Q2.
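A minimal sketch of exactly this three-queue example (assumed simplifications: all jobs arrive at t = 0 and a running job is not preempted by new arrivals; the burst values are made up):

```python
from collections import deque

# Three levels: Q0 gives 8 ms, Q1 gives 16 more ms, Q2 runs FCFS to completion.
def mlfq(bursts, quanta=(8, 16)):
    queues = [deque(enumerate(bursts)), deque(), deque()]
    finish, t = [0] * len(bursts), 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        pid, rem = queues[level].popleft()
        run = min(quanta[level], rem) if level < 2 else rem  # Q2 has no quantum
        t += run
        if rem - run > 0:
            queues[level + 1].append((pid, rem - run))       # demote on quantum expiry
        else:
            finish[pid] = t
    return finish

print(mlfq([30, 6, 20]))   # [56, 14, 50]: the short 6 ms job finishes first
```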
CPU Scheduling
Longer processes may still suffer starvation.
Possible solution: promote a process to a higher
priority after it has waited for some time.