UNIT 2 Process Management
Process
A process is an instance of a program running in a computer.
It is close in meaning to task, a term used in some OS.
In UNIX and some other OS, a process is started when a
program is initiated (either by a user entering a shell
command or by another program).
A program by itself is not a process; a program is a passive
entity, such as a file containing a list of instructions stored on
disk (often called an executable file).
A program becomes a process when an executable file is
loaded into memory and executed.
Process..
When a program is loaded into memory and becomes a
process, it can be divided into four sections:
Process..
Stack: contains temporary data such as function parameters,
return addresses, and local variables.
Heap: memory that is dynamically allocated to the process
during its run time.
Text: the program code; the current activity is represented by the
value of the program counter and the contents of the processor's
registers.
Data: contains the global and static variables.
Program
A program is an executable file containing a set of
instructions written to perform a specific job on our
computer.
E.g. chrome.exe is an executable file containing the
instructions written so that we can view web pages.
A program is a piece of code, which may be a single line or
millions of lines.
Programs are not stored in the primary memory of our computer.
They are stored on a disk, the secondary memory of our computer.
Process Vs Program
BASIS FOR COMPARISON   PROGRAM                              PROCESS
Basic                  A program is a set of instructions.  When a program is executed, it is known as a process.
Process…
Although two processes may be associated with the same
program, they are nevertheless considered two separate
execution sequences.
For instance, several users may be running different copies of
the mail program, or the same user may invoke many copies of
the web browser program.
Each of these is a separate process, and although the text
sections are equivalent, the data, heap and stack section may
vary.
Process Creation:
There are four principal events that cause processes
to be created:
1. System initialization.
2. Execution of a process creation system call by a
running process.
3. A user request to create a new process.
4. Initiation of a batch job.
Process Creation..
A parent process creates child processes, which, in turn,
create other processes, forming a tree of processes.
Generally, a process is identified and managed via a process
identifier (pid).
When an operating system is booted, often several
processes are created.
Some of these are foreground processes, that is,
processes that interact with (human) users and perform
work for them.
Process Creation..
Daemons
(A daemon is a computer program that runs in the background,
rather than under the direct control of a user, performing some task.)
Daemons are processes that are not associated with particular users,
but instead have some specific function.
•They stay in the background to handle some activity such as
serving web pages, printing, and so on.
Process Creation..
In UNIX there is only one system call to create a new process:
fork().
This call creates an exact clone of the calling process.
After the fork, the two processes, the parent and the child, have
the same memory image, the same environment strings, and the
same open files.
Usually, the child process then executes execve or a similar
system call to change its memory image and run a new program.
fork() returns a negative value if the child creation is unsuccessful.
fork() returns the value zero in the newly created child process.
fork() returns a positive integer (the process id of the child) to the
parent process.
Process Creation
#include <stdio.h>
#include <unistd.h>

int main() {
    int pid;
    pid = fork();                /* fork another process */
    if (pid < 0) {               /* error occurred */
        printf("child process creation failed\n");
    }
    else if (pid == 0) {         /* child process */
        printf("child process: ");
        printf("%d\n", pid);
    }
    else {                       /* parent process */
        printf("parent process: ");
        printf("%d\n", pid);
    }
}
Process Creation..
fork() system call implementation for process creation
Process Creation..
What will be the output of this?

main()
{
    fork();
    fork();
    printf("hello");
}
Process Creation..
And of this?

main()
{
    fork();
    fork();
    fork();
    printf("hello");
}
• If a program consists of n fork() calls, it will create 2^n - 1
child processes.
Role of PCB
The role of the process control block (PCB) in
process management is that it can be accessed or modified by most
OS utilities, including those involved with memory,
scheduling, and input/output resource access.
It can be said that the set of process control blocks gives
the information of the current state of the operating system.
Data structuring for processes is often done in terms of
process control blocks.
For example, pointers to other process control blocks inside
any process control block allow the creation of queues
of processes in various scheduling states.
Role of PCB..
The following are the various pieces of information
contained in the process control block:
Naming the process
State of the process
Resources allocated to the process
Memory allocated to the process
Scheduling information
Input / output devices associated with process
Components of PCB
The following are the various components associated
with the process control block (PCB):
1. Process ID:
2. Process State
3. Program counter
4. Register Information
5. Scheduling information
6. Memory related information
7. Accounting information
8. Status information related to input/output
Components of PCB..
1. Process ID:
In a computer system there are various processes running
simultaneously, and each process has a unique ID. This ID
helps the system in scheduling the processes, and it is held
in the process control block.
In other words, it is an identification number that uniquely
identifies a process in the computer system.
Components of PCB..
2. Process state:
The process state of any process can be new, ready,
running, waiting (blocked), suspended, or terminated.
For more details regarding process states, refer to the
process-management discussion of an operating system.
The process control block records the process state of
any process.
In other words, the process control block reflects the state of the
process.
Components of PCB..
3. Program counter:
Program counter is used to point to the address of the next
instruction to be executed in any process. This is also
managed by the process control block.
4. Register Information:
This information comprises the various registers, such as
index registers and stack pointers, that are associated with the
process. This information is also managed by the process control
block.
Components of PCB..
5. Scheduling information:
Scheduling information is used to set the priority of different
processes. This very useful information is held by the
process control block. In a computer system many
processes run simultaneously, and each process has its
priority; higher-priority processes are scheduled ahead of
lower-priority ones. Scheduling information is very
useful in managing any computer system.
Components of PCB..
6. Memory related information:
This section of the process control block comprises the page and
segment tables. It also stores the data contained in the base and limit
registers.
7. Accounting information:
This section of the process control block stores details related to
central processing unit (CPU) utilization and execution time of a
process.
8. Status information related to input / output:
This section of the process control block stores details pertaining
to resource utilization and files opened during process
execution.
Process Table
The operating system maintains a table called the process table,
which stores the process control blocks of all the
processes.
The process table is a data structure maintained by the operating
system to facilitate context switching, scheduling, and other
activities discussed later.
Each entry in the table, often called a context block, contains
information about a process such as the process name and state
(discussed above), priority (discussed above), registers, and a
semaphore it may be waiting on. The exact contents of a context
block depend on the operating system. For instance, if the OS
supports paging, then the context block contains a pointer to the
page table.
Thread
A thread is the smallest unit of processing that can be
performed in an OS.
In most modern operating systems, a thread exists within a
process - that is, a single process may contain multiple
threads.
A thread is a basic unit of CPU utilization; it comprises a
thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its
code section, data section, and other operating system
resources, such as open files and signals.
Threads..
A traditional (or heavyweight) process has a single thread of
control.
If a process has multiple threads of control, it can perform
more than one task at a time.
The figure below illustrates the difference between a single-threaded
process and a multithreaded process.
Threads..
Properties of a Thread:
A single system call can create more than one thread
(lightweight process).
Threads share data and information.
Threads share the instruction, global, and heap regions, but
each thread has its own stack and registers.
Thread management consumes no or fewer system calls, as
communication between threads can be achieved using
shared memory.
The isolation property of the process increases its overhead
in terms of resource consumption.
Types of Thread
There are two types of threads:
User Threads
Kernel Threads
Multithreading
Many software packages that run on modern desktop PCs are
multithreaded.
Such an application is typically implemented as a separate process with
several threads of control.
A web browser might have one thread to display images or
text while another thread retrieves data from the network.
A word processor may have a thread for displaying graphics,
another thread for reading the characters entered by the user
through the keyboard, and a third thread for performing
spelling and grammar checking in the background.
Why Multithreading
In certain situations, a single application may be required to
perform several similar tasks; for example, a web server accepts
client requests for web pages, images, sound, graphics, etc.
A busy web server may have several clients concurrently
accessing it.
If the web server ran as a traditional single-threaded
process, it would be able to service only one client at a time,
and the amount of time a client might have to wait for its
request to be serviced could be enormous.
One solution to this problem is the creation of a new process per
request.
Why Multithreading …
When the server receives a new request, it creates a separate
process to service that request. But this method is heavyweight.
In fact, this process-creation method was common before threads
became popular.
Process creation is time consuming and resource intensive.
It is generally more efficient for one process that contains multiple
threads to serve the same purpose.
This approach multithreads the web server process. The
server creates a separate thread that listens for client
requests.
When a request is made by a client, rather than creating another
process, the server creates a separate thread to service the request.
Benefits of Multi-threading:
Responsiveness:
A multithreaded interactive application continues to run even if
part of it is blocked or performing a lengthy operation,
thereby increasing responsiveness to the user.
Resource Sharing:
By default, threads share the memory and the resources of
the process to which they belong.
It allows an application to have several different threads of
activity within the same address space.
These threads running in the same address space do not need
a context switch.
Benefits of Multi-threading…
Economy:
Allocating memory and resources for each process creation is
costly.
Since threads share the resources of the process to which they
belong, it is more economical to create and context-switch
threads.
Context-switching time is shorter, and there is less overhead than
running several processes doing the same task.
Utilization of multiprocessor architecture:
The benefits of multithreading can be greatly increased in a
multiprocessor architecture, where threads may run in
parallel on different processors.
Multithreading on a multi-CPU machine increases concurrency.
Multithreading Model
User threads must be mapped to kernel threads by
one of the following strategies:
Many to One Model
One to One Model
Many to Many Model
Shared Memory:
Here a region of memory that is shared by cooperating processes is
established.
Processes can exchange information by reading and writing data
to the shared region.
Shared memory allows maximum speed and convenience of
communication, as it can be done at the speed of memory within
the computer.
System calls are required only to establish the shared memory region.
Once shared memory is established, no assistance from the kernel
is required; all accesses are treated as routine memory accesses.
Message Passing:
Communication takes place by means of messages exchanged
between the cooperating processes.
Message passing is useful for exchanging smaller amounts
of data.
It is easier to implement than shared memory, but
slower, because message-passing systems are typically
implemented using system calls, which involve the more
time-consuming task of kernel intervention.
Race Condition
The situation where two or more processes are reading or writing
some shared data, but not in the proper sequence, is called a race
condition.
The final result depends on who runs precisely when.
Race Condition:
Imagine that our spooler directory has a large number of slots,
numbered 0, 1, 2, ..., each one capable of holding a file name.
Also imagine that there are two shared variables,
out: which points to the next file to be printed
in: which points to the next free slot in the directory.
Race Condition:
At a certain instant, slots 0 to 3 are empty (the files have already
been printed) and slots 4 to 6 are full (with the names of files to
be printed).
More or less simultaneously, processes A and B decide they want
to queue a file for printing as shown in the fig.
Process A reads in and stores the value, 7, in a local variable called
next_free_slot.
Race Condition:
Just then a clock interrupt occurs and the CPU decides that
process A has run long enough, so it switches to process B.
Process B also reads in, and also gets a 7, so it stores the
name of its file in slot 7 and updates in to be an 8. Then it
goes off and does other things.
Race Condition
Eventually, process A runs again, starting from the place it left off last time. It
looks at next_free_slot, finds a 7 there, and writes its file name in slot
7, erasing the name that process B just put there.
Then it computes next_free_slot + 1, which is 8, and sets in to 8.
The spooler directory is now internally consistent, so the printer daemon will
not notice anything wrong, but process B will never receive any output.
Strict Alternation
turn = 0;

(a) Process 0:

while (1) {                     /* repeat forever */
    while (turn != 0) ;         /* loop (busy wait) */
    critical_region();
    turn = 1;
    noncritical_region();
}

(b) Process 1:

while (1) {                     /* repeat forever */
    while (turn != 1) ;         /* loop (busy wait) */
    critical_region();
    turn = 0;
    noncritical_region();
}

A proposed solution to the critical region problem.
In both cases, be sure to note the semicolons terminating the inner
while statements; this is what makes a process wait.
Drawbacks:
Taking turns is not a good idea when one of the processes is much
slower than the other.
This solution requires that the two processes strictly alternate in
entering their critical regions.
Example:
When process 0 finishes its critical region, it sets turn to 1 to allow
process 1 to enter its critical region.
Suppose process 1 finishes its critical region quickly, so both
processes are in their noncritical regions with turn set to 0.
Process 0 executes its whole loop quickly, exiting its critical
region and setting turn to 1.
At this point turn is 1 and both processes are executing in their
noncritical regions.
Now if process 0 finishes its noncritical region and tries to enter
again, it is blocked: turn is 1, and process 1 is still busy with its
noncritical region. So a process can be blocked by a process that is
not in its critical region.
Second attempt(algorithm 2)
• In this algorithm the variable turn is replaced by a flag array.

flag[0] = flag[1] = F;      // Boolean values, initially false

(a) Process 0:

while (1) {                 /* repeat forever */
    flag[0] = T;            /* interested to enter c.s. */
    while (flag[1]) ;       /* loop (busy wait) */
    critical_region();
    flag[0] = F;
    noncritical_region();
}

(b) Process 1:

while (1) {                 /* repeat forever */
    flag[1] = T;            /* interested to enter c.s. */
    while (flag[0]) ;       /* loop (busy wait) */
    critical_region();
    flag[1] = F;
    noncritical_region();
}
Peterson's Solution:
By combining the idea of taking turns with the idea of lock
variables and warning variables:

while (1) {                              /* repeat forever */
    flag[0] = T;                         /* interested to enter c.s. */
    turn = 1;
    while (turn == 1 && flag[1] == T) ;  /* loop to give the other a chance */
    critical_region();
    flag[0] = F;
    noncritical_region();
}

(a) Process 0.
Peterson's Solution:
while (1) {                              /* repeat forever */
    flag[1] = T;                         /* interested to enter c.s. */
    turn = 0;
    while (turn == 0 && flag[0] == T) ;  /* loop to give the other a chance */
    critical_region();
    flag[1] = F;
    noncritical_region();
}

(b) Process 1.
Atomic operations:
When one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore
value.
In addition, in the case of the P(S) operation, the testing of the
integer value of S (S <= 0) and its possible modification (S = S - 1)
must also be executed without interruption.
Modifications to the integer value of the semaphore in the
wait {P(S)} and signal {V(S)} operations must be executed
indivisibly (only one process can modify the same semaphore
value at a time).
Semaphore operations:
P, or Down, or Wait: P stands for the Dutch word proberen ("to test").
V, or Up, or Signal: V stands for the Dutch word verhogen
("to increase").
wait(sem)
decrement the semaphore value. if negative, suspend the
process and place in queue. (Also referred to as P(), down in
literature.)
signal(sem)
increment the semaphore value, allow the first process in the
queue to continue. (Also referred to as V(), up in literature.)
Semaphore S;    // initialized to 1

do {
    wait(S);
    // critical section
    signal(S);
} while (TRUE);
void consumer(void)
{
    int item;
    while (TRUE) {               /* infinite loop */
        down(&full);             /* decrement full count */
        down(&mutex);            /* enter critical region */
        item = remove_item();    /* take item from buffer */
        up(&mutex);              /* leave critical region */
        up(&empty);              /* increment count of empty slots */
        consume_item(item);      /* do something with the item */
    }
}
Advantages of semaphores
Processes do not busy wait while waiting for resources.
While waiting, they are in a "suspended'' state, allowing the
CPU to perform other work.
Works on (shared memory) multiprocessor systems.
User controls synchronization.
Disadvantage of semaphores
They can only be invoked by processes, not interrupt service
routines, because interrupt routines cannot block.
The user controls synchronization, and can get it wrong.
Monitors:
In concurrent programming, a monitor is an object or
module intended to be used safely by more than one thread.
The defining characteristic of a monitor is that its methods are
executed with mutual exclusion.
That is, at each point in time, at most one thread may be executing
any of its methods.
Monitors also provide a mechanism for threads to temporarily
give up exclusive access: a thread waits for some condition to be
met before regaining exclusive access and resuming its task.
Monitors also have a mechanism for signaling other threads that
such conditions have been met.
Message Passing
Interprocess communication uses two primitives, send and
receive.
They can easily be put into library procedures, such as
send(destination, &message);
receive(source, &message);
The former call sends a message to a given destination and
the latter one receives a message from a given source.
If no message is available, the receiver can block until one
arrives. Alternatively, it can return immediately with an
error code.
There are also design issues that are important when the
sender and receiver are on the same machine. One of these is
performance. Copying messages from one process to another
is always slower than doing a semaphore operation.
Solution :
The solution to this problem uses three semaphores.
The first, Customers, counts the number of customers present in
the waiting room (the customer in the barber chair is not included
because he is not waiting).
The second, Barber (0 or 1), tells whether the barber is idle or
working, and the third, a mutex (called Seats in the code below),
provides the mutual exclusion required for the processes to execute.
The solution also keeps a count of the customers waiting in the
waiting room; if the number of customers equals the number of
chairs in the waiting room, the arriving customer leaves the
barbershop.
Customer {
    while (true) {
        down(Seats);         /* protect FreeSeats so only one customer
                                updates it at a time */
        if (FreeSeats > 0) {
            FreeSeats--;     /* sit down */
            up(Customers);   /* notify the barber */
            up(Seats);       /* release the lock */
            down(Barber);    /* wait if the barber is busy;
                                then the customer has the hair cut */
        } else {
            up(Seats);       /* release the lock; customer leaves */
        }
    }
}
CPU Scheduling
Introduction:
In a multiprogramming system, multiple processes frequently compete
for the CPU at the same time.
When two or more processes are simultaneously in the ready state, a
choice has to be made as to which process runs next.
This part of the OS is called the scheduler, and the algorithm it uses
is called the scheduling algorithm.
Process execution consists of cycles of CPU execution and I/O wait.
Processes alternate between these two states.
Process execution begins with a CPU burst that is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst,
and so on.
Eventually, the final CPU burst ends with a system request to terminate
execution.
CPU Scheduling
The long-term scheduler:
selects processes from this process pool and loads selected processes into memory for
execution.
The short-term scheduler:
selects the process to get the processor from among the processes which are already in
memory.
The short-term scheduler executes frequently (mostly at least once every 10
milliseconds).
So it has to be very fast in order to achieve good processor utilization.
Medium term scheduler:
It can sometimes be good to reduce the degree of multiprogramming by removing
processes from memory and storing them on disk.
These processes can then be reintroduced into memory by the medium-term scheduler.
This operation is also known as swapping. Swapping may be necessary to free memory.
CPU Scheduling
Scheduling Criteria:
Many criteria have been suggested for comparison of
CPU scheduling algorithms.
CPU utilization:
We have to keep the CPU as busy as possible. Utilization may range
from 0 to 100%. In a real system it should range from about 40%
for a lightly loaded system to 90% for a heavily loaded one.
Throughput:
It is the measure of work in terms of the number of processes
completed per unit time. E.g., for long processes this rate may
be 1 process per hour; for short transactions, throughput may
be 10 processes per second.
Scheduling Criteria:
Turnaround Time:
It is the sum of the time periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O.
The interval from the time of submission of a process to the time of
completion is the turnaround time.
Turnaround time = time of completion of job - time of submission
of job (i.e., waiting time + service/burst time).
Waiting time:
It is the sum of the periods spent waiting in the ready queue.
Response time:
In an interactive system, turnaround time is not the best criterion.
Response time is the amount of time it takes to start responding, not the
time taken to output the complete response.
Types of Scheduling:
1. Preemptive Scheduling
A preemptive scheduling algorithm picks a process and lets it
run for a maximum of some fixed time.
If the process is still running at the end of the time interval, it is
suspended and the scheduler picks another process to run (if
one is available).
Doing preemptive scheduling requires having a clock
interrupt occur at the end of the time interval to give control of
the CPU back to the scheduler.
Types of Scheduling:
2. Non preemptive Scheduling
A nonpreemptive scheduling algorithm picks a process to
run and then just lets it run until it blocks (either on I/O or
waiting for another process) or until it voluntarily releases
the CPU.
Even if it runs for hours, it will not be forcibly suspended.
PARAMETER   PREEMPTIVE SCHEDULING                             NON-PREEMPTIVE SCHEDULING
Overhead    It has the overhead of scheduling the processes.  It does not have this overhead.
1. Preemptive Scheduling:
Algorithms based on preemptive scheduling include Shortest
Remaining Time First (SRTF), Priority (preemptive version), etc.
2. Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates, or
a process switches from the running to the waiting state.
In this scheduling, once the resources (CPU cycles) are allocated to
a process, the process holds the CPU till it terminates or reaches a
waiting state. Non-preemptive scheduling does not interrupt a
process running on the CPU in the middle of its execution.
Instead, it waits till the process completes its CPU burst time and
then allocates the CPU to another process.
Algorithms based on non-preemptive scheduling include First Come
First Served (FCFS), Shortest Job First (SJF, non-preemptive
version), etc.
Dispatcher
A dispatcher is a special program which comes into play after
the scheduler.
When the scheduler completes its job of selecting a process,
it is the dispatcher which takes that process to the desired
state/queue.
The dispatcher is the module that gives a process control
over the CPU after it has been selected by the short-term
scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart
that program
Scheduler
Schedulers are special system software which
handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to
decide which process to run. There are three types of
Scheduler:
1. Long term (job) scheduler
2. Medium term scheduler
3. Short term (CPU) scheduler
Scheduler
1. Long term (job) scheduler –
Due to the smaller size of main memory, initially all programs
are stored in secondary memory.
When they are loaded into main memory, they are
called processes.
It is the decision of the long-term scheduler how many
processes will stay in the ready queue.
Hence, in simple words, the long-term scheduler decides the
degree of multiprogramming of the system.
Scheduler
2. Medium term scheduler –
Most often, a running process needs an I/O operation, which
does not require the CPU.
Hence, during the execution of a process, when an I/O operation
is required, the operating system sends that process from the
running queue to the blocked queue.
When a process completes its I/O operation, it should
again be shifted to the ready queue.
All these decisions are taken by the medium-term scheduler.
Medium-term scheduling is a part of swapping.
Scheduler
3. Short term (CPU) scheduler –
When there are many processes in main memory,
initially all are present in the ready queue.
Among all of the processes, a single process is
selected for execution.
This decision is handled by the short-term scheduler.
The figure below may make this clearer.
Scheduling Criteria
There are several different criteria to consider when trying to select the
"best" scheduling algorithm for a particular situation and environment,
including:
1. CPU utilization - Ideally the CPU would be busy 100% of the time, so as
to waste 0 CPU cycles. On a real system CPU usage should range from 40%
( lightly loaded ) to 90% ( heavily loaded. )
2. Throughput - Number of processes completed per unit time. May range
from 10 / second to 1 / hour depending on the specific processes.
3. Turnaround time - Time required for a particular process to complete,
from submission time to completion. ( Wall clock time. )
4. Waiting time - How much time processes spend in the ready queue
waiting their turn to get on the CPU.
( Load average - The average number of processes sitting in the ready queue
waiting their turn to get into the CPU. Reported in 1-minute, 5-minute, and 15-
minute averages by "uptime" and "who". )
5. Response time - The time taken in an interactive program from the
issuance of a command to the commencement of a response to that command.
Scheduling Criteria
In general one wants to optimize the average value of a
criterion (maximize CPU utilization and throughput, and
minimize all the others). However, sometimes one wants to
do something different, such as minimizing the maximum
response time.
Sometimes it is more desirable to minimize the variance of a
criterion than its actual value, i.e., users are more accepting of
a consistent, predictable system than an inconsistent one, even
if it is a little bit slower.
Scheduling
Batch system scheduling
First come first served
Shortest job first
Shortest remaining time next
5. Priority Scheduling:
A priority is associated with each process, and the CPU is
allocated to the process with the highest priority.
Equal priority processes are scheduled in the FCFS order.
Assigning priority:
1. To prevent high-priority processes from running indefinitely, the
scheduler may decrease the priority of the currently running
process at each clock interrupt. If this causes its priority to drop
below that of the next highest process, a process switch occurs.
2. Each process may be assigned a maximum time quantum that it is
allowed to run. When this quantum is used up, the next highest
priority process is given a chance to run.
5. Priority Scheduling:
It is often convenient to group processes into priority classes and
use priority scheduling among the classes but round-robin
scheduling within each class.
5. Priority Scheduling:
Problems in Priority Scheduling:
Starvation:
Low priority process may never execute.
Solution: Aging: As time progress increase the priority of
Process.
5. Priority Scheduling:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Finished Unit 2