Os Chapter 2

Chapter 2 discusses the concept of processes, including their states and descriptions, as well as the Process Control Block (PCB), which contains essential information about each process. It covers various scheduling algorithms for uniprocessor systems, such as FCFS, SJF, and Round Robin, and introduces the concept of threads and multithreading. Additionally, it explains process creation, termination, and the criteria for effective CPU scheduling.

Chapter 2

Process and Process Scheduling


2.1 Concept of a Process, Process States, Process Description, Process Control Block.
2.2 Uniprocessor Scheduling-Types: Preemptive and Non-preemptive scheduling
algorithms (FCFS, SJF, SRTN, Priority, RR)
2.3 Threads: Definition and Types, Concept of Multithreading

Prepared by
Pranali Dhamne
Process

● A process is a program in execution; it forms the basis of all
computation.
● A process is not the same as the program code; it is much more than
that.
● A process is an 'active' entity, as opposed to the program, which
is considered a 'passive' entity.
● Resources held by the process include the hardware state, memory,
CPU time, etc.
Process in Memory

● Text Section: Contains the program code. The current activity of
the process is represented by the value of the Program Counter.
● Stack: Contains temporary data, such as function parameters,
return addresses, and local variables.
● Data Section: Contains the global variables.
● Heap Section: Memory that is dynamically allocated to the process
during its run time.
PCB [Process Control Block]
● It contains information about the process,
i.e. registers, state, priority, etc.
● The process table is an array of PCBs; logically it contains a PCB
for each of the current processes in the system.
● It is also known as the task control block. It is a data structure
which contains the following:

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, or something else.

2. Process privileges: Required to allow or disallow access to system resources.

3. Process ID: Unique identification for each process in the operating system.

4. Pointer: A pointer to the parent process.

5. Program Counter: A pointer to the address of the next instruction to be executed for this process.

6. CPU registers: The various CPU registers whose contents must be saved and restored so the process can resume execution in the running state.

7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
PCB [Process Control Block] continued
● Every process has its own process control block (PCB), i.e. each process has a
unique PCB.

8. Memory management information: Includes the page table, memory limits, and
segment table, depending on the memory scheme used by the operating system.

9. Accounting information: Includes the amount of CPU time used for process
execution, time limits, execution ID, etc.

10. I/O status information: Includes the list of I/O devices allocated to the process.
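The PCB fields listed above can be pictured as a record type. The following is a minimal sketch in Python, not a real OS structure; the field names and defaults are illustrative assumptions based on the list above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative Process Control Block (fields follow the slide's list)."""
    pid: int                                          # 3. unique process ID
    state: str = "new"                                # 1. process state
    priority: int = 0                                 # 7. scheduling information
    program_counter: int = 0                          # 5. next instruction address
    registers: dict = field(default_factory=dict)     # 6. saved CPU registers
    parent_pid: Optional[int] = None                  # 4. pointer to parent process
    open_files: list = field(default_factory=list)    # 10. I/O status information
    cpu_time_used: int = 0                            # 9. accounting information

# A process table is then simply a collection of PCBs, one per process.
pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"   # state transition recorded in the PCB
```

When the OS performs a context switch, it is exactly these fields (registers, program counter, state) that get saved into and restored from the PCB.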
Process Control Block
• Process identification
– Identifiers
• Numeric identifiers that may be stored with the process
control block include
• Identifier of this process
• Identifier of the process that created this process (parent
process)
• User identifier
Process Control Block
• Processor State Information
– User-Visible Registers
• A user-visible register is one that may be referenced by
means of the machine language that the processor
executes.
• Typically, there are from 8 to 32 of these registers,
although some RISC implementations have over 100.
Process Control Block
• Processor State Information
– Control and Status Registers
These are a variety of processor registers that are employed to
control the operation of the processor. These include:
• Program counter: Contains the address of the next instruction to
be fetched
• Condition codes: Result of the most recent arithmetic or logical
operation (e.g., sign, zero, carry, equal, overflow)
• Status information: Includes interrupt enabled/disabled flags,
execution mode
Process Control Block
• Processor State Information
– Stack Pointers
• Each process has one or more last-in-first-out
(LIFO) system stacks associated with it. A stack
is used to store parameters and calling
addresses for procedure and system calls. The
stack pointer points to the top of the stack.
Process Control Block
• Process Control Information
– Scheduling and State Information
This is information that is needed by the operating system to perform its
scheduling function. Typical items of information:
•Process state: defines the readiness of the process to be scheduled for
execution (e.g., running, ready, waiting, halted).
•Priority: One or more fields may be used to describe the scheduling priority of the
process. In some systems, several values are required (e.g., default, current,
highest-allowable).
•Scheduling-related information: This will depend on the scheduling algorithm
used. Examples are the amount of time the process has been waiting and the
amount of time the process executed the last time it was running.
Process Control Block
• Process Control Information
– Data Structuring
• A process may be linked to other processes in a queue, ring, or
some other structure.
• For example, all processes in a waiting state for a particular
priority level may be linked in a queue.
• A process may exhibit a parent-child (creator-created)
relationship with another process.
• The process control block may contain pointers to other
processes to support these structures.
Process Control Block
• Process Control Information
– Interprocess Communication
• Various flags, signals, and messages may be associated with
communication between two independent processes. Some or all
of this information may be maintained in the process control block.
– Process Privileges
• Processes are granted privileges in terms of the memory that may
be accessed and the types of instructions that may be executed. In
addition, privileges may apply to the use of system utilities and
services.
Process Control Block
• Process Control Information
– Memory Management
• This section may include pointers to segment
and/or page tables that describe the virtual
memory assigned to this process.
– Resource Ownership and Utilization
• Resources controlled by the process may be
indicated, such as opened files. A history of
utilization of the processor or other resources
may also be included; this information may be
needed by the scheduler.
Difference between Process and Program

Process:
● A process is an instance of a computer program that is being executed.
● A process has a shorter lifetime.
● A process requires resources such as memory, CPU, and input-output devices.
● A process is a dynamic instance of code and data.
● Basically, a process is the running instance of the code.

Program:
● A program is a collection of instructions that performs a specific task when executed by the computer.
● A program has a longer lifetime.
● A program is stored on the hard disk and does not require any resources.
● A program has static code and static data.
● On the other hand, the program is the executable code.
Two-State Process Model
• Process may be in one of two states
– Running
– Not-running
Not-Running Process in a Queue
Process Scheduling Components

LONG TERM SCHEDULER

● Runs seldom (when a job comes into memory).
● Controls the degree of multiprogramming.
● Tries to balance the arrival and departure rate through an appropriate job mix.
SHORT TERM SCHEDULER
Contains three functions:
● Code to remove a process from the processor at the end of its run.
a) The process may go to the ready queue or to a wait state.
● Code to put a process on the ready queue.
a) The process must be ready to run.
b) The process is placed on the queue based on priority.
SHORT TERM SCHEDULER (cont.)
● Code to take a process off the ready queue and run that process (also called the dispatcher).
a) Always takes the first process on the queue (no intelligence required).
b) Places the process on the processor.
This code runs frequently and so should be as short as possible.
MEDIUM TERM SCHEDULER
•Mixture of CPU and memory resource management.

•Swap out/in jobs to improve mix and to get memory.

•Controls change of priority.
Process Creation
• Submission of a batch job
• User logs on
• Created to provide a service such as printing
• Process creates another process
• CPU and I/O Bound Processes: If a process is intensive in terms of CPU
operations, it is called a CPU-bound process. Similarly, if a process is
intensive in terms of I/O operations, it is called an I/O-bound process.
• I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
• CPU-bound process – spends more time doing computations; few very
long CPU bursts
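One of the creation events above is a process creating another process. A minimal sketch in Python, using `subprocess` as a portable stand-in for the underlying fork/exec mechanism; the child command here is an assumption chosen only for illustration.

```python
import subprocess
import sys

# The parent process spawns a child process and waits for it to terminate.
# The child here just prints a message; any program could be executed.
child = subprocess.run(
    [sys.executable, "-c", "print('child running')"],
    capture_output=True, text=True,
)

exit_code = child.returncode        # 0 indicates normal completion
output = child.stdout.strip()       # whatever the child wrote to stdout
```

On UNIX this pattern corresponds to the fork() (create the child) and exec() (load the child's program) system calls, with the parent calling wait() for the child's termination.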
Process Termination

• Batch job issues Halt instruction

• User logs off

• Quit an application

• Error and fault conditions


Reasons for Process Termination
• Normal completion
• Time limit exceeded
• Memory unavailable
• Bounds violation
• Protection error
– example write to read-only file
• Arithmetic error
• Time overrun
– process waited longer than a specified maximum for an event
Reasons for Process Termination
• I/O failure
• Invalid instruction
– happens when try to execute data
• Privileged instruction
• Data misuse
• Operating system intervention
– such as when deadlock occurs
• Parent terminates so child processes terminate
• Parent request
Processes
• Running
– Running in processor (Processor/CPU/any resource Allocated )
• Not-running
– ready to execute
• Blocked (process is in main memory)
– waiting for I/O
• Suspended (process is in secondary memory)
– Suspended for various reasons
• whether a process is waiting on an event (blocked or not)
• whether a process has been swapped out of main memory (suspended or not)
• Dispatcher cannot just select the process that has been in the queue the longest
because it may be blocked
A Five-State Model
• New: A program which is about to be picked up by the OS into main memory is
called a new process.
• Ready: Whenever a process is created, it enters the ready state (the
ready queue), in which it waits for the CPU to be assigned.
• Running: One of the processes from the ready state (the front of the ready queue)
is chosen by the OS, depending upon the scheduling algorithm.
• Blocked/Waiting: From the running state, a process can make the transition to the
blocked or wait state, depending upon the scheduling algorithm or the intrinsic
behavior of the process.
• Exit: When a process finishes its execution, it enters the termination state. All the
context of the process (the Process Control Block) is deleted and the process is
terminated by the operating system.
Process State
Context Switching
● When the CPU switches to another process, the system must
save the state of the old process and load the saved state of
the new process via a context switch.
● The context of a process is represented in the PCB.
● Context-switch time is overhead; the system does no useful
work while switching.
Role of various Queues in Process
UNIX Process States
Process Management
By CPU Scheduling

● Arrival Time: Time at which the process arrives in the ready queue.
● Completion Time: Time at which process completes its execution.
● Burst Time: Time required by a process for CPU execution.
● Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

● Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turnaround Time – Burst Time
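The two formulas above can be captured directly; the process values in the example are illustrative assumptions, not taken from a specific slide.

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(tat, burst):
    # Waiting Time = Turnaround Time - Burst Time
    return tat - burst

# Example: a process arrives at 2 ms, needs 5 ms of CPU, finishes at 12 ms.
tat = turnaround_time(12, 2)   # time spent in the system overall
wt = waiting_time(tat, 5)      # time spent waiting in the ready queue
```

So this process spent 10 ms in the system, of which 5 ms was waiting in the ready queue.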


Scheduling Criteria
● CPU utilization – keep the CPU as busy as possible
● Throughput – # of processes that complete their execution per time
unit
● Turnaround time – amount of time to execute a particular process
● Waiting time – amount of time a process has been waiting in the
ready queue
● Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
● Max CPU utilization
● Max throughput
● Min turnaround time
● CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as
possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it
typically varies from 40 to 90 percent depending on the system load.
● Throughput: The number of processes completed per unit time is called throughput. Throughput
may vary depending on the length or duration of the processes.
● Turnaround Time: For a particular process, an important criterion is how long it takes to execute
that process. The time elapsed from the submission of a process to its completion is known as the
turnaround time. Turnaround time is the sum of the time spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and waiting for I/O.
● Waiting Time: The scheduling algorithm does not affect the time required to complete a process
once it has started executing. It only affects the waiting time of the process, i.e. the time spent
waiting in the ready queue.
● Response Time: In an interactive system, turnaround time is not the best criterion. A process may
produce some output early and continue computing new results while previous results are shown
to the user. Therefore another measure is the time from the submission of a request until the
first response is produced; this is called response time.
Alternating Sequence of CPU and I/O Bursts

CPU Scheduler

● Selects from among the processes in memory


that are ready to execute, and allocates the
CPU to one of them
● CPU scheduling decisions may take place when
a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
● Scheduling under 1 and 4 is nonpreemptive
● All other scheduling is preemptive
● Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
Algorithms:

1. FCFS
2. SJF /SJN
3. SRTN
4. Priority
5. Round Robin
6. Multi level queue
First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
● Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30

● Waiting time for P1 = 0; P2 = 24; P3 = 27


● Average waiting time: (0 + 24 + 27)/3 = 17
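The example above can be checked with a short simulation; this is a sketch of FCFS for processes that all arrive at time 0, using the burst times from the slide.

```python
def fcfs(bursts):
    """Return per-process waiting times under FCFS (all arrive at t=0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits until all earlier ones finish
        clock += burst        # then occupies the CPU for its full burst
    return waits

waits = fcfs([24, 3, 3])               # P1, P2, P3 from the example
avg_wait = sum(waits) / len(waits)     # (0 + 24 + 27) / 3 = 17
```

Running the long process P1 first forces P2 and P3 to wait 24 and 27 ms, which is the convoy effect discussed on the next slide.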
FCFS Scheduling Example

Process   Arrival Time (ms)   Burst Time (ms)
P0        0                   5
P1        1                   3
P2        2                   8
P3        3                   6

Process Wait Time = Service Time - Arrival Time
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
First-Come, First-Served (FCFS) Scheduling

● Convoy effect as all the other processes wait for the one big
process to get off the CPU. This effect results in lower CPU and
device utilization than might be possible if the shorter processes
were allowed to go first.

Advantages of FCFS:

● Easy to implement
● First come, first serve method

Disadvantages of FCFS:

● FCFS suffers from Convoy effect.


● The average waiting time is much higher than the other algorithms.
Shortest Job Next/First (SJN/SJF)

➔ This is also known as shortest job first, or SJF.
➔ In its basic form this is a non-preemptive scheduling algorithm; the
preemptive variant is SRTN, covered below.
➔ Best approach to minimize waiting time.
➔ Easy to implement in batch systems where the required CPU time is known in advance.
➔ Impossible to implement in interactive systems where the required CPU time is not known.
➔ The processor should know in advance how much time the process will take.

Advantages

○ SJF has the least average waiting time among the CPU scheduling algorithms.
○ SJF is often used for long-term scheduling, where run-time estimates are available.

Disadvantages

○ Starvation is one of the negative traits the Shortest Job First scheduling algorithm exhibits.
○ It is often difficult to forecast how long the next CPU request will take.
● Shortest Job Next/First (SJN/SJF)
● If arrival times are not given, assume all processes arrived at the same time, i.e. 0 ms.
Example 2
Calculate ATT and AWT using SJF scheduling.

PID Arrival Time Burst Time

1 1 7

2 3 3

3 6 2

4 7 10

5 9 8
PID   Turn Around Time   Waiting Time
1     7                  0
2     10                 7
3     4                  2
4     24                 14
5     12                 4

AWT = (0 + 7 + 2 + 14 + 4) / 5 = 5.4 ms
ATT = (7 + 10 + 4 + 24 + 12) / 5 = 11.4 ms
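Example 2 can be reproduced with a short simulation. This is a sketch of non-preemptive SJF: at each decision point the ready process with the smallest burst time is run to completion.

```python
def sjf(procs):
    """Non-preemptive SJF. procs: list of (pid, arrival, burst) tuples.
    Returns {pid: (turnaround, waiting)}."""
    pending = sorted(procs, key=lambda p: p[1])   # sort by arrival time
    clock, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = pending[0][1]                 # CPU idles until next arrival
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        pending.remove((pid, arrival, burst))
        clock += burst                            # run to completion
        tat = clock - arrival
        done[pid] = (tat, tat - burst)            # (turnaround, waiting)
    return done

# Data from Example 2: PID, arrival time, burst time.
res = sjf([(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)])
awt = sum(w for _, w in res.values()) / len(res)
att = sum(t for t, _ in res.values()) / len(res)
```

This yields an average waiting time of 5.4 ms and an average turnaround time of 11.4 ms for the given processes.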
Example 3
SRTN (Shortest Remaining Time Next)
Process Arrival Burst
ID Time Time
P1 0 8

P2 1 4

P3 2 2

P4 3 1

P5 4 3

P6 5 2
Process ID   Turn Around Time   Waiting Time   Response Time
P1           20                 12             0
P2           9                  5              1
P3           2                  0              2
P4           2                  1              4
P5           9                  6              10
P6           2                  0              5

Gantt Chart:
| P1 | P2 | P3 | P4 | P6 | P2 | P5 | P1 |
0    1    2    4    5    7    10   13   20
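The SRTN example can be simulated step by step: at every millisecond the ready process with the smallest remaining time runs, so a newly arrived shorter job preempts the current one. This is a sketch; ties are broken in favor of the earlier-arriving process, which matches the table above.

```python
def srtn(procs):
    """Preemptive SJF (shortest remaining time next), 1 ms time steps.
    procs: list of (pid, arrival, burst). Returns {pid: (tat, waiting)}."""
    remaining = {pid: bt for pid, _, bt in procs}
    burst = dict(remaining)
    arrival = {pid: at for pid, at, _ in procs}
    clock, done = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1                 # CPU idles until something arrives
            continue
        pid = min(ready, key=lambda p: remaining[p])   # shortest remaining
        remaining[pid] -= 1            # run the chosen process for 1 ms
        clock += 1
        if remaining[pid] == 0:        # process finished
            del remaining[pid]
            tat = clock - arrival[pid]
            done[pid] = (tat, tat - burst[pid])
    return done

res = srtn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2),
            ("P4", 3, 1), ("P5", 4, 3), ("P6", 5, 2)])
```

The results reproduce the table: P1 is preempted at t=1 and only finishes at t=20, while the short jobs P3, P4, and P6 all complete with little or no waiting.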
GATE 2011
Q. Consider the following table of arrival time and burst time for three processes P0, P1
and P2.

Process Arrival time Burst Time


P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The preemptive shortest job first scheduling algorithm is used. Scheduling is
carried out only at arrival or completion of processes. What is the average waiting
time for the three processes?
A. 5.0 ms
B. 4.33 ms
C. 6.33 ms
D. 7.33 ms
Solution

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P0           0              9            13                13                 4
P1           1              4            5                 4                  0
P2           2              9            22                20                 11

Average Waiting Time (AWT) = (4 + 0 + 11) / 3
                           = 15 / 3
                           = 5 ms
Priority Scheduling
● Priority scheduling in OS is the scheduling algorithm that schedules processes according
to the priority assigned to each of the processes.
● Higher priority processes are executed before lower priority processes.
Example

Process   Priority   Burst Time   Arrival Time
P1        1          4            0
P2        2          3            0
P3        1          7            6
P4        3          4            11
P5        2          2            12
Round Robin
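In Round Robin each process in the ready queue receives the CPU for at most one time quantum; if it is not finished, it is preempted and moved to the back of the queue. A minimal sketch, assuming all processes arrive at t=0 and reusing the burst times from the FCFS example with an assumed quantum of 4 ms:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: burst time}, all arriving at t=0.
    Returns {pid: completion time}."""
    queue = deque(bursts.keys())        # FIFO ready queue
    remaining = dict(bursts)
    clock, completion = 0, {}
    while queue:
        pid = queue.popleft()
        slice_ = min(quantum, remaining[pid])   # run for one quantum at most
        clock += slice_
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            completion[pid] = clock             # process finished
        else:
            queue.append(pid)                   # preempted: back of the queue
    return completion

done = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
```

With quantum 4, P1 runs first for 4 ms, then the short jobs P2 and P3 finish at 7 and 10 ms, after which P1 gets the CPU back and completes at 30 ms; short jobs no longer wait behind the long one as they did under FCFS.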
Thread
● A thread is a segment of a process, which means a process can
have multiple threads, and these multiple threads are contained
within the process.
● A thread is a subset of a process and is also known as a
lightweight process.
● A process can have more than one thread, and these threads
are managed independently by the scheduler.
● All the threads within one process are interrelated with each
other.
● Threads share some common information, such as the data
segment, code segment, and open files, with their peer
threads, but each thread has its own registers, stack, and
program counter.
● A thread has three states: Running, Ready, and Blocked.
Process vs Thread

1. A process is a program in execution. A thread is a segment of a process.
2. A process takes more time to terminate. A thread takes less time to terminate.
3. A process takes more time to create. A thread takes less time to create.
4. A process takes more time for context switching. A thread takes less time for context switching.
5. A process is less efficient in terms of communication. A thread is more efficient in terms of communication.
6. Multiple processes require multiprogramming. Multiple threads do not, because a single process consists of multiple threads.
7. Processes are isolated. Threads share memory.
8. A process is called a heavyweight process. A thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface to the operating system. Thread switching does not require a call into the operating system or an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes. If one user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space. A thread has the parent's PCB, its own Thread Control Block and stack, and a common address space.
12. Changes to the parent process do not affect child processes. Since all threads of the same process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of the process.
13. A system call is involved in process creation. No system call is involved; a thread is created using APIs.
14. Processes do not share data with each other. Threads share data with each other.
Two types of threads

1. Kernel-level threads.
2. User-level threads.

User-level thread

● The operating system does not recognize user-level threads.
● User threads can be easily implemented, and they are implemented by the
user in a user-space library.
● If a user-level thread performs a blocking operation, the whole process
is blocked.
● The kernel knows nothing about user-level threads.
● The kernel manages user-level threads as if they were single-
threaded processes.
● Examples: Java threads, POSIX threads (when implemented in user space), etc.
Kernel Thread
● Kernel threads are recognized by the operating system.
● There is a thread control block and a process control block in the system for each thread
and process when kernel-level threads are used.
● Kernel-level threads are implemented by the operating system.
● The kernel knows about all the threads and manages them.
● The kernel offers system calls to create and manage threads from user
space.
● The implementation of kernel threads is more difficult than that of user threads.
● Context switch time is longer for kernel threads.
● If one kernel thread performs a blocking operation, another thread of the same
process can continue execution.
● Examples: Windows, Solaris.
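The points above can be made concrete with a short sketch using Python's `threading` module: several threads of one process share the data segment (the global `counter`), so concurrent updates must be guarded with a lock.

```python
import threading

counter = 0                    # shared data segment: visible to all threads
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:             # threads share memory, so updates are guarded
            counter += 1

# Four threads within one process, managed independently by the scheduler.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait for all peer threads to finish
```

After all four threads complete, `counter` is 4000; without the lock, lost updates could occur because the threads write to the same shared variable.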
Multithreading
Thread Models
• Many-to-One
• One-to-One
• Many-to-Many

How do user and kernel threads map onto each other?
Many-to-One

Many user-level threads are mapped to a single kernel
thread.

• Used on systems that do not support kernel
threads.
• Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One

Each user-level thread maps to a kernel thread.

• Examples:
- Windows 95/98/NT/2000
- Linux
Threading Issues

Semantics of the fork() and exec() system calls
• Does fork() duplicate only the calling thread or all threads?

Thread cancellation
• Terminating a thread before it has finished
• Two general approaches:
• Asynchronous cancellation terminates the target thread
immediately
• Deferred cancellation allows the target thread to
periodically check whether it should be cancelled
Threading Issues

Signal handling
• Signals are used in UNIX systems to notify a process that a particular event has occurred
• A signal handler is used to process signals
1. A signal is generated by a particular event
2. The signal is delivered to a process
3. The signal is handled
• Options:
• Deliver the signal to the thread to which the signal applies
• Deliver the signal to every thread in the process
• Deliver the signal to certain threads in the process
• Assign a specific thread to receive all signals for the process

Thread pools
• Create a number of threads in a pool where they await work
• Advantages:
• Usually slightly faster to service a request with an existing thread than to create a new thread
• Allows the number of threads in the application(s) to be bound to the size of the pool
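The thread-pool idea above can be sketched with Python's standard `ThreadPoolExecutor`: a fixed number of worker threads is created once and then reused for each submitted task. The `square` task is an illustrative assumption.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """A trivial task handed to a pool worker."""
    return x * x

# A fixed-size pool of 3 worker threads; each task reuses an existing
# thread instead of creating a new one, and the pool bounds concurrency.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(square, range(5)))
```

The pool services five tasks with only three threads, illustrating both advantages listed above: no per-request thread creation cost, and a hard bound on the number of threads.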
Threading Issues

Thread-specific data
• Allows each thread to have its own copy of data
• Useful when you do not have control over the thread creation process
(i.e., when using a thread pool)

Scheduler activations
• Many-to-many models require communication to maintain the appropriate
number of kernel threads allocated to the application
• Scheduler activations provide upcalls, a communication mechanism
from the kernel to the thread library
• This communication allows an application to maintain the correct
number of kernel threads
Various Implementations
PThreads
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• The API specifies the behavior of the thread library; the implementation is up to the
developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
Windows Threads
• Implements the one-to-one mapping
• Each thread contains
• A thread id
• A register set
• Separate user and kernel stacks
• A private data storage area
• The register set, stacks, and private storage area are known as the context of
the thread
Various Implementations
Linux Threads
• Linux refers to them as tasks rather than threads
• Thread creation is done through the clone() system call
• clone() allows a child task to share the address space of the parent task (process)

Java Threads
• Java threads may be created by:
• Extending the Thread class
• Implementing the Runnable interface
• Java threads are managed by the JVM.