Notes On OS (BCA CACS 251)
Course Objectives
The general objective of this subject is to introduce the basic features and functions of an operating system
and how it interfaces with the hardware and application software to run the computer smoothly.
Extended Machine,
The concept of an operating system as an extended machine refers to the idea that the operating system
provides an abstraction layer between the underlying hardware and user applications, making the system
easier to use and program. It presents a higher-level interface to applications, hiding the complexities of the
underlying hardware and providing a consistent and standardized environment.
Here are some key aspects of the operating system as an extended machine:
Abstraction: The operating system abstracts the hardware resources, such as the CPU, memory, storage,
and devices, into a unified and standardized interface. It provides a set of well-defined APIs (Application
Programming Interfaces) that applications can use to interact with the hardware without needing to
know the low-level details of the specific hardware implementation.
Resource Management: The operating system manages the allocation and utilization of system
resources on behalf of applications. It provides mechanisms for scheduling CPU time, managing memory
allocation, coordinating access to devices, and handling input/output operations. This abstraction allows
applications to focus on their specific tasks without worrying about low-level resource management.
Process and Thread Management: The operating system provides process and thread management
facilities, allowing applications to execute concurrently and efficiently utilize the CPU. It handles tasks
such as process creation, scheduling, context switching, and inter-process communication. This
abstraction enables applications to run in a multitasking environment, giving the illusion of parallel
execution.
File System: The operating system presents a file system abstraction that allows applications to create,
read, write, and organize files and directories. It provides a hierarchical structure and file access
mechanisms, shielding applications from the complexities of physical storage devices and disk
management.
Device Drivers: The operating system provides device drivers that act as intermediaries between
applications and hardware devices. These drivers abstract the device-specific details and provide a
standardized interface for applications to interact with different types of devices, such as printers, disks,
network interfaces, and graphics cards.
User Interface: The operating system presents a user interface (UI) that allows users to interact with the
system and applications. This can be a command-line interface (CLI) or a graphical user interface (GUI).
The UI provides a consistent and intuitive way for users to perform tasks, launch applications, and
manage system settings.
By acting as an extended machine, the operating system simplifies the programming process, enhances
application portability across different hardware platforms, and provides a user-friendly environment. It
shields applications from the underlying hardware complexities, allowing developers and users to focus on
higher-level tasks and functionality.
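To make the idea concrete, consider a minimal C sketch (the file name notes.txt is just an example). The
program asks the operating system, via the standard library, to create and write a file; it never touches disk
controllers, sectors, or device registers, because the OS's file abstraction hides all of that.

/* extended_machine.c -- a minimal sketch of the "extended machine" idea.
 * The program asks the OS (via the C standard library) to create and
 * write a file without knowing anything about the underlying hardware.
 */
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("notes.txt", "w");   /* the OS decides where the file lives */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(fp, "Hello from the extended machine!\n");
    fclose(fp);                           /* the OS flushes buffers, updates the file system */
    return 0;
}

The same source compiles and runs unchanged on any system with a C library, which is exactly the
portability the extended-machine view is meant to provide.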
Components of Operating System,
Structurally, an operating system is made up of several cooperating components:
Kernel: The kernel is the core component of the operating system. It resides in memory and provides
essential services and functionality to interact with the underlying hardware. The kernel handles tasks
such as process management, memory management, device management, and system resource
allocation. It acts as an intermediary between user applications and hardware, facilitating
communication and coordination.
Device Drivers: Device drivers are software components that enable communication between the
operating system and hardware devices. They provide a standardized interface for the operating system
to interact with different types of devices, such as keyboards, mice, printers, disks, and network
interfaces. Device drivers handle device-specific details and manage device access, data transfer, and
device configuration.
File System: The file system component manages the organization, storage, and retrieval of files on
storage devices. It provides a hierarchical structure, directory management, and file access controls. The
file system handles file operations such as creation, deletion, reading, and writing. It also implements
caching mechanisms and disk scheduling algorithms to optimize file access and storage efficiency.
Process Management: The process management component is responsible for creating, scheduling, and
managing processes (running programs). It handles process creation, termination, and context switching.
It also manages process synchronization and communication mechanisms, allowing processes to
communicate and coordinate with each other. Process management ensures fair CPU time allocation
and efficient execution of multiple processes.
Memory Management: Memory management involves managing system memory resources, including
allocation, deallocation, and protection. It handles memory allocation to processes, manages virtual
memory systems, and implements memory protection mechanisms to prevent unauthorized access.
Memory management also involves techniques such as paging, swapping, and memory fragmentation
handling.
File I/O and Input/Output Management: The file I/O and input/output management component
manages input and output operations, including interactions with devices, files, and network resources.
It handles I/O requests, buffers data for efficient transfer, and coordinates I/O operations to ensure
timely and reliable data transfer between applications and devices.
User Interface: The user interface component provides the means for users to interact with the
operating system. It can be a command-line interface (CLI), where users type commands, or a graphical
user interface (GUI) with windows, icons, menus, and pointers. The user interface component handles
user input, displays system status and information, and facilitates the execution of user commands and
application launching.
These are some of the common structural components of an operating system. The specific design and
organization may vary between different operating systems, and some systems may have additional
components or variations based on their goals and requirements.
Layered System,
A layered system is a common architectural design used in operating systems, where the functionality of the
operating system is organized into distinct layers. Each layer provides a set of services to the layer above it
and utilizes services from the layer below it. The layered system helps in modularizing the operating system
and simplifying its design and development. Here is a typical layered structure of an operating system:
Hardware Layer: The lowest layer directly interfaces with the hardware components of the computer
system, including the CPU, memory, disks, network interfaces, and other devices. It provides basic
services for accessing and controlling hardware resources.
Kernel Layer: The kernel layer forms the core of the operating system and provides essential services to
the upper layers. It manages system resources, such as process management, memory management,
device drivers, and file systems. The kernel interacts with the hardware layer to control and allocate
resources efficiently.
System Call Layer: The system call layer provides an interface between user-level applications and the
kernel. It allows applications to request services from the kernel through system calls, which are
predefined functions that provide access to the operating system's functionalities. System calls handle
tasks such as process creation, file operations, network communication, and inter-process
communication (a short sketch contrasting system calls with library calls appears just after this list).
Library Layer: The library layer consists of libraries that provide higher-level functions and routines to
applications. These libraries abstract the complexity of system calls and provide a convenient interface
for application development. They may include standard libraries for input/output operations,
networking, graphics, and other common tasks.
Application Layer: The topmost layer consists of user-level applications that utilize the services provided
by lower layers. These applications can range from simple command-line tools to complex software like
word processors, web browsers, and media players. They interact with the system through system calls
or libraries and make use of the underlying operating system services.
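As referenced above, here is a hedged POSIX sketch (assuming a Unix-like system) showing where the layer
boundaries fall: printf() belongs to the library layer, while write() is a thin wrapper around the kernel's
system call.

/* layers.c -- library layer vs. system call layer (POSIX sketch).
 * printf() is a C library routine: it formats and buffers output, and
 * eventually invokes the write() system call itself. Calling write()
 * directly skips the library layer and enters the kernel through the
 * system call interface.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* write(), STDOUT_FILENO */

int main(void) {
    /* Library layer: buffered, formatted output. */
    printf("via the library layer\n");
    fflush(stdout);   /* push the buffered data down to the kernel */

    /* System call layer: raw, unbuffered kernel request. */
    const char msg[] = "via the system call layer\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}

The library routine adds buffering and formatting on top of the raw system call; that convenience is
precisely what the library layer exists to provide.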
Each layer in the layered system encapsulates a specific set of functionalities, and the interactions between
layers follow a hierarchical structure. The layering helps in modularization, making it easier to develop,
maintain, and enhance the operating system. It also allows for portability, as different implementations of
the same layer can be used on different hardware platforms.
It's worth noting that the specific layers and their functionalities can vary between different operating
systems. Some operating systems may have additional layers or different organization based on their design
principles and goals.
Kernel,
The kernel is a fundamental component of an operating system. It is the core part that directly interacts with
the underlying hardware and provides essential services to other parts of the operating system and user
applications. The kernel acts as a bridge between software and hardware, enabling the efficient and
controlled use of system resources.
Here are some key characteristics and responsibilities of a kernel:
Hardware Interaction: The kernel directly interacts with hardware components such as the CPU,
memory, disks, network interfaces, and input/output devices. It manages and controls these resources,
allocating them to different processes and ensuring their proper utilization.
Resource Management: The kernel is responsible for managing system resources, including memory,
CPU time, and devices. It performs tasks such as process management, memory management, and
device management to ensure fair and efficient allocation of resources among running processes.
Process Management: The kernel handles process creation, termination, and scheduling. It manages the
execution of multiple processes or threads, allocating CPU time to them and handling context switches
to allow for multitasking and concurrency.
Memory Management: The kernel manages system memory, allocating memory to processes, handling
memory allocation and deallocation, and implementing memory protection mechanisms to prevent
unauthorized access. It also handles techniques such as virtual memory, which allows processes to use
more memory than physically available.
Device Drivers: The kernel includes device drivers that act as intermediaries between the operating
system and hardware devices. Device drivers facilitate communication between the kernel and devices,
enabling the operating system to control and utilize the various input/output devices connected to the
system.
System Calls: The kernel provides an interface for user applications to request services from the
operating system. These requests are made through system calls, which are predefined functions or
methods that allow applications to access kernel services. System calls handle operations such as file I/O,
network communication, process creation, and resource allocation.
Kernel Security: The kernel enforces security measures to protect the system and user data. It manages
user authentication, access control, and implements security mechanisms to ensure that only authorized
users can access system resources.
Error Handling and Exception Handling: The kernel handles errors and exceptions that occur during
system operation. It detects and responds to hardware faults, software errors, and exceptions raised by
user applications, ensuring system stability and reliability.
The kernel is typically implemented as a low-level software layer that runs in privileged mode, with direct
access to hardware resources. Its efficient and reliable functioning is crucial for the overall performance and
stability of the operating system. Different operating systems may have different types of kernels, such as
monolithic kernels, microkernels, or hybrid kernels, depending on their design principles and goals.
Types of Kernel;
There are several types of kernels used in operating systems, each with its own design principles and
characteristics. The main types of kernels include:
Monolithic Kernel: the entire operating system, including drivers and file systems, runs in kernel space.
Microkernel: only the most essential services remain in the kernel; other services run as user-space servers.
Hybrid Kernel: combines elements of the monolithic and microkernel designs.
Exokernel: a minimal kernel that securely multiplexes raw hardware resources among applications.
Nanokernel: an extremely minimal kernel that provides only the bare essentials for system operation.
It's important to note that the choice of kernel type depends on the specific requirements, goals, and design
principles of the operating system. Different operating systems may employ different kernel types to achieve
the desired balance between performance, modularity, and security. Additionally, variations and
combinations of these kernel types exist, making the classification less rigid and allowing for customized
designs.
Monolithic Kernel;
A monolithic kernel is a type of operating system kernel where the entire operating system, including device
drivers, file systems, and system services, runs in kernel space. In a monolithic kernel, all the kernel
components are tightly integrated and share the same address space, meaning they run in a single large
process or kernel thread.
In a monolithic kernel, the kernel directly interacts with hardware and provides services to user applications.
It handles essential tasks such as process management, memory management, device drivers, file systems,
and inter-process communication. All these functionalities are tightly coupled within the kernel, allowing for
efficient and direct communication between components.
Examples of operating systems that use monolithic kernels include Linux (the Linux kernel), traditional
Unix systems, and the MS-DOS-based Windows 9x line (Windows NT, by contrast, uses a hybrid kernel,
as discussed later). However, it's worth noting that even in monolithic kernels, certain functionalities
may be implemented as loadable kernel modules, which can be dynamically loaded and unloaded
without rebooting the system, allowing for some level of modularity; a minimal module skeleton follows.
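As an illustration of that modularity, a minimal Linux loadable-module skeleton looks roughly like the
following. The name hello_mod and its messages are illustrative; building it requires the kernel headers
and a small Makefile, which are omitted here.

/* hello_mod.c -- minimal Linux loadable kernel module skeleton.
 * Loaded with insmod and removed with rmmod; the kernel calls
 * hello_init() on load and hello_exit() on unload, so functionality
 * is added to a running monolithic kernel without rebooting.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void) {
    printk(KERN_INFO "hello_mod: loaded into the running kernel\n");
    return 0;   /* 0 = success; nonzero would abort the load */
}

static void __exit hello_exit(void) {
    printk(KERN_INFO "hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");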
Overall, monolithic kernels provide a simple and efficient design but may sacrifice modularity and flexibility.
The design choice between monolithic kernels and other kernel types depends on the specific goals and
requirements of the operating system.
Microkernel;
A microkernel is a type of operating system kernel design that aims to provide only the most essential
functions and services in the kernel, while moving higher-level services, such as device drivers and file
systems, out of the kernel and into user space. The microkernel concept is based on the principle of
minimizing the kernel's size and complexity, promoting modularity, and isolating components for improved
reliability and security.
In a microkernel design, the kernel provides a minimal set of core services, typically including process
management, memory management, and inter-process communication (IPC). It focuses on ensuring that
these core services are efficient, reliable, and protected. Other non-essential services, such as file systems,
device drivers, and networking protocols, are implemented as separate user-level processes or servers that
run in user space.
Here are some key characteristics of a microkernel:
Minimalism: The microkernel design philosophy advocates for keeping the kernel small and minimal,
providing only the essential functions necessary for system operation. This approach simplifies the
kernel's design and reduces its complexity.
Modularity: By moving non-essential services out of the kernel, a microkernel promotes modularity.
Each service or driver is implemented as a separate user-level process, allowing for easier development,
maintenance, and customization. Adding or updating a service can be done without modifying or
restarting the kernel.
Message Passing: Inter-process communication (IPC) becomes a critical mechanism in a microkernel-
based system. Processes communicate with each other through message passing, using well-defined
protocols. IPC allows for the exchange of data and requests between processes running in user space (a
user-space sketch of this pattern appears after this list).
Fault Isolation: The separation of services and drivers into user space processes provides better fault
isolation. If a user-level process or server fails, it does not affect the entire system or crash the kernel.
This isolation enhances system reliability and availability.
Security: The microkernel design enhances security by minimizing the trusted computing base. Since
non-essential services are running in user space, any vulnerability or exploit in those components is
contained within that process and does not directly compromise the kernel or other critical system
components.
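As promised above, the message-passing pattern can be sketched in ordinary user space with a POSIX pipe.
This is only an analogy: real microkernels such as L4 provide dedicated kernel IPC primitives, but the shape
of the interaction, explicit requests and replies instead of shared kernel state, is the same.

/* ipc_sketch.c -- user-space analogy for microkernel message passing.
 * A parent ("client") sends a request to a child ("server") through a
 * POSIX pipe rather than through shared kernel structures.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: the "server" */
        close(fd[1]);
        char req[64];
        ssize_t n = read(fd[0], req, sizeof req - 1);
        if (n > 0) {
            req[n] = '\0';
            printf("server received request: %s\n", req);
        }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                         /* parent: the "client" */
    const char msg[] = "open /etc/hosts";
    write(fd[1], msg, strlen(msg));       /* send the message */
    close(fd[1]);
    wait(NULL);                           /* reap the child */
    return 0;
}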
Examples of operating systems that adopt a microkernel design include QNX, MINIX 3, and L4. The
microkernel concept has also influenced other kernel designs, such as the hybrid kernel, which combines
some characteristics of both monolithic and microkernel designs.
While microkernels offer advantages such as modularity and improved security, they can introduce
performance overhead due to the need for inter-process communication and potential context switches
between user-level processes. The design choice between microkernels and other kernel types depends on
the specific goals and trade-offs desired for the operating system.
Exokernel;
An exokernel is a type of operating system kernel design that takes the concept of minimalism to the
extreme. It aims to provide a minimalistic kernel that exposes low-level hardware abstractions directly to
user-level applications. The exokernel design philosophy emphasizes flexibility and efficiency by removing
most of the traditional abstractions and policies typically found in other kernel designs.
In an exokernel-based system, the kernel's primary role is to securely multiplex and allocate hardware
resources among different applications. It does not provide higher-level abstractions like file systems,
networking protocols, or memory management. Instead, it exposes low-level primitives and hardware
abstractions to user-level applications, allowing them to directly manage and control these resources.
The exokernel concept challenges traditional operating system designs by providing a minimalistic
foundation for application-level resource management. It trades off higher-level abstractions and policies for
increased flexibility and performance. Exokernels have been explored primarily in research and experimental
operating systems, and their practical use in production systems is limited.
Examples of exokernel-based operating systems include MIT's research exokernels, Aegis and XOK. While
exokernels offer unique advantages, their design and implementation require significant expertise and
careful consideration of system-level requirements and trade-offs.
Hybrid Kernel;
A hybrid kernel is a type of operating system kernel design that combines elements of both monolithic and
microkernel architectures. It aims to strike a balance between the performance and efficiency of a
monolithic kernel and the modularity and fault isolation of a microkernel. Hybrid kernels retain some kernel
functions in kernel space while moving non-essential services and drivers to user space.
In a hybrid kernel design, the core operating system services, such as process management, memory
management, and basic device drivers, are implemented in kernel space, similar to a monolithic kernel. This
allows for direct and efficient communication between these core components. However, certain services,
such as file systems, networking protocols, and advanced device drivers, are moved out of the kernel and
into user space as separate modules or servers.
Examples of operating systems that use a hybrid kernel design include Microsoft Windows NT and its
derivatives, such as Windows 2000, Windows XP, and later versions. These operating systems retain core
services and drivers in kernel space but implement some services, such as file systems and network
protocols, as user-level modules.
Hybrid kernels offer a balance between performance and modularity, making them a popular choice for
many commercial operating systems. However, they still require careful design and development to manage
the interactions between kernel and user space components effectively.
Nanokernel;
A nanokernel, also known as a minimal kernel, is an extremely minimalistic operating system
kernel design that provides only the most essential functions required for system operation. It aims to
reduce the kernel's size, complexity, and resource usage to the bare minimum while still providing the
necessary functionality for the system to function.
In a nanokernel design, the kernel focuses on fundamental services such as thread management, basic inter-
process communication, and hardware abstraction. It delegates most system services, such as memory
management, file systems, and device drivers, to user-level libraries or servers that run in user space.
It's important to note that the term "nanokernel" is not as widely used or recognized as other kernel types
like monolithic, microkernel, or hybrid. The concept of a nanokernel emphasizes extreme minimalism and
flexibility, and it is often employed in specialized or resource-constrained systems where small size and
efficiency are paramount.
Client-Server Model,
The client-server model is a computing architecture in which tasks and responsibilities are divided between
two types of entities: clients and servers. It is a common architectural pattern used in operating systems and
networked environments to facilitate communication and resource sharing between multiple entities.
In the client-server model, the client is typically an application or a user's device that requests services or
resources from a server. The server, on the other hand, is a dedicated entity that provides services,
processes requests, and manages resources on behalf of the clients. Clients and servers communicate with
each other over a network using standardized protocols.
The client-server model is widely used in various domains, including web applications, database systems,
cloud computing, and network infrastructure. It provides a modular and scalable approach to system design,
allowing for efficient resource sharing and centralized management of services.
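To make the model concrete, here is a hedged POSIX sockets sketch of the client side. The address
127.0.0.1 and port 7 (the classic echo service) are arbitrary examples; any listening server would complete
the exchange.

/* client_sketch.c -- minimal TCP client illustrating the client-server
 * model: the client opens a connection, sends a request, and reads
 * the server's reply over a standardized protocol (TCP/IP).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);      /* client endpoint */
    if (sock == -1) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(7);                    /* example: echo service */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    /* The client requests a connection; the server accepts and serves it. */
    if (connect(sock, (struct sockaddr *)&server, sizeof server) == -1) {
        perror("connect");
        return 1;
    }
    const char request[] = "hello, server\n";
    write(sock, request, strlen(request));           /* send request  */

    char reply[128];
    ssize_t n = read(sock, reply, sizeof reply - 1); /* await reply   */
    if (n > 0) { reply[n] = '\0'; printf("server replied: %s", reply); }

    close(sock);
    return 0;
}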
Virtual Machines,
A virtual machine (VM) in the context of operating systems is a software emulation of a physical computer
system. It enables the execution of multiple operating systems or instances of the same operating system on
a single physical machine, providing isolation and flexibility for running different software environments
concurrently.
Here are some key points about virtual machines:
Virtualization: Virtual machines are created and managed by a virtualization layer, often called a
hypervisor or a virtual machine monitor (VMM). The hypervisor abstracts the underlying physical
hardware and allows multiple virtual machines to run independently on the same physical machine.
Guest Operating Systems: Each virtual machine is configured with its own guest operating system, which
behaves as if it is running on a dedicated physical machine. The guest operating system, such as
Windows, Linux, or macOS, interacts with the virtual hardware provided by the hypervisor.
Resource Allocation: The hypervisor manages the allocation of physical resources, such as CPU, memory,
storage, and network connectivity, among the virtual machines. It ensures that each virtual machine
receives a fair share of resources and prevents interference or conflicts between them.
Isolation: Virtual machines provide a high degree of isolation between different instances. Each virtual
machine runs in its own isolated environment, with its own memory space, file system, and network
interfaces. This isolation enhances security and stability, as issues in one virtual machine do not directly
affect others.
Snapshot and Migration: Virtual machines often support snapshotting and migration capabilities.
Snapshots allow capturing the state of a virtual machine at a specific point in time, enabling easy rollback
or cloning of virtual machine instances. Migration allows moving a running virtual machine from one
physical host to another without disruption, providing flexibility and load balancing.
Virtual Hardware: Virtual machines present virtualized hardware to the guest operating systems,
including virtual processors (CPU cores), virtual memory, virtual disks, and virtual network interfaces.
These virtual hardware components are managed and controlled by the hypervisor.
Virtual machines have numerous applications, including:
Server Virtualization: Consolidating multiple servers onto a single physical machine, reducing hardware
costs and improving resource utilization.
Software Development and Testing: Providing developers with isolated environments for testing
software on different operating systems or configurations.
Legacy System Support: Running older or incompatible software on virtual machines to maintain
compatibility.
Cloud Computing: Virtual machines form the foundation of Infrastructure-as-a-Service (IaaS) cloud
offerings, allowing users to provision and manage virtual machines remotely.
Popular virtualization platforms include VMware vSphere, Microsoft Hyper-V, Xen, KVM (Kernel-based
Virtual Machine), and VirtualBox.
Shell;
In the context of operating systems, a shell is a command-line interface (CLI) or a graphical user interface
(GUI) that allows users to interact with the operating system and execute commands. It acts as a user
interface layer that interprets and processes user commands to perform various tasks and manage the
system resources.
Examples of commonly used shells in Unix-like operating systems include Bash, C shell (csh), Korn shell (ksh),
and Zsh. In Windows, the default shell is the Command Prompt (cmd.exe) or PowerShell, depending on the
version.
Overall, shells provide an interface for users to interact with the operating system, execute commands,
automate tasks, and manage system resources, making them an essential component of the user experience
in command-line or graphical environments.
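The heart of a command-line shell is a short read-fork-exec-wait loop. The following POSIX C sketch handles
single-word commands only; real shells add argument parsing, pipes, redirection, and job control on top.

/* minishell.c -- a hedged sketch of what a CLI shell does: read a
 * command, fork a child, exec the program, and wait for it to finish.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("mini$ ");                     /* prompt */
        if (fgets(line, sizeof line, stdin) == NULL) break;
        line[strcspn(line, "\n")] = '\0';     /* strip trailing newline */
        if (strcmp(line, "exit") == 0) break;
        if (line[0] == '\0') continue;

        pid_t pid = fork();                   /* create a child process */
        if (pid == 0) {
            /* Child: replace itself with the requested program
             * (single-word commands only, no argument parsing). */
            execlp(line, line, (char *)NULL);
            perror("execlp");                 /* reached only on failure */
            exit(1);
        }
        waitpid(pid, NULL, 0);                /* shell waits for the command */
    }
    return 0;
}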
Unit 3) Process Management
Process Concepts:
Definitions of Process,
In the context of an operating system (OS), a process refers to an instance of a computer program that is
being executed. It is an active entity that represents the execution of a particular program and encompasses
the program code, its associated data, and the execution context.
A process is a fundamental concept in OS design and management, and it plays a crucial role in providing
multitasking and resource allocation capabilities. Each process is isolated from other processes, ensuring
that they cannot interfere with each other's memory or resources.
Processes can interact with each other through inter-process communication mechanisms provided by the
OS, such as shared memory, message passing, or synchronization primitives. They can also create child
processes or terminate themselves, allowing for the creation of complex applications and multitasking
environments.
Overall, processes are the fundamental building blocks of an operating system, enabling the execution of
multiple programs simultaneously and providing an organized framework for resource management and
coordination.
The choice of process model depends on the specific goals and requirements of the operating system and
the applications it supports. Each model has its advantages and trade-offs in terms of performance, resource
utilization, and complexity. Operating systems often employ a combination of process models to provide a
flexible and efficient environment for executing processes.
Process States,
In an operating system, the concept of process states refers to the various states that a process can be in
during its lifecycle. These states represent different stages of a process from its creation until its termination.
The process states typically include:
New: This is the initial state of a process. In this state, the process is being created, and the necessary
resources are being allocated to it by the operating system. After the process initialization is complete, it
transitions to the ready state.
Ready: In this state, the process is prepared to be executed but is waiting to be assigned to a processor.
The process is in main memory, and all the necessary resources are allocated to it. It is waiting for the
CPU scheduler to select it for execution.
Running: When a process is in the running state, it is being executed by the CPU. The processor is
actively executing the instructions of the process, and the process is utilizing system resources to
perform its tasks.
Blocked (or Waiting): In this state, a process is unable to proceed further because it is waiting for an
event or a resource that is currently unavailable. For example, a process might be waiting for user input,
waiting for a file to be read from disk, or waiting for a network response. Once the event or resource
becomes available, the process transitions to the ready state and can resume execution.
Terminated: This is the final state of a process. It indicates that the process has completed its
execution or has been explicitly terminated by the operating system or by the process itself. In this
state, the process releases all the resources it has acquired during its execution, and its process
control block (PCB) is removed from the system.
Note that depending on the operating system and its process management mechanisms, there can be
additional process states or variations of the states mentioned above. For example, some systems may have
an "Suspended" state where a process is temporarily halted or moved to secondary storage to free up
system resources.
The transitions between process states are typically managed by the operating system's process scheduler,
which determines which processes are eligible to run and makes decisions on context switching and
resource allocation. The process states and their transitions are crucial for efficient process management and
scheduling in an operating system.
Process State Transition,
Process state transition refers to the movement of a process from one state to another in an operating
system. It represents the changes in the lifecycle of a process as it progresses through different stages. These
state transitions are typically triggered by specific events, actions, or conditions within the system. The
process state transitions commonly include:
New to Ready: This transition occurs when a process is created and initialized, and it is ready to be
scheduled for execution. The transition is typically initiated by the operating system when the necessary
resources are allocated to the process.
Ready to Running: This transition happens when the operating system selects a process from the ready
queue and assigns it to the CPU for execution. The process starts executing its instructions and enters
the running state.
Running to Blocked (or Waiting): This transition occurs when a running process encounters an event or
condition that prevents it from proceeding further, such as waiting for user input, I/O operation, or a
resource that is currently unavailable. The process is moved to the blocked state, and its execution is
halted until the event or condition is resolved.
Running to Ready: This transition happens when a running process is interrupted by the operating
system scheduler due to the expiration of its time slice or a higher-priority process becoming ready. The
running process is preempted, and it is moved back to the ready state, allowing other processes to be
scheduled for execution.
Blocked to Ready: This transition occurs when a blocked process's requested event or resource becomes
available. The process is unblocked, and it is moved back to the ready state, becoming eligible for
execution.
Running to Terminated: This transition takes place when a running process completes its execution or is
explicitly terminated by the operating system or by the process itself. The process releases its allocated
resources, and its process control block (PCB) is removed from the system.
The process state transitions are managed by the operating system's process scheduler and various event-
driven mechanisms. The transitions are crucial for effective process management, scheduling, and resource
allocation in the operating system, ensuring that processes progress through different states efficiently and
effectively.
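The state machine described in this and the previous section can be written down directly. The following C
sketch encodes the five states and checks whether a given transition is one of the six legal ones listed
above.

/* states.c -- the process state machine from this section, encoded as
 * an enum plus a predicate over the six legal transitions.
 */
#include <stdio.h>
#include <stdbool.h>

enum pstate { NEW, READY, RUNNING, BLOCKED, TERMINATED };

static const char *name[] = { "new", "ready", "running", "blocked", "terminated" };

/* Returns true iff the OS allows a process to move from 'from' to 'to'. */
static bool legal(enum pstate from, enum pstate to) {
    switch (from) {
    case NEW:     return to == READY;          /* admitted to ready queue  */
    case READY:   return to == RUNNING;        /* dispatched by scheduler  */
    case RUNNING: return to == READY           /* preempted                */
                      || to == BLOCKED         /* waits on event/resource  */
                      || to == TERMINATED;     /* completes or is killed   */
    case BLOCKED: return to == READY;          /* awaited event occurs     */
    default:      return false;                /* terminated is final      */
    }
}

int main(void) {
    printf("%s -> %s: %s\n", name[RUNNING], name[BLOCKED],
           legal(RUNNING, BLOCKED) ? "ok" : "illegal");
    printf("%s -> %s: %s\n", name[BLOCKED], name[RUNNING],
           legal(BLOCKED, RUNNING) ? "ok" : "illegal");   /* must go via ready */
    return 0;
}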
Process Control Block (PCB),
Each active process is represented by a process control block (PCB), a data structure the operating system
maintains to track and manage that process. The PCB contains various attributes and data related to a
process, including:
Process Identifier (PID): A unique identifier assigned to each process by the operating system, allowing it
to differentiate and track individual processes.
Process State: The current state of the process, such as new, ready, running, blocked, or terminated. It
indicates the process's position in its lifecycle and helps the operating system manage the process's
execution and resource allocation.
Program Counter (PC): The address of the next instruction to be executed by the process. It allows the
operating system to keep track of the process's progress and resume execution from the correct point
during context switches.
CPU Registers: The contents of the CPU registers for the process, including general-purpose registers,
stack pointers, and program status registers. These values are crucial for saving and restoring the
process's execution context during context switches.
Process Priority: The priority assigned to the process, which determines its relative importance and
influences its scheduling and resource allocation compared to other processes.
Memory Management Information: Information about the memory allocated to the process, including
base and limit registers, page tables, and other memory-related attributes.
I/O Information: Details about the I/O devices or files being used by the process, including open file
descriptors, I/O request queues, and status flags.
Accounting Information: Statistics and metrics related to the process's resource usage, execution time,
CPU usage, and other performance-related data.
Parent-Child Relationship: Pointers or references to the process's parent and child processes, forming a
process hierarchy or tree structure.
The PCB is maintained by the operating system and is associated with each process currently active in the
system. When a context switch occurs, the PCB of the currently executing process is saved, and the PCB of
the next process to be scheduled is loaded, allowing for seamless transitions between processes.
By storing critical information about each process, the PCB enables the operating system to effectively
manage process scheduling, resource allocation, synchronization, and inter-process communication. It
serves as a vital data structure for process management in an operating system.
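A drastically simplified PCB can be declared as a C structure. The field names and sizes below are
illustrative only; a real kernel's PCB (for example, Linux's task_struct) is far larger, but it records the same
categories of information listed above.

/* pcb.c -- a simplified sketch of a process control block. */
#include <stdio.h>
#include <stdint.h>

enum pstate { NEW, READY, RUNNING, BLOCKED, TERMINATED };

struct pcb {
    int           pid;                 /* process identifier            */
    enum pstate   state;               /* position in the lifecycle     */
    uintptr_t     program_counter;     /* next instruction to execute   */
    uintptr_t     registers[16];       /* saved CPU register contents   */
    int           priority;            /* scheduling priority           */
    uintptr_t     mem_base, mem_limit; /* memory-management information */
    int           open_files[16];      /* I/O info: open descriptors    */
    unsigned long cpu_time_used;       /* accounting information        */
    struct pcb   *parent;              /* parent-child relationship     */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = READY, .priority = 5 };
    printf("pid %d is %s\n", p.pid, p.state == READY ? "ready" : "not ready");
    return 0;
}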
Operations on Processes:
Operations on Processes refer to the actions and functionalities provided by an operating system to create,
terminate, manage process hierarchies, and implement process-related mechanisms. Let's define each
operation:
Process Creation: The process creation operation involves the creation of a new process by the operating
system. The creation process typically includes the following steps:
Allocating necessary resources: The operating system allocates resources such as memory, CPU time, I/O
devices, and files to the new process.
Creating the Process Control Block (PCB): A PCB is created to store information about the new process,
including process ID, state, program counter, CPU registers, and other relevant attributes.
Setting up the execution context: The initial values of the program counter, CPU registers, and other
necessary parameters are set up to start the execution of the process.
Process Termination: The process termination operation involves the orderly termination of a process by the
operating system. Termination may occur for various reasons, such as completing the execution of the
process or due to an error or explicit request. The steps involved in process termination include:
Reclaiming resources: The operating system releases the resources allocated to the terminated process,
such as memory, files, and I/O devices.
Updating process status: The process's state in the PCB is changed to "terminated" or a similar status to
indicate its completion.
Notifying the parent process: If the terminated process has a parent process, a notification is sent to the
parent to handle the termination and any necessary cleanup operations.
Process Hierarchies: Process hierarchies represent the organization of processes in a hierarchical structure,
typically referred to as a process tree or process group. This operation allows processes to have parent-child
relationships, where a parent process can create child processes. Some common operations related to
process hierarchies include:
Creating child processes: A process can create one or more child processes, forming a parent-child
relationship. Child processes typically inherit certain attributes, such as resource allocations and access
rights, from their parent process.
Process communication: Processes within a hierarchy can communicate with each other through inter-
process communication mechanisms, such as shared memory, message passing, or synchronization
primitives.
Process synchronization: Processes in a hierarchy can synchronize their execution using synchronization
mechanisms like semaphores, locks, or condition variables to coordinate their activities.
Implementation of Processes: The implementation of processes involves the mechanisms and techniques
used by the operating system to manage and control processes effectively. Some key aspects of process
implementation include:
Process scheduling: The operating system employs scheduling algorithms to determine which processes
should be executed and in what order. This ensures fair utilization of CPU resources.
Context switching: The process of saving the execution context of one process and restoring the context
of another process during a context switch. This allows for efficient switching between processes and
maintaining their execution states.
Inter-process communication: Mechanisms provided by the operating system to allow processes to
exchange data, coordinate activities, and communicate with each other.
Resource allocation and management: The operating system manages the allocation of system
resources, such as memory, CPU time, I/O devices, and files, to ensure efficient utilization and prevent
conflicts among processes.
These operations collectively enable the operating system to create, manage, and control processes,
facilitating multitasking, resource sharing, and coordination in a computing system.
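On a POSIX system, process creation, termination, and parent notification can all be observed in a dozen
lines. The following is a hedged sketch with minimal error handling.

/* lifecycle.c -- process creation, termination, and parent notification:
 * fork() creates a child, the child exits with a status, and the parent
 * collects that status with waitpid().
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                      /* process creation */
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: inherits the parent's environment, does its work,
         * then terminates, releasing its resources to the OS. */
        printf("child %d running\n", (int)getpid());
        exit(7);                             /* process termination */
    }

    int status;
    waitpid(pid, &status, 0);                /* parent is notified */
    if (WIFEXITED(status))
        printf("parent %d: child exited with status %d\n",
               (int)getpid(), WEXITSTATUS(status));
    return 0;
}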
Cooperating Processes,
System Calls:
Process Management,
File Management,
Directory Management,
Threads:
Definitions of Threads;
Types of Thread Process (Single and Multithreaded Process);
Benefits of Multithread;
Multithreading Models;
One-to-One Model,
Many-to-One-Model,
Many-to-Many Model,
Process Scheduling:
Basic Concept,
Types of Scheduling (Preemptive, Non-preemptive, Batch, Interactive, Real-Time Scheduling),
Scheduling Criteria or Performance Analysis,
Scheduling Algorithms (Round-Robin, First Come First Served, Shortest-Job-First, Shortest Process Next,
Shortest Remaining Time Next, Real Time, Priority, Fair-Share, Guaranteed, Lottery Scheduling, HRN,
Multiple Queue, Multilevel Feedback Queue);
Some Numerical Examples on Scheduling.
Unit 4) Deadlocks
System Model,
System Resources: Pre-emptible and Non-Pre-emptible;
Conditions for Resource Deadlocks, Deadlock Modeling,
The Ostrich Algorithm,
Method of Handling Deadlocks,
Deadlock Prevention,
Deadlock Avoidance: Banker's Algorithm,
Deadlock Detection: Resource Allocation Graph,
Recovery from Deadlock.
Virtual Memory:
Background,
Paging, Structure of Page Table: Hierarchical Page Table,
Hashed Page Table,
Inverted Page Table, Shared Page Table,
Block Mapping Vs. Direct Mapping,
Demand Paging,
Page Replacement and Page Faults,
Page Replacement Algorithms: FIFO, Optimal Page Replacement (OPR), LRU, Second Chance (SCP);
Some Numerical Examples on Page Replacement,
Thrashing, Segmentation, Segmentation with Paging.
Text Books
Andrew S. Tanenbaum, "Modern Operating System 6/e", PHI, 2011/12
Silberschatz, P.B. Galvin, G. Gagne, "Operating System Concepts 8/e ", Wiley India, 2014 ISBN
9788126520510
Reference Books
Andrew S. Tanenbaum, "Distributed Operating System", Pearson
D M Dhamdhere, "System Programming and Operating System", Tata McGraw- Hill, 2009
P. Pa] Choudhury, "Operating Systems Principles and Design", PHI, 2011
# # #