Notes On OS (BCA CACS 251)

Notes of bca

Uploaded by bishal000baniya

Course Title: Operating System

Course Code: CACS 251


Year/ Semester: 2nd Year/ 4th Semester
Class Load: 6 Hrs. (Theory 3 Hrs./ Practical 3 Hrs.)
------------------------------------------------------------------------------------------------------------------------------------------------
Course Description
This course covers topics that help students understand the operating system and its functionality, along with its types.

Course Objectives
The general objective of this subject is to introduce the basic features and functions of an operating system and how it interfaces with hardware and application software to run the computer smoothly.

Unit 1) Introduction to Operating System


Introduction to OS
An operating system (OS) is software that acts as an interface between computer hardware and user
applications. It is a fundamental component of a computer system that manages various resources and
provides services to facilitate the execution of software programs.
The primary functions of an operating system include:
 Process Management
 Memory Management
 File System Management
 Device Management
 User Interface
 Security
 Networking
Different operating systems have varying designs, architectures, and features, catering to different types of
computers, devices, and computing environments. Examples of popular operating systems include Microsoft
Windows, macOS, Linux, Android, and iOS.

Generations of Operating Systems


Operating systems have evolved over several generations, each introducing new features and
advancements. Here is a general overview of the generations of operating systems:
 First Generation (1940s-1950s): The first generation of operating systems was primarily focused on
running a single program at a time. These systems were often batch processing systems where users
would submit jobs to be processed sequentially. Examples include the Electronic Delay Storage
Automatic Calculator (EDSAC) and the IBM 701.
 Second Generation (1950s-1960s): The second generation saw the development of multiprogramming
operating systems, which allowed multiple programs to be loaded into memory simultaneously. This
enabled more efficient resource utilization and improved system performance. Notable operating
systems of this generation include the IBM OS/360 and Burroughs MCP.
 Third Generation (1960s-1970s): The third generation introduced time-sharing operating systems,
enabling multiple users to interact with a computer system simultaneously. Users could run their
programs and share system resources such as CPU time and memory. Examples include the Compatible
Time-Sharing System (CTSS) and Multics.
 Fourth Generation (1970s-1980s): The fourth generation brought the development of microprocessor-
based systems and the rise of personal computers. It witnessed the emergence of operating systems
designed for personal computers, such as Microsoft Disk Operating System (MS-DOS) and Apple DOS.
 Fifth Generation (1980s-Present): The fifth generation saw significant advancements in operating
systems, including the development of graphical user interfaces (GUIs) that made computers more user-
friendly. This era gave rise to popular operating systems like Microsoft Windows and Apple's Mac OS.
 Sixth Generation (1990s-Present): The sixth generation introduced networked and distributed operating
systems, enabling computers to communicate and share resources over networks. This led to the growth
of client-server architectures and the development of operating systems with built-in networking
capabilities, such as Novell NetWare and various UNIX variants.
 Seventh Generation (2000s-Present): The seventh generation is characterized by the widespread use of
mobile devices and the development of mobile operating systems. This includes operating systems like
Android and iOS, which are specifically designed for smartphones and tablets.
It's important to note that the concept of generations is a broad categorization, and operating systems
continue to evolve with new features and advancements in each generation. Additionally, advancements in
virtualization, cloud computing, and other technologies have influenced the development of modern
operating systems.

Objectives (Two Views of OS)


The two common views of an operating system are the resource manager view and the extended machine view. Let's explore each of them:
Resource Manager
Operating systems play a crucial role as resource managers, efficiently allocating and managing system
resources to ensure optimal utilization and coordination. Here are some key aspects of the operating system
as a resource manager:
 CPU Management: The operating system manages the central processing unit (CPU) resources of a
computer system. It schedules and allocates CPU time among different processes and threads, ensuring
fairness and maximizing overall system performance. The operating system employs scheduling
algorithms to determine which processes should be given access to the CPU and for how long.
 Memory Management: Operating systems handle system memory resources, managing the allocation
and deallocation of memory to different processes and applications. They track the availability of
memory, allocate memory space for processes, handle memory fragmentation, and implement memory
protection mechanisms. Memory management ensures efficient utilization of available memory and
prevents unauthorized access to memory areas.
 Device Management: Operating systems manage various input/output (I/O) devices, such as keyboards,
mice, printers, disks, and network interfaces. They handle device drivers, which act as intermediaries
between the operating system and the devices, enabling communication and data transfer. The
operating system coordinates access to devices, handles I/O requests from processes, and optimizes
device utilization to ensure efficient I/O operations.
 File System Management: Operating systems provide file system management, handling the
organization, storage, and retrieval of files on secondary storage devices. They manage file access
permissions, enforce file integrity and security, and facilitate file-related operations such as creation,
deletion, and modification. The operating system ensures efficient storage allocation and implements
file caching mechanisms to improve performance.
 Network Resource Management: In networked environments, operating systems manage network
resources, including network interfaces, protocols, and connections. They handle network configuration,
routing, and traffic management. Operating systems facilitate network communication, allow
applications to establish network connections, and handle data transfer between systems.
 Resource Allocation and Scheduling: The operating system is responsible for allocating and scheduling
system resources effectively. It balances resource allocation based on factors such as process priorities,
fairness, and system load. The operating system employs scheduling algorithms to determine the order
in which processes are executed, ensuring efficient resource utilization and responsiveness.
 Security and Access Control: Operating systems implement security mechanisms to protect system
resources and user data. They enforce user authentication, access control policies, and encryption
techniques. Operating systems manage user accounts, permissions, and privileges, ensuring that only
authorized users have access to system resources.
By efficiently managing system resources, the operating system ensures that multiple processes and
applications can run concurrently without conflicts, maximizes resource utilization, and provides a stable and
secure computing environment.
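The CPU-management idea above — sharing the processor by giving each process a fixed time slice — can be sketched with a toy round-robin scheduler. This is an illustrative sketch only, not how a real kernel scheduler is implemented; the process names and burst times are invented for the example.

```python
from collections import deque

def round_robin(processes, quantum):
    """Toy round-robin scheduling: `processes` maps a process name to its
    remaining CPU burst time; returns the order in which processes finish."""
    ready = deque(processes.items())   # the ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum           # the process runs for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # preempted: back of the queue
        else:
            finished.append(name)            # burst complete
    return finished

# Three processes with burst times 3, 5, and 2, scheduled with a quantum of 2:
print(round_robin({"P1": 3, "P2": 5, "P3": 2}, quantum=2))  # → ['P3', 'P1', 'P2']
```

Note how P3, with the shortest burst, finishes first even though it started last in the queue — each process gets an equal share of CPU time per round.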

Extended Machine
The concept of an operating system as an extended machine refers to the idea that the operating system
provides an abstraction layer between the underlying hardware and user applications, making the system
easier to use and program. It presents a higher-level interface to applications, hiding the complexities of the
underlying hardware and providing a consistent and standardized environment.
Here are some key aspects of the operating system as an extended machine:
 Abstraction: The operating system abstracts the hardware resources, such as the CPU, memory, storage,
and devices, into a unified and standardized interface. It provides a set of well-defined APIs (Application
Programming Interfaces) that applications can use to interact with the hardware without needing to
know the low-level details of the specific hardware implementation.
 Resource Management: The operating system manages the allocation and utilization of system
resources on behalf of applications. It provides mechanisms for scheduling CPU time, managing memory
allocation, coordinating access to devices, and handling input/output operations. This abstraction allows
applications to focus on their specific tasks without worrying about low-level resource management.
 Process and Thread Management: The operating system provides process and thread management
facilities, allowing applications to execute concurrently and efficiently utilize the CPU. It handles tasks
such as process creation, scheduling, context switching, and inter-process communication. This
abstraction enables applications to run in a multitasking environment, giving the illusion of parallel
execution.
 File System: The operating system presents a file system abstraction that allows applications to create,
read, write, and organize files and directories. It provides a hierarchical structure and file access
mechanisms, shielding applications from the complexities of physical storage devices and disk
management.
 Device Drivers: The operating system provides device drivers that act as intermediaries between
applications and hardware devices. These drivers abstract the device-specific details and provide a
standardized interface for applications to interact with different types of devices, such as printers, disks,
network interfaces, and graphics cards.
 User Interface: The operating system presents a user interface (UI) that allows users to interact with the
system and applications. This can be a command-line interface (CLI) or a graphical user interface (GUI).
The UI provides a consistent and intuitive way for users to perform tasks, launch applications, and
manage system settings.
By acting as an extended machine, the operating system simplifies the programming process, enhances
application portability across different hardware platforms, and provides a user-friendly environment. It
shields applications from the underlying hardware complexities, allowing developers and users to focus on
higher-level tasks and functionality.
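The file system abstraction described above is easy to see from a language like Python, which exposes both a high-level file interface and thin wrappers over the kernel's system calls (the file name and path below are made up for the example):

```python
import os
import tempfile

# High-level abstraction: open() hides file descriptors, buffering, and the
# details of the underlying storage device entirely.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello OS\n")

# Lower-level view: os.open/os.read/os.close are thin wrappers around the
# kernel's open/read/close system calls and work with raw file descriptors.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
print(data.decode(), end="")  # hello OS
```

Either way, the application never deals with disk blocks or device registers — the operating system's extended-machine interface handles that beneath both levels.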

Types of Operating System


Operating systems can be categorized into several types based on their design, purpose, and the devices
they run on. Here are some of the common types of operating systems:
 Desktop Operating Systems: These are designed for personal computers and workstations. Examples
include Microsoft Windows, macOS (previously OS X), and Linux distributions like Ubuntu and Fedora.
 Server Operating Systems: These are optimized for running servers and providing services to multiple
clients or users. Popular server operating systems include Windows Server, Linux distributions like
CentOS and Debian, and UNIX variants like FreeBSD.
 Mobile Operating Systems: These are designed for mobile devices such as smartphones and tablets.
Well-known mobile operating systems include Android (developed by Google), iOS (developed by Apple),
and Windows 10 Mobile.
 Real-Time Operating Systems (RTOS): RTOS is designed for systems that require precise and predictable
timing, often used in embedded systems and industrial applications. Examples of RTOS include VxWorks,
FreeRTOS, and QNX.
 Multi-User Operating Systems: These operating systems allow multiple users to access and use a
computer simultaneously. Examples include Unix-like systems (Linux, macOS, and various versions of
Unix itself), mainframe operating systems like IBM's z/OS, and some server operating systems.
 Embedded Operating Systems: These are lightweight operating systems designed for embedded devices
with limited resources and specific purposes. They are commonly found in devices like routers, smart
appliances, and IoT (Internet of Things) devices. Examples include Embedded Linux, FreeRTOS, and
Windows Embedded.
 Time-Sharing Operating Systems: These operating systems allow multiple users to share a computer's
resources by dividing the CPU time among them. The users can run different applications
simultaneously. UNIX and its variants are often used as time-sharing operating systems.
 Distributed Operating Systems: These operating systems run on multiple interconnected computers and
enable them to work together as a single system. They facilitate communication, resource sharing, and
coordination among the networked computers. Examples include Amoeba, Plan 9 from Bell Labs, and
Windows Distributed File System (DFS).
 Virtualization Operating Systems: These operating systems provide virtualization capabilities, allowing
multiple virtual machines (VMs) to run on a single physical machine. Examples include VMware ESXi,
Microsoft Hyper-V, and Xen.
These are just a few examples of the different types of operating systems. Each type serves specific needs
and has its own characteristics and features.

Functions of Operating System


Operating systems perform a variety of functions to manage and control computer hardware and software
resources. These functions can be categorized into several broad categories:
 Process Management: The operating system manages processes, which are running instances of
programs. It creates, schedules, and terminates processes, allocates CPU time to processes, and
facilitates inter-process communication and synchronization.
 Memory Management: The operating system is responsible for managing system memory. It allocates
and deallocates memory to processes, handles memory fragmentation, and implements techniques like
virtual memory to efficiently utilize available memory resources.
 File System Management: The operating system provides file system management, which involves
organizing, storing, and retrieving files on secondary storage devices. It handles file creation, deletion,
reading, and writing, as well as managing file access permissions and enforcing file integrity.
 Device Management: Operating systems manage input/output (I/O) devices such as keyboards, mice,
printers, disks, and network interfaces. They handle device drivers, which facilitate communication
between the operating system and the devices, and manage device access, data transfer, and device
configuration.
 User Interface: The operating system provides a user interface (UI) that allows users to interact with the
computer system. This can be a command-line interface (CLI) or a graphical user interface (GUI). The UI
handles user input, displays system information and status, and provides a platform for running
applications.
 Resource Allocation and Scheduling: The operating system is responsible for allocating system resources,
including CPU time, memory, and I/O devices, to different processes and applications. It employs
scheduling algorithms to determine the order in which processes are executed and manages resource
sharing and contention to ensure efficient resource utilization.
 Security and Access Control: Operating systems incorporate security mechanisms to protect system
resources and user data. They enforce user authentication, access control policies, and encryption
techniques. Operating systems manage user accounts, permissions, and privileges to ensure secure and
authorized access to system resources.
 Error Handling and Fault Tolerance: Operating systems handle errors and faults that occur during system
operation. They implement error detection and recovery mechanisms, such as exception handling, to
ensure system stability and reliability. Operating systems also provide fault tolerance features like
backup and recovery mechanisms to minimize the impact of failures.
 Networking and Communication: In networked environments, operating systems facilitate network
communication and data transfer. They handle network protocols, manage network connections, and
provide services like file sharing, internet connectivity, and network printing.
These are some of the key functions performed by operating systems. Each function plays a crucial role in
managing and coordinating the resources of a computer system, providing a stable and user-friendly
environment for applications and users.
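The process-management function can be observed directly: when a program asks for a new process, the operating system creates it, schedules it, and reports its exit status back to the parent. A minimal sketch using Python's portable `subprocess` wrapper (the child program here is just an invented one-liner):

```python
import subprocess
import sys

# Ask the OS to create a new process running a separate Python program.
# The kernel handles process creation, CPU scheduling, and exit-status
# reporting; subprocess.run merely wraps those services.
result = subprocess.run(
    [sys.executable, "-c", "print('child process running')"],
    capture_output=True, text=True,
)
print(result.stdout.strip())             # child process running
print("child exit status:", result.returncode)  # 0 means normal termination
```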
Unit 2) Operating System Structure
Introduction
The structure of an operating system refers to its organization and components that work together to
provide various functionalities and services. The structure can vary based on the design principles, goals, and
architectural decisions of a particular operating system. However, there are a few common structural
components found in many operating systems:

 Kernel: The kernel is the core component of the operating system. It resides in memory and provides
essential services and functionality to interact with the underlying hardware. The kernel handles tasks
such as process management, memory management, device management, and system resource
allocation. It acts as an intermediary between user applications and hardware, facilitating
communication and coordination.
 Device Drivers: Device drivers are software components that enable communication between the
operating system and hardware devices. They provide a standardized interface for the operating system
to interact with different types of devices, such as keyboards, mice, printers, disks, and network
interfaces. Device drivers handle device-specific details and manage device access, data transfer, and
device configuration.
 File System: The file system component manages the organization, storage, and retrieval of files on
storage devices. It provides a hierarchical structure, directory management, and file access controls. The
file system handles file operations such as creation, deletion, reading, and writing. It also implements
caching mechanisms and disk scheduling algorithms to optimize file access and storage efficiency.
 Process Management: The process management component is responsible for creating, scheduling, and
managing processes (running programs). It handles process creation, termination, and context switching.
It also manages process synchronization and communication mechanisms, allowing processes to
communicate and coordinate with each other. Process management ensures fair CPU time allocation
and efficient execution of multiple processes.
 Memory Management: Memory management involves managing system memory resources, including
allocation, deallocation, and protection. It handles memory allocation to processes, manages virtual
memory systems, and implements memory protection mechanisms to prevent unauthorized access.
Memory management also involves techniques such as paging, swapping, and memory fragmentation
handling.
 File I/O and Input/ Output Management: The file I/O and input/output management component
manages input and output operations, including interactions with devices, files, and network resources.
It handles I/O requests, buffers data for efficient transfer, and coordinates I/O operations to ensure
timely and reliable data transfer between applications and devices.
 User Interface: The user interface component provides the means for users to interact with the
operating system. It can be a command-line interface (CLI), where users type commands, or a graphical
user interface (GUI) with windows, icons, menus, and pointers. The user interface component handles
user input, displays system status and information, and facilitates the execution of user commands and
application launching.
These are some of the common structural components of an operating system. The specific design and
organization may vary between different operating systems, and some systems may have additional
components or variations based on their goals and requirements.
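The memory-management component mentioned above decides where in free memory each allocation request should be placed. A toy sketch of the classic first-fit policy — scan the list of free holes and take the first one large enough — using invented sizes and addresses (real kernels use far more elaborate structures):

```python
def first_fit(holes, request):
    """Toy first-fit allocation: `holes` is a list of (start, size) free
    blocks; returns the chosen start address and the updated hole list."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            # Carve the request off the front of this hole; keep any leftover.
            leftover = (start + request, size - request)
            new_holes = holes[:i] + ([leftover] if leftover[1] else []) + holes[i + 1:]
            return start, new_holes
    return None, holes  # no hole is large enough

# Free memory: holes at address 0 (100 KB) and 200 (50 KB); allocate 30 KB.
addr, holes = first_fit([(0, 100), (200, 50)], 30)
print(addr, holes)  # → 0 [(30, 70), (200, 50)]
```

The shrinking leftover holes illustrate external fragmentation — exactly the problem the memory-fragmentation handling described above must deal with.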

Layered System
A layered system is a common architectural design used in operating systems, where the functionality of the
operating system is organized into distinct layers. Each layer provides a set of services to the layer above it
and utilizes services from the layer below it. The layered system helps in modularizing the operating system
and simplifying its design and development. Here is a typical layered structure of an operating system:

 Hardware Layer: The lowest layer directly interfaces with the hardware components of the computer
system, including the CPU, memory, disks, network interfaces, and other devices. It provides basic
services for accessing and controlling hardware resources.
 Kernel Layer: The kernel layer forms the core of the operating system and provides essential services to
the upper layers. It manages system resources, such as process management, memory management,
device drivers, and file systems. The kernel interacts with the hardware layer to control and allocate
resources efficiently.
 System Call Layer: The system call layer provides an interface between user-level applications and the
kernel. It allows applications to request services from the kernel through system calls, which are
predefined functions that provide access to the operating system's functionalities. System calls handle
tasks such as process creation, file operations, network communication, and inter-process
communication.
 Library Layer: The library layer consists of libraries that provide higher-level functions and routines to
applications. These libraries abstract the complexity of system calls and provide a convenient interface
for application development. They may include standard libraries for input/output operations,
networking, graphics, and other common tasks.
 Application Layer: The topmost layer consists of user-level applications that utilize the services provided
by lower layers. These applications can range from simple command-line tools to complex software like
word processors, web browsers, and media players. They interact with the system through system calls
or libraries and make use of the underlying operating system services.
Each layer in the layered system encapsulates a specific set of functionalities, and the interactions between
layers follow a hierarchical structure. The layering helps in modularization, making it easier to develop,
maintain, and enhance the operating system. It also allows for portability, as different implementations of
the same layer can be used on different hardware platforms.
It's worth noting that the specific layers and their functionalities can vary between different operating
systems. Some operating systems may have additional layers or different organization based on their design
principles and goals.

Kernel
The kernel is a fundamental component of an operating system. It is the core part that directly interacts with
the underlying hardware and provides essential services to other parts of the operating system and user
applications. The kernel acts as a bridge between software and hardware, enabling the efficient and
controlled use of system resources.
Here are some key characteristics and responsibilities of a kernel:
 Hardware Interaction: The kernel directly interacts with hardware components such as the CPU,
memory, disks, network interfaces, and input/output devices. It manages and controls these resources,
allocating them to different processes and ensuring their proper utilization.
 Resource Management: The kernel is responsible for managing system resources, including memory,
CPU time, and devices. It performs tasks such as process management, memory management, and
device management to ensure fair and efficient allocation of resources among running processes.
 Process Management: The kernel handles process creation, termination, and scheduling. It manages the
execution of multiple processes or threads, allocating CPU time to them and handling context switches
to allow for multitasking and concurrency.
 Memory Management: The kernel manages system memory, allocating memory to processes, handling
memory allocation and deallocation, and implementing memory protection mechanisms to prevent
unauthorized access. It also handles techniques such as virtual memory, which allows processes to use
more memory than physically available.
 Device Drivers: The kernel includes device drivers that act as intermediaries between the operating
system and hardware devices. Device drivers facilitate communication between the kernel and devices,
enabling the operating system to control and utilize the various input/output devices connected to the
system.
 System Calls: The kernel provides an interface for user applications to request services from the
operating system. These requests are made through system calls, which are predefined functions or
methods that allow applications to access kernel services. System calls handle operations such as file I/O,
network communication, process creation, and resource allocation.
 Kernel Security: The kernel enforces security measures to protect the system and user data. It manages
user authentication, access control, and implements security mechanisms to ensure that only authorized
users can access system resources.
 Error Handling and Exception Handling: The kernel handles errors and exceptions that occur during
system operation. It detects and responds to hardware faults, software errors, and exceptions raised by
user applications, ensuring system stability and reliability.
The kernel is typically implemented as a low-level software layer that runs in privileged mode, with direct
access to hardware resources. Its efficient and reliable functioning is crucial for the overall performance and
stability of the operating system. Different operating systems may have different types of kernels, such as
monolithic kernels, microkernels, or hybrid kernels, depending on their design principles and goals.

Types of Kernel
There are several types of kernels used in operating systems, each with its own design principles and
characteristics. The main types of kernels include:
 Monolithic Kernel
 Microkernel
 Hybrid Kernel
 Exo-kernel
 Nano-kernel
It's important to note that the choice of kernel type depends on the specific requirements, goals, and design
principles of the operating system. Different operating systems may employ different kernel types to achieve
the desired balance between performance, modularity, and security. Additionally, variations and
combinations of these kernel types exist, making the classification less rigid and allowing for customized
designs.

Monolithic Kernel
A monolithic kernel is a type of operating system kernel where the entire operating system, including device
drivers, file systems, and system services, runs in kernel space. In a monolithic kernel, all the kernel
components are tightly integrated and share the same address space, meaning they run in a single large
process or kernel thread.

In a monolithic kernel, the kernel directly interacts with hardware and provides services to user applications.
It handles essential tasks such as process management, memory management, device drivers, file systems,
and inter-process communication. All these functionalities are tightly coupled within the kernel, allowing for
efficient and direct communication between components.

Here are some key characteristics of a monolithic kernel:


 Integration: All the core components of the operating system, including device drivers, file systems, and
system services, are integrated into a single large binary running in kernel space. This tight integration
enables direct and efficient communication between kernel components.
 Performance: Due to the direct access and sharing of memory, monolithic kernels often provide better
performance compared to other kernel types. The absence of inter-process communication overhead
can result in faster execution of system calls and lower latency.
 Simplified Communication: With the components running in the same address space, communication
between different kernel modules is relatively straightforward. They can share data structures and
function calls without the need for additional inter-process communication mechanisms.
 Lack of Modularity: The tight coupling of components in a monolithic kernel makes it less modular.
Modifications or updates to one component can affect the behavior and stability of other components,
potentially leading to system crashes or instability.
 Kernel Privilege: Since all components run in kernel space, they have full access to system resources and
operate with kernel-level privileges. This unrestricted access means that a bug or vulnerability in any
component can potentially compromise the entire system.

Examples of operating systems that use monolithic kernels include Linux (the Linux kernel), traditional Unix systems, and Windows 9x (Windows NT and its successors are usually classified as hybrid kernels). However, it's worth noting that even in monolithic kernels, certain functionalities may be implemented as loadable kernel modules, which can be dynamically loaded and unloaded without rebooting the system, allowing for some level of modularity.

Overall, monolithic kernels provide a simple and efficient design but may sacrifice modularity and flexibility.
The design choice between monolithic kernels and other kernel types depends on the specific goals and
requirements of the operating system.

Microkernel
A microkernel is a type of operating system kernel design that aims to provide only the most essential
functions and services in the kernel, while moving higher-level services, such as device drivers and file
systems, out of the kernel and into user space. The microkernel concept is based on the principle of
minimizing the kernel's size and complexity, promoting modularity, and isolating components for improved
reliability and security.

In a microkernel design, the kernel provides a minimal set of core services, typically including process
management, memory management, and inter-process communication (IPC). It focuses on ensuring that
these core services are efficient, reliable, and protected. Other non-essential services, such as file systems,
device drivers, and networking protocols, are implemented as separate user-level processes or servers that
run in user space.
Here are some key characteristics of a microkernel:
 Minimalism: The microkernel design philosophy advocates for keeping the kernel small and minimal,
providing only the essential functions necessary for system operation. This approach simplifies the
kernel's design and reduces its complexity.
 Modularity: By moving non-essential services out of the kernel, a microkernel promotes modularity.
Each service or driver is implemented as a separate user-level process, allowing for easier development,
maintenance, and customization. Adding or updating a service can be done without modifying or
restarting the kernel.
 Message Passing: Inter-process communication (IPC) becomes a critical mechanism in a microkernel-
based system. Processes communicate with each other through message passing, using well-defined
protocols. IPC allows for the exchange of data and requests between processes running in user space.
 Fault Isolation: The separation of services and drivers into user space processes provides better fault
isolation. If a user-level process or server fails, it does not affect the entire system or crash the kernel.
This isolation enhances system reliability and availability.
 Security: The microkernel design enhances security by minimizing the trusted computing base. Since
non-essential services are running in user space, any vulnerability or exploit in those components is
contained within that process and does not directly compromise the kernel or other critical system
components.
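The message-passing style described above can be sketched in Python. Real microkernels place each server in its own address space and pass messages through kernel-mediated IPC; here, threads and queues stand in for that mechanism purely to keep the sketch portable, and every name (fs_server, READ, SHUTDOWN) is illustrative rather than taken from any real kernel.

```python
# Microkernel-style IPC sketch: a "file server" runs outside the client's
# code path and is reached only through request/reply messages.
import queue
import threading

requests: "queue.Queue[tuple[str, str]]" = queue.Queue()
replies: "queue.Queue[str]" = queue.Queue()

def fs_server():
    """User-level server: loops on incoming messages until SHUTDOWN."""
    files = {"readme.txt": "hello from user space"}
    while True:
        op, name = requests.get()          # receive a message
        if op == "SHUTDOWN":
            break
        if op == "READ":
            replies.put(files.get(name, "ENOENT"))   # send the reply

server = threading.Thread(target=fs_server)
server.start()

requests.put(("READ", "readme.txt"))       # client sends a request message...
reply = replies.get()                      # ...and blocks waiting for the reply
print(reply)
requests.put(("SHUTDOWN", ""))
server.join()
```

Note that if the server thread crashed here, the client and the rest of the program would keep running: that is the fault-isolation property the microkernel design aims for.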

Examples of operating systems that adopt a microkernel design include QNX, MINIX 3, and L4. The
microkernel concept has also influenced other kernel designs, such as the hybrid kernel, which combines
some characteristics of both monolithic and microkernel designs.
While microkernels offer advantages such as modularity and improved security, they can introduce
performance overhead due to the need for inter-process communication and potential context switches
between user-level processes. The design choice between microkernels and other kernel types depends on
the specific goals and trade-offs desired for the operating system.

Exo Kernel;
An exokernel is a type of operating system kernel design that takes the concept of minimalism to the
extreme. It aims to provide a minimalistic kernel that exposes low-level hardware abstractions directly to
user-level applications. The exokernel design philosophy emphasizes flexibility and efficiency by removing
most of the traditional abstractions and policies typically found in other kernel designs.

In an exokernel-based system, the kernel's primary role is to securely multiplex and allocate hardware
resources among different applications. It does not provide higher-level abstractions like file systems,
networking protocols, or memory management. Instead, it exposes low-level primitives and hardware
abstractions to user-level applications, allowing them to directly manage and control these resources.

Here are some key characteristics of an exokernel:


 Resource Abstraction: An exokernel provides a minimal interface that exposes low-level hardware
resources, such as CPU, memory, disks, and network interfaces, to user-level applications. It allows
applications to directly manipulate and control these resources based on their specific requirements.
 Minimal Protection: Unlike traditional kernels that enforce strict policies and abstractions, an exokernel
provides minimal protection and does not impose any specific resource management policies. It leaves
most policy decisions, such as memory allocation and protection, to the application level.
 Flexibility and Customizability: The lack of abstractions and policies in the exokernel design allows
applications to have maximum flexibility and control over resource management. Applications can
implement their own resource allocation strategies and customize the system behavior according to
their specific needs.
 Performance Optimization: By exposing low-level hardware abstractions, an exokernel aims to eliminate
unnecessary overhead and maximize performance. Applications can optimize resource usage directly,
avoiding the overhead introduced by traditional abstractions implemented in the kernel.
 Complex Application-Level Management: With the removal of higher-level abstractions and policies, the
responsibility of managing resources and ensuring system correctness falls primarily on the application
developers. This puts a greater burden on application-level software to handle resource allocation,
protection, and synchronization, which can be complex and error-prone.

The exokernel concept challenges traditional operating system designs by providing a minimalistic
foundation for application-level resource management. It trades off higher-level abstractions and policies for
increased flexibility and performance. Exokernels have been explored primarily in research and experimental
operating systems, and their practical use in production systems is limited.

Examples of exokernel-based operating systems include the MIT Exokernel project and its prototype kernels, Aegis and XOK. While exokernels offer unique advantages, their design and implementation require significant expertise and
careful consideration of system-level requirements and trade-offs.

Hybrid kernel;
A hybrid kernel is a type of operating system kernel design that combines elements of both monolithic and
microkernel architectures. It aims to strike a balance between the performance and efficiency of a
monolithic kernel and the modularity and fault isolation of a microkernel. Hybrid kernels retain some kernel
functions in kernel space while moving non-essential services and drivers to user space.

In a hybrid kernel design, the core operating system services, such as process management, memory
management, and basic device drivers, are implemented in kernel space, similar to a monolithic kernel. This
allows for direct and efficient communication between these core components. However, certain services,
such as file systems, networking protocols, and advanced device drivers, are moved out of the kernel and
into user space as separate modules or servers.

Here are some key characteristics of a hybrid kernel:


 Kernel Space and User Space Separation: Hybrid kernels maintain a distinction between kernel space and
user space. The essential core services and drivers reside in kernel space for efficient access to hardware
resources and low-level operations. Non-essential services and drivers are implemented in user space for
improved modularity and fault isolation.
 Improved Modularity: By moving non-essential components to user space, hybrid kernels promote
modularity. Different modules or servers can be added, updated, or removed without modifying or
restarting the kernel. This enhances flexibility and simplifies development and maintenance.
 Performance and Efficiency: The core components residing in kernel space provide direct access to
hardware resources and efficient communication. This enables high-performance operations and
reduces overhead compared to fully user-space implementations.
 Fault Isolation and Reliability: By moving non-essential services to user space, hybrid kernels enhance
fault isolation. If a user-level module or server fails, it does not impact the stability or crash the kernel.
This improves system reliability and availability.
 Enhanced Security: The separation of user-level services from the kernel in a hybrid design contributes
to improved security. User-level services have limited access to system resources, reducing the risk of
compromising critical kernel components.

Examples of operating systems that use a hybrid kernel design include Microsoft Windows NT and its
derivatives, such as Windows 2000, Windows XP, and later versions. These operating systems retain core
services and drivers in kernel space but implement some services, such as file systems and network
protocols, as user-level modules.

Hybrid kernels offer a balance between performance and modularity, making them a popular choice for
many commercial operating systems. However, they still require careful design and development to manage
the interactions between kernel and user space components effectively.

Nano Kernel;
A nanokernel, also known as a minimal kernel, is an extremely minimalistic operating system
kernel design that provides only the most essential functions required for system operation. It aims to
reduce the kernel's size, complexity, and resource usage to the bare minimum while still providing the
necessary functionality for the system to function.

In a nanokernel design, the kernel focuses on fundamental services such as thread management, basic inter-
process communication, and hardware abstraction. It delegates most system services, such as memory
management, file systems, and device drivers, to user-level libraries or servers that run in user space.

The main characteristics of a nanokernel include:


 Minimalism: A nanokernel design emphasizes simplicity and minimalism. It strives to keep the kernel
code size as small as possible, reducing the complexity and resource requirements of the kernel.
 Basic Services: The nanokernel provides only the most essential services required for system operation,
such as thread management, basic synchronization primitives, and inter-process communication
mechanisms.
 Delegated Services: Non-essential services, including memory management, file systems, and device
drivers, are implemented as separate user-level libraries or servers running in user space. These services
interact with the nanokernel through well-defined interfaces.
 User-Level Flexibility: By delegating system services to user space, a nanokernel provides flexibility for
users to choose and customize their preferred implementation of these services. It allows for different
user-level libraries or servers to coexist and be easily replaced or upgraded without modifying or
restarting the kernel.
 Reduced Resource Usage: The minimalist design of a nanokernel results in reduced resource usage,
including memory footprint and CPU overhead. This can lead to improved system performance and
efficiency.

It's important to note that the term "nanokernel" is not as widely used or recognized as terms like
monolithic, microkernel, or hybrid. The nanokernel concept emphasizes extreme minimalism and
flexibility, and it is often employed in specialized or resource-constrained systems where small size and
efficiency are paramount.

Client-Server Model,
The client-server model is a computing architecture in which tasks and responsibilities are divided between
two types of entities: clients and servers. It is a common architectural pattern used in operating systems and
networked environments to facilitate communication and resource sharing between multiple entities.

In the client-server model, the client is typically an application or a user's device that requests services or
resources from a server. The server, on the other hand, is a dedicated entity that provides services,
processes requests, and manages resources on behalf of the clients. Clients and servers communicate with
each other over a network using standardized protocols.

Here are the key characteristics of the client-server model:


 Request-Response Interaction: Clients initiate requests to servers, specifying the desired service or
resource. Servers receive these requests, process them, and respond with the requested data or perform
the requested action. The communication between clients and servers typically follows a request-
response pattern.
 Service Provision: Servers offer specific services or resources that clients can access. These services can
include file sharing, database management, web hosting, email delivery, or any other functionality
provided by the server application. Clients interact with servers to access and utilize these services.
 Resource Management: Servers manage and control shared resources on behalf of clients. This includes
handling concurrent access to resources, enforcing security measures, and maintaining data consistency.
By centralizing resource management, servers can ensure efficient and controlled access to shared
resources.
 Scalability: The client-server model allows for scalable system architectures. Servers can be designed to
handle multiple client requests concurrently, providing services to numerous clients simultaneously. This
scalability enables systems to handle increasing workloads and accommodate a growing number of
clients.
 Distributed Computing: In a distributed computing environment, clients and servers can be located on
different machines or networks. Clients can connect to remote servers over a network, enabling
distributed processing and resource utilization.

The client-server model is widely used in various domains, including web applications, database systems,
cloud computing, and network infrastructure. It provides a modular and scalable approach to system design,
allowing for efficient resource sharing and centralized management of services.
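The request-response interaction above can be sketched with TCP sockets. This is a minimal illustration, not a production server: the server handles a single connection and echoes the request back in upper case, and binding to port 0 lets the operating system pick a free port.

```python
# Client-server sketch: one server thread, one client, one request-response
# exchange over a local TCP connection.
import socket
import threading

def serve_once(server_sock: socket.socket):
    conn, _ = server_sock.accept()          # wait for one client to connect
    with conn:
        request = conn.recv(1024)           # receive the client's request
        conn.sendall(request.upper())       # process it and send the response

server = socket.socket()
server.bind(("127.0.0.1", 0))               # port 0: OS assigns a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Client side: connect to the server, send a request, read the response.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    reply = client.recv(1024)
server.close()
print(reply.decode())
```

The same pattern scales up: a real server would accept connections in a loop and hand each one to a worker, which is how one server provides services to many clients concurrently.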

Virtual Machines,
A virtual machine (VM) in the context of operating systems is a software emulation of a physical computer
system. It enables the execution of multiple operating systems or instances of the same operating system on
a single physical machine, providing isolation and flexibility for running different software environments
concurrently.
Here are some key points about virtual machines:
 Virtualization: Virtual machines are created and managed by a virtualization layer, often called a
hypervisor or a virtual machine monitor (VMM). The hypervisor abstracts the underlying physical
hardware and allows multiple virtual machines to run independently on the same physical machine.
 Guest Operating Systems: Each virtual machine is configured with its own guest operating system, which
behaves as if it is running on a dedicated physical machine. The guest operating system, such as
Windows, Linux, or macOS, interacts with the virtual hardware provided by the hypervisor.
 Resource Allocation: The hypervisor manages the allocation of physical resources, such as CPU, memory,
storage, and network connectivity, among the virtual machines. It ensures that each virtual machine
receives a fair share of resources and prevents interference or conflicts between them.
 Isolation: Virtual machines provide a high degree of isolation between different instances. Each virtual
machine runs in its own isolated environment, with its own memory space, file system, and network
interfaces. This isolation enhances security and stability, as issues in one virtual machine do not directly
affect others.
 Snapshot and Migration: Virtual machines often support snapshotting and migration capabilities.
Snapshots allow capturing the state of a virtual machine at a specific point in time, enabling easy rollback
or cloning of virtual machine instances. Migration allows moving a running virtual machine from one
physical host to another without disruption, providing flexibility and load balancing.
 Virtual Hardware: Virtual machines present virtualized hardware to the guest operating systems,
including virtual processors (CPU cores), virtual memory, virtual disks, and virtual network interfaces.
These virtual hardware components are managed and controlled by the hypervisor.
Virtual machines have numerous applications, including:
 Server Virtualization: Consolidating multiple servers onto a single physical machine, reducing hardware
costs and improving resource utilization.
 Software Development and Testing: Providing developers with isolated environments for testing
software on different operating systems or configurations.
 Legacy System Support: Running older or incompatible software on virtual machines to maintain
compatibility.
 Cloud Computing: Virtual machines form the foundation of Infrastructure-as-a-Service (IaaS) cloud
offerings, allowing users to provision and manage virtual machines remotely.

Popular virtualization platforms include VMware vSphere, Microsoft Hyper-V, Xen, KVM (Kernel-based
Virtual Machine), and VirtualBox.

Shell;
In the context of operating systems, a shell is a command-line interface (CLI) or a graphical user interface
(GUI) that allows users to interact with the operating system and execute commands. It acts as a user
interface layer that interprets and processes user commands to perform various tasks and manage the
system resources.

Here are some key points about shells:


 Command Interpreter: The shell serves as a command interpreter, which means it interprets the commands
entered by the user and executes the corresponding actions or programs. It provides a way for users to
interact with the operating system and run various commands or programs.
 Command Execution: When a user enters a command, the shell parses and interprets it to identify the
appropriate action. It then initiates the execution of the command, which can involve launching built-in
commands provided by the shell or invoking external programs or scripts.
 Scripting Capabilities: Shells often support scripting, allowing users to write sequences of commands and
save them in script files. These scripts can be executed by the shell, automating repetitive tasks or
complex operations.
 Environment Customization: Shells provide options for customizing the environment and behavior
according to user preferences. Users can define environment variables, set aliases for frequently used
commands, configure prompt appearance, and modify other settings to personalize their shell
experience.
 Input/Output Redirection: Shells support input/output redirection, allowing users to redirect the input or
output of a command to or from files or other processes. This feature enables powerful command
chaining and manipulation of data streams.
 Job Control: Shells provide job control features that allow users to manage and control multiple
processes running simultaneously. Users can start background processes, switch between foreground
and background execution, suspend or resume processes, and monitor their status.
 Script Interpreters: Shells such as Bash (Bourne Again SHell) and PowerShell also act as interpreters for
their own scripting languages. They read and execute scripts written in those languages, providing
powerful automation capabilities.

Examples of commonly used shells in Unix-like operating systems include Bash, C shell (csh), Korn shell (ksh),
and Zsh. In Windows, the default shell is the Command Prompt (cmd.exe) or PowerShell, depending on the
version.
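The core read-parse-execute cycle of a shell can be sketched in a few lines of Python. This toy interpreter handles one built-in command (exit) itself and runs everything else as a child process; a real shell adds pipelines, redirection, job control, and much more.

```python
# Toy command interpreter: read a line, split it into a command and
# arguments, and either handle it as a built-in or run it as a child process.
import shlex
import subprocess

def run_line(line: str) -> int:
    """Parse and execute one command line; return its exit status."""
    args = shlex.split(line)        # quote-aware splitting, like a real shell
    if not args:
        return 0
    if args[0] == "exit":           # built-in: handled by the shell itself
        raise SystemExit(0)
    completed = subprocess.run(args)    # external command as a child process
    return completed.returncode

status = run_line("echo hello from the toy shell")
print("exit status:", status)
```

The example assumes a Unix-like system where external commands such as echo exist on the PATH; a zero exit status is the shell's conventional signal that a command succeeded.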
Overall, shells provide an interface for users to interact with the operating system, execute commands,
automate tasks, and manage system resources, making them an essential component of the user experience
in command-line or graphical environments.

Unit 3) Process Management
Process Concepts:
Definitions of Process,
In the context of an operating system (OS), a process refers to an instance of a computer program that is
being executed. It is an active entity that represents the execution of a particular program and encompasses
the program code, its associated data, and the execution context.

A process is a fundamental concept in OS design and management, and it plays a crucial role in providing
multitasking and resource allocation capabilities. Each process is isolated from other processes, ensuring
that they cannot interfere with each other's memory or resources.

Key characteristics of a process include:


 Program Code: A process consists of the instructions of a specific program, which are stored in
executable files on disk. These instructions are loaded into memory when the process starts.
 Data: A process includes the data that the program operates on, such as variables, data structures, and
buffers. This data is also stored in memory during the process's execution.
 Execution Context: The execution context of a process includes various attributes that define the state of
the process, such as the values of program counters, registers, and stack pointers. It represents the
current progress of the program and allows the OS to manage the execution and switching of processes.
 Resources: Each process has its own set of allocated system resources, such as memory, input/output
devices, files, and network connections. The OS ensures that processes are provided with the necessary
resources and manages their allocation and deallocation.
 Scheduling: The OS schedules processes to utilize the available CPU time efficiently. It determines which
process should be executed next based on scheduling algorithms and priorities.

Processes can interact with each other through inter-process communication mechanisms provided by the
OS, such as shared memory, message passing, or synchronization primitives. They can also create child
processes or terminate themselves, allowing for the creation of complex applications and multitasking
environments.

Overall, processes are the fundamental building blocks of an operating system, enabling the execution of
multiple programs simultaneously and providing an organized framework for resource management and
coordination.

The Process Model,


The process model in an operating system defines the structure and behavior of processes and their
interactions within the system. It provides a conceptual framework for managing and executing processes
efficiently. There are several process models used in operating systems, including:
 Single-Process Model: In this model, the operating system supports the execution of only one process at
a time. The entire system resources are dedicated to that process until it completes or relinquishes
control. This model is simple but not practical for most modern operating systems.
 Multi-Process Model: This model allows the concurrent execution of multiple processes, where each
process runs independently of others. Each process has its own memory space and resources, and the
operating system schedules and manages their execution. This model enables multitasking and is the
foundation for most modern operating systems.
 Hierarchical Process Model: In this model, processes are organized in a hierarchical structure, typically
referred to as a process tree. A parent process can create child processes, forming a parent-child
relationship. Child processes inherit certain attributes from their parent, such as resource allocations and
access rights. This model facilitates process management and resource sharing.
 Client-Server Model: In this model, the operating system supports the execution of two types of
processes: client processes and server processes. Client processes make requests for services or
resources, and server processes provide those services. The client and server processes communicate
through inter-process communication mechanisms. This model is commonly used in distributed systems
and networked environments.
 Thread Model: The thread model extends the process model by introducing the concept of threads.
Threads are lightweight execution units within a process that share the same memory space and
resources. Multiple threads can exist within a single process and execute concurrently, allowing for
parallelism and increased efficiency. Thread models enhance responsiveness and enable efficient
utilization of system resources.
 Hybrid Models: Some operating systems combine different process models to address specific
requirements. For example, a real-time operating system may incorporate a mix of single-process and
multi-process models to ensure timely and predictable execution of critical tasks.
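The thread model's defining property, several execution units sharing one process's memory, can be shown in a short sketch. Two threads increment the same counter, and a lock coordinates their access to the shared variable, which a multi-process design would instead have to pass through explicit IPC.

```python
# Thread model sketch: two threads in one process share the variable
# 'counter'; the lock serializes updates so no increments are lost.
import threading

counter = 0
lock = threading.Lock()

def worker(n: int):
    global counter
    for _ in range(n):
        with lock:          # shared memory, so access must be synchronized
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for both threads to finish
print(counter)
```

Without the lock, the two threads could interleave their read-modify-write steps and lose updates, which is exactly the coordination problem the process synchronization mechanisms discussed later address.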

The choice of process model depends on the specific goals and requirements of the operating system and
the applications it supports. Each model has its advantages and trade-offs in terms of performance, resource
utilization, and complexity. Operating systems often employ a combination of process models to provide a
flexible and efficient environment for executing processes.

Process States,
In an operating system, the concept of process states refers to the various states that a process can be in
during its lifecycle. These states represent different stages of a process from its creation until its termination.
The process states typically include:

 New: This is the initial state of a process. In this state, the process is being created, and the necessary
resources are being allocated to it by the operating system. After the process initialization is complete, it
transitions to the ready state.
 Ready: In this state, the process is prepared to be executed but is waiting to be assigned to a processor.
The process is in main memory, and all the necessary resources are allocated to it. It is waiting for the
CPU scheduler to select it for execution.
 Running: When a process is in the running state, it is being executed by the CPU. The processor is
actively executing the instructions of the process, and the process is utilizing system resources to
perform its tasks.
 Blocked (or Waiting): In this state, a process is unable to proceed further because it is waiting for an
event or a resource that is currently unavailable. For example, a process might be waiting for user input,
waiting for a file to be read from disk, or waiting for a network response. Once the event or resource
becomes available, the process transitions to the ready state and can resume execution.
 Terminated: This is the final state of a process. It indicates that the process has completed its
execution or has been explicitly terminated by the operating system or by the process itself. In this
state, the process releases all the resources it has acquired during its execution, and its process
control block (PCB) is removed from the system.

Note that depending on the operating system and its process management mechanisms, there can be
additional process states or variations of the states mentioned above. For example, some systems may have
a "Suspended" state, where a process is temporarily halted or moved to secondary storage to free up
system resources.

The transitions between process states are typically managed by the operating system's process scheduler,
which determines which processes are eligible to run and makes decisions on context switching and
resource allocation. The process states and their transitions are crucial for efficient process management and
scheduling in an operating system.
Process State Transition,
Process state transition refers to the movement of a process from one state to another in an operating
system. It represents the changes in the lifecycle of a process as it progresses through different stages. These
state transitions are typically triggered by specific events, actions, or conditions within the system. The
process state transitions commonly include:
 New to Ready: This transition occurs when a process is created and initialized, and it is ready to be
scheduled for execution. The transition is typically initiated by the operating system when the necessary
resources are allocated to the process.
 Ready to Running: This transition happens when the operating system selects a process from the ready
queue and assigns it to the CPU for execution. The process starts executing its instructions and enters
the running state.
 Running to Blocked (or Waiting): This transition occurs when a running process encounters an event or
condition that prevents it from proceeding further, such as waiting for user input, I/O operation, or a
resource that is currently unavailable. The process is moved to the blocked state, and its execution is
halted until the event or condition is resolved.
 Running to Ready: This transition happens when a running process is interrupted by the operating
system scheduler due to the expiration of its time slice or a higher-priority process becoming ready. The
running process is preempted, and it is moved back to the ready state, allowing other processes to be
scheduled for execution.
 Blocked to Ready: This transition occurs when a blocked process's requested event or resource becomes
available. The process is unblocked, and it is moved back to the ready state, becoming eligible for
execution.
 Running to Terminated: This transition takes place when a running process completes its execution or is
explicitly terminated by the operating system or by the process itself. The process releases its allocated
resources, and its process control block (PCB) is removed from the system.

The process state transitions are managed by the operating system's process scheduler and various event-
driven mechanisms. The transitions are crucial for effective process management, scheduling, and resource
allocation in the operating system, ensuring that processes progress through different states efficiently and
effectively.

The Process Control Block,


The Process Control Block (PCB) is a data structure used by an operating system to store and manage
information about a running process. It serves as the central repository of process-related information,
providing the operating system with the necessary details to manage and control the execution of processes.

The PCB contains various attributes and data related to a process, including:

 Process Identifier (PID): A unique identifier assigned to each process by the operating system, allowing it
to differentiate and track individual processes.
 Process State: The current state of the process, such as new, ready, running, blocked, or terminated. It
indicates the process's position in its lifecycle and helps the operating system manage the process's
execution and resource allocation.
 Program Counter (PC): The address of the next instruction to be executed by the process. It allows the
operating system to keep track of the process's progress and resume execution from the correct point
during context switches.
 CPU Registers: The contents of the CPU registers for the process, including general-purpose registers,
stack pointers, and program status registers. These values are crucial for saving and restoring the
process's execution context during context switches.
 Process Priority: The priority assigned to the process, which determines its relative importance and
influences its scheduling and resource allocation compared to other processes.
 Memory Management Information: Information about the memory allocated to the process, including
base and limit registers, page tables, and other memory-related attributes.
 I/O Information: Details about the I/O devices or files being used by the process, including open file
descriptors, I/O request queues, and status flags.
 Accounting Information: Statistics and metrics related to the process's resource usage, execution time,
CPU usage, and other performance-related data.
 Parent-Child Relationship: Pointers or references to the process's parent and child processes, forming a
process hierarchy or tree structure.

The PCB is maintained by the operating system and is associated with each process currently active in the
system. When a context switch occurs, the PCB of the currently executing process is saved, and the PCB of
the next process to be scheduled is loaded, allowing for seamless transitions between processes.

By storing critical information about each process, the PCB enables the operating system to effectively
manage process scheduling, resource allocation, synchronization, and inter-process communication. It
serves as a vital data structure for process management in an operating system.
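A PCB is, at heart, a record of the fields listed above, and a context switch is a save of one record and a restore of another. The sketch below models a much-simplified PCB as a dataclass; the field names mirror the attributes described here, while real PCBs hold far more state.

```python
# Simplified PCB plus a context-switch sketch: save the "CPU" into the
# outgoing process's PCB, then load it from the incoming one.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0

def context_switch(cpu: dict, old: PCB, new: PCB) -> dict:
    old.registers = dict(cpu)     # save outgoing process's execution context
    old.state = "ready"
    new.state = "running"
    return dict(new.registers)    # restore incoming process's context

cpu = {"pc": 104, "r0": 7}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, state="ready", registers={"pc": 200, "r0": 0})
cpu = context_switch(cpu, p1, p2)
print(p1.state, p2.state, cpu["pc"])
```

Because p1's registers (including its program counter) are preserved in its PCB, a later switch back to p1 would resume it exactly where it left off, which is what makes transparent multitasking possible.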

Operations on Processes:
Operations on Processes refer to the actions and functionalities provided by an operating system to create,
terminate, manage process hierarchies, and implement process-related mechanisms. Let's define each
operation:
Process Creation: The process creation operation involves the creation of a new process by the operating
system. The creation process typically includes the following steps:
 Allocating necessary resources: The operating system allocates resources such as memory, CPU time, I/O
devices, and files to the new process.
 Creating the Process Control Block (PCB): A PCB is created to store information about the new process,
including process ID, state, program counter, CPU registers, and other relevant attributes.
 Setting up the execution context: The initial values of the program counter, CPU registers, and other
necessary parameters are set up to start the execution of the process.

Process Termination: The operating system terminates a process in an orderly fashion. Termination may
occur for various reasons, such as normal completion of the process, an error, or an explicit request. The
steps involved in process termination include:
 Reclaiming resources: The operating system releases the resources allocated to the terminated process,
such as memory, files, and I/O devices.
 Updating process status: The process's state in the PCB is changed to "terminated" or a similar status to
indicate its completion.
 Notifying the parent process: If the terminated process has a parent process, a notification is sent to the
parent to handle the termination and any necessary cleanup operations.
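These termination steps are visible from user space on POSIX systems: the parent is notified of the child's fate and reaps its PCB with waitpid(). The sketch below (the helper name child_killed_by is invented for the example) shows a child terminating abnormally via abort() and the parent distinguishing that from a normal exit:

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Terminate a child abnormally and let the parent observe how it ended.
   Returns the terminating signal number, or 0 for a normal exit. */
int child_killed_by(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        abort();                  /* child terminates via SIGABRT */
    int status = 0;
    waitpid(pid, &status, 0);     /* parent is notified and reaps the PCB */
    return WIFSIGNALED(status) ? WTERMSIG(status) : 0;
}
```

Until the parent calls waitpid(), the terminated child remains a "zombie": its PCB is kept alive solely to hold the exit status for the parent.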

Process Hierarchies: Process hierarchies represent the organization of processes in a hierarchical structure,
typically referred to as a process tree or process group. This operation allows processes to have parent-child
relationships, where a parent process can create child processes. Some common operations related to
process hierarchies include:
 Creating child processes: A process can create one or more child processes, forming a parent-child
relationship. Child processes typically inherit certain attributes, such as resource allocations and access
rights, from their parent process.
 Process communication: Processes within a hierarchy can communicate with each other through inter-
process communication mechanisms, such as shared memory, message passing, or synchronization
primitives.
 Process synchronization: Processes in a hierarchy can synchronize their execution using synchronization
mechanisms like semaphores, locks, or condition variables to coordinate their activities.
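A small POSIX sketch of communication inside a process hierarchy: the parent sends a message to its child through a pipe, which the child inherits across fork() (the helper name send_to_child is illustrative):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent sends a message to its child through a pipe; the child reports
   the number of bytes it received via its exit status (toy example). */
int send_to_child(const char *msg) {
    int fds[2];
    if (pipe(fds) != 0)           /* fds[0] = read end, fds[1] = write end */
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {               /* child inherits both open pipe ends */
        char buf[64] = {0};
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        _exit(n > 0 ? (int)n : 0);
    }
    close(fds[0]);                /* parent writes, then closes its end */
    write(fds[1], msg, strlen(msg));
    close(fds[1]);

    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```
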
Implementation of Processes: The implementation of processes involves the mechanisms and techniques
used by the operating system to manage and control processes effectively. Some key aspects of process
implementation include:
 Process scheduling: The operating system employs scheduling algorithms to determine which processes
should be executed and in what order. This ensures fair utilization of CPU resources.
 Context switching: The process of saving the execution context of one process and restoring the context
of another process during a context switch. This allows for efficient switching between processes and
maintaining their execution states.
 Inter-process communication: Mechanisms provided by the operating system to allow processes to
exchange data, coordinate activities, and communicate with each other.
 Resource allocation and management: The operating system manages the allocation of system
resources, such as memory, CPU time, I/O devices, and files, to ensure efficient utilization and prevent
conflicts among processes.
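Round-robin scheduling, a common preemptive policy, can be illustrated with a toy simulation. This models only the bookkeeping (remaining CPU bursts and finish times); a real kernel would be switching hardware contexts at each quantum expiry:

```c
#define NPROC 3

/* Run round-robin with the given time quantum over `burst` (remaining CPU
   time per process) and record each process's finish time in `finish`. */
void round_robin(int burst[NPROC], int quantum, int finish[NPROC]) {
    int clock = 0, remaining = NPROC;
    while (remaining > 0) {
        for (int i = 0; i < NPROC; i++) {
            if (burst[i] <= 0)
                continue;                 /* already terminated */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            clock += slice;               /* process i runs one slice */
            burst[i] -= slice;
            if (burst[i] == 0) {          /* process i terminates */
                finish[i] = clock;
                remaining--;
            }
            /* end of slice: context switch to the next ready process */
        }
    }
}
```

For bursts {3, 5, 2} and quantum 2, the processes finish at times 7, 10, and 6 respectively, showing how the quantum interleaves execution.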

These operations collectively enable the operating system to create, manage, and control processes,
facilitating multitasking, resource sharing, and coordination in a computing system.

Cooperating Processes,

System Calls
Process Management,
File Management,
Directory Management,

Threads:
Definitions of Threads;
Types of Processes (Single-threaded and Multithreaded);
Benefits of Multithreading;
Multithreading Models;
One-to-One Model,
Many-to-One Model,
Many-to-Many Model,

Inter-Process Communication and Synchronization:


Introduction,
Race Condition,
Critical Regions,
Avoiding Critical Regions:
Mutual Exclusion
Serializability;

Mutual Exclusion Conditions,


Proposals for Achieving Mutual Exclusion:
Disabling Interrupts,
Lock Variable,
Strict Alternation (Peterson's Solution),
The TSL Instruction,
Sleep and Wakeup,
Types of Mutual Exclusion
Semaphore,
Monitors,
Mutexes,
Message Passing,
Bounded Buffer,
Serializability:
Locking Protocols
Time Stamp Protocols;
Classical IPC Problems
Dining Philosophers Problem,
The Readers and Writers Problem,
The Sleeping Barber's Problem.

Unit 3) Process Scheduling
Basic Concept,
Types of Scheduling (Preemptive Scheduling, Non-preemptive Scheduling, Batch, Interactive, Real-Time
Scheduling),
Scheduling Criteria or Performance Analysis,
Scheduling Algorithms (Round-Robin, First Come First Served, Shortest-Job-First, Shortest Process Next,
Shortest Remaining Time Next, Real Time, Priority, Fair-Share, Guaranteed, Lottery Scheduling, HRN,
Multiple Queue, Multilevel Feedback Queue);
Some Numerical Examples on Scheduling.

Unit 4) Deadlocks
System Model,
System Resources: Pre-emptible and Non-Pre-emptible;
Conditions for Resource Deadlocks, Deadlock Modeling,
The Ostrich Algorithm,
Method of Handling Deadlocks,
Deadlock Prevention,
Deadlock Avoidance: Banker's Algorithm,
Deadlock Detection: Resource Allocation Graph,
Recovery from Deadlock.

Unit 5) Memory Management


Basic Memory Management:
Introduction,
Memory Hierarchy,
Logical Versus Physical Address Space,
Memory Management with Swapping: Memory Management with Bitmaps and with Linked List;
Memory Management without Swapping,
Contiguous-Memory Allocation: Memory Protection, Memory Allocation,
Fragmentation (Internal Fragmentation and External Fragmentation);
Non-Contiguous Memory Allocation,
Fixed Partitioning Vs. Variable Partitioning,
Relocation and Protection,
Coalescing and Compaction.

Virtual Memory:
Background,
Paging, Structure of Page Table: Hierarchical Page Table,
Hashed Page Table,
Inverted Page Table, Shared Page Table,
Block Mapping Vs. Direct Mapping,
Demand Paging,
Page Replacement and Page Faults,
Page Replacement Algorithms: FIFO, Optimal (OPR), LRU, Second Chance (SCP);
Some Numerical Examples on Page Replacement,
Thrashing, Segmentation, Segmentation with Paging.

Unit 6) Input/ Output Device Management


Principle of I/O Devices,
Device Controllers,
Memory Mapped I/O,
Direct Memory Access;
Principle of I/O Software: Goals of I/O Software,
Program I/O,
Interrupt - Driven I/O,
I/O Using DMA;
I/O Software Layers: Interrupts Handler,
Device Drivers,
Device Independent I/O Software,
User-Space I/O Software;
Disk Hardware;
Disk Scheduling: Seek Time, Rotational Delay, Transfer Time;
Disk Scheduling Algorithms: FCFS Scheduling, SSTF Scheduling, SCAN Scheduling, C-SCAN Scheduling,
LOOK Scheduling.

Unit 7) File System Interface Management


File Concept: File Naming, File Type, File Access, File Attributes, File Operations and File
Descriptors;
Directories: Single-Level Directory Systems, Hierarchical Directory Systems, Path Names, Directory
Operation;
Access Methods: Sequential, Direct;
Protection: Types of Access, Access Control List,
Access Control Matrix.

Unit 8) Security Management


Introduction,
Security Problems,
User Authentication: Passwords, Password Vulnerabilities, Encrypted Passwords, One-Time Passwords and
Biometric Passwords;
User Authorization,
Program Threats: Trojan Horse, Trap Door,
Stack and Buffer Overflow;
System Threats: Worms, Viruses, Denial of Service.

Unit 9) Distributed Operating System


Introduction,
Advantages of Distributed Systems over Centralized Systems,
Advantages of Distributed System over Independent PCs,
Disadvantages of Distributed System,
Hardware and Software Concepts,
Communication in Distributed Systems,
Message Passing,
Remote Procedure Call,
Processes in Distributed Systems,
Clock Synchronization.

Unit 10) Case Study


DOS and Windows Operating System,
Unix Operating System,
Linux Operating System.

Text Books
Andrew S. Tanenbaum, "Modern Operating Systems 3/e", PHI, 2011/12
Silberschatz, P.B. Galvin, G. Gagne, "Operating System Concepts 8/e ", Wiley India, 2014 ISBN
9788126520510
Reference Books
Andrew S. Tanenbaum, "Distributed Operating System", Pearson
D M Dhamdhere, "System Programming and Operating System", Tata McGraw- Hill, 2009
P. Pal Choudhury, "Operating Systems: Principles and Design", PHI, 2011

# # #
