Operating System

The document provides a comprehensive overview of key concepts in operating systems, including definitions of processes, context switching, memory management techniques, and file operations. It explains various scheduling algorithms, deadlock handling techniques, and the structure of operating systems, along with comparisons of different memory allocation methods. Additionally, it covers synchronization tools like semaphores and outlines the critical section problem, directory structures, and access methods for files.


[2 marks]

1. Define Process:
A process is a program in execution. It includes the program code, program counter, stack,
data section, and resources.

2. What is context switch?


Context switch is the process of saving the state of a currently running process and loading
the state of the next scheduled process.

3. What is page frame?


A page frame is a fixed-size block of physical memory into which pages of a process are
loaded in a paging system.

4. List various operations on files:

• Create

• Read

• Write

• Delete

5. What is meant by rotational latency in disk scheduling?


Rotational latency is the time delay to rotate the disk to the correct position under the
read/write head.

6. Define critical section:


A critical section is a code segment where a process accesses shared resources and must not
be executed by more than one process at a time.

7. State Belady’s anomaly:


Belady’s anomaly is the counter-intuitive situation where increasing the number of page
frames results in more page faults.
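The anomaly can be demonstrated with a short FIFO simulation (an illustrative Python sketch, not part of the original answer; the reference string is the classic textbook example):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a given frame count."""
    memory = deque()          # FIFO queue of resident pages
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the oldest page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, yet more faults
```

With 3 frames this string causes 9 faults, but with 4 frames it causes 10, which is exactly the counter-intuitive behaviour Belady observed.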

8. List any 4 characteristics of operating system:

• Multitasking

• Resource management

• User interface

• Security and protection

9. Define deadlock:
Deadlock is a state in which a set of processes are blocked because each process is holding a
resource and waiting for another.

10. What is the role of operating system?


An OS manages hardware resources, provides an interface for user interaction, and enables
execution of applications.
11. Define ‘Least Recently Used’ in memory management:
Least Recently Used (LRU) is a page replacement algorithm that removes the page that
hasn’t been used for the longest time.
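A minimal Python sketch of LRU replacement (illustrative only; the reference string is an arbitrary example):

```python
def lru_faults(refs, frames):
    """Simulate LRU page replacement; return the number of page faults."""
    memory = []               # most recently used page kept at the end
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)    # hit: refresh to most-recent position
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)      # evict the least recently used page
        memory.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6 page faults
```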

12. List various properties of files:

• Name

• Type

• Size

• Location

13. What is seek time in Disk scheduling?


Seek time is the time taken by the disk arm to move to the track where data is located.

14. What is compaction?


Compaction is the process of combining free memory space to reduce external
fragmentation in memory.

15. Define safe state:


A system is in a safe state if it can allocate resources to all processes without leading to
deadlock.

16. What is starvation?


Starvation is a condition where a process waits indefinitely to get a resource due to
continuous allocation to other processes.

17. What is the term Operating System?


An Operating System is system software that acts as an intermediary between users and
computer hardware.

18. Define system program:


System programs are programs that support the operating system by providing additional
functionalities like compilers, assemblers, and file management tools.

19. Which scheduler controls the degree of multiprogramming?


The Long-term scheduler (Job scheduler) controls the degree of multiprogramming.

20. What is turnaround time?


Turnaround time is the total time taken from the submission of a process to its completion.

21. What is demand paging?


Demand paging is a memory management technique where pages are loaded into memory
only when they are needed.

22. List four attributes of files:

• File name

• File type

• File size

• Creation and modification date


23. What does FIFO and MFU stand for?

• FIFO: First In First Out

• MFU: Most Frequently Used

24. Define Rollback:


Rollback is the process of reverting a system or database to a previous consistent state in
case of failure.

25. What is meant by multiprocessing system?


A multiprocessing system has two or more processors that execute instructions
simultaneously to increase performance.

26. Define burst time:


Burst time is the time a process requires the CPU for execution without interruption.

27. What are semaphores?


Semaphores are synchronization tools used to control access to shared resources by
multiple processes.

28. What is meant by Address binding?


Address binding is the process of mapping logical addresses to physical addresses in
memory.

[4 marks]
[1] ‘Operating system is like a manager of the computer system’. Explain.

An Operating System (OS) functions as a manager that handles various operations of a computer system efficiently, much like a business manager who controls different departments.

Key Management Roles of OS:

1. Process Management – Manages process creation, execution, and termination. Uses CPU
scheduling algorithms.

2. Memory Management – Allocates and deallocates memory spaces as required by programs.

3. File Management – Organizes, stores, retrieves, and protects data on storage devices.

4. Device Management – Manages input and output devices using device drivers and I/O
controllers.

5. Security and Access Control – Prevents unauthorized access and ensures system protection.

6. User Interface Management – Provides CLI (Command Line Interface) or GUI (Graphical User
Interface) for user interaction.

[2] What is scheduling? Compare short-term scheduler with medium-term scheduler.

Scheduling refers to the method by which work is assigned to resources that complete the work.
In OS, it primarily deals with assigning CPU time to different processes.

Short-term Scheduler:

• Also called CPU Scheduler.


• Selects from processes in the ready queue.

• Frequency: Very high.

• Controls which process will run on CPU next.

Medium-term Scheduler:

• Manages suspended processes.

• Performs swapping: Moves processes between main memory and secondary storage.

• Used to control degree of multiprogramming.

Feature | Short-Term Scheduler | Medium-Term Scheduler
Main Task | CPU allocation | Swapping/suspending processes
Execution Frequency | Frequent | Moderate
Resource Managed | CPU | Memory
Queue Type | Ready queue | Suspended queue

[3] Draw and explain Process Control Block (PCB).

Process Control Block (PCB)

A Process Control Block (PCB) is a data structure used by the operating system to store all
information about a process. Each process has a unique PCB, and the OS uses this information
to track the process's status and manage execution.

Components:

1. Process State: Current state of the process (e.g., Ready, Running, Waiting, Terminated).

2. Program Counter (PC):Address of the next instruction to be executed for the process.

3. CPU Registers: Contents of all process-specific registers (like accumulator, base, stack
pointer, etc.).
4. CPU Scheduling information: Includes a process priority, pointer to scheduling queues, and
any other scheduling parameters.

5. Memory Management Information: Details about memory allocated to the process, such as
base and limit registers, page tables, or segment tables.

6. Accounting Information: Tracks CPU usage, time limits, process priority, job or user numbers
for accounting purposes.

7. I/O Status Information: List of I/O devices allocated to the process, open file tables, etc.

[4] Compare multiprogramming with a multiprocessing system.

Feature | Multiprogramming System | Multiprocessing System
Definition | Execution of multiple programs on a single CPU by switching between them. | Use of two or more CPUs to execute multiple processes simultaneously.
CPU Count | Only one CPU is used. | Two or more CPUs are used.
Parallelism | No true parallelism, only CPU switching. | True parallel processing is possible.
Performance | Increases CPU utilization, but performance depends on process switching. | High performance due to parallel execution.
Cost | Less expensive, as it needs only one processor. | More expensive due to multiple processors.
Complexity | Less complex OS design. | More complex OS design needed for coordination.
Example | Running MS Word and VLC on a single-core CPU. | High-end servers or multicore desktops running many tasks.

[5] Draw and explain Process State Diagram.

The Process State Diagram represents various states of a process during its lifetime and the
transitions between them.

States:

1. New – Process is created.

2. Ready – Process is waiting for CPU.

3. Running – Process is being executed.

4. Waiting – Waiting for an event (like I/O).

5. Terminated – Finished execution.


• New → Ready

• Ready → Running

• Running → Waiting

• Waiting → Ready

• Running → Terminated

• Running → Ready

[6] Compare internal and external fragmentation.

Aspect | Internal Fragmentation | External Fragmentation
Definition | Wasted memory within allocated memory blocks. | Wasted memory outside allocated memory blocks.
Cause | Occurs when fixed-sized memory blocks are assigned, but the process uses less. | Occurs when free memory is scattered in small non-contiguous blocks.
Memory Allocation | Typically occurs in fixed partitioning systems. | Typically occurs in variable partitioning systems.
Example | Allocated block = 100 KB; process size = 80 KB ⇒ 20 KB wasted inside the block. | 3 blocks of 10 KB free but no 30 KB block available ⇒ process can't fit.
Visibility | Wastage is internal to the partition, not easily visible. | Wastage is external, visible as small free chunks.
Solution | Use dynamic or variable partitioning. | Use compaction or paging/segmentation.

[7] Explain semaphores and its types.


A semaphore is a synchronization tool used in operating systems to manage access to shared
resources by multiple processes in a concurrent system (like multitasking or multiprocessing).
It helps to avoid race conditions and ensures mutual exclusion.

Semaphores are basically integer variables that are modified atomically using two standard
operations:

• wait() (also known as P or down)

• signal() (also known as V or up)

Types of Semaphores:

1. Counting Semaphore:

o Value can range over an unrestricted domain.

o Used to control access to a resource pool with multiple instances.

2. Binary Semaphore (Mutex):

o Value is either 0 or 1.

o Used for mutual exclusion (mutex).
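The wait()/signal() behaviour of a counting semaphore can be sketched with Python's threading.Semaphore (an illustrative example; the pool size of 3 and the worker count are assumptions, not from the original notes):

```python
import threading, time

pool = threading.Semaphore(3)       # counting semaphore: 3 resource "instances"
lock = threading.Lock()             # protects the counters below
active = 0                          # workers currently using the resource
peak = 0                            # maximum observed concurrency

def worker():
    global active, peak
    pool.acquire()                  # wait() / P: blocks when all 3 are taken
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)                # simulate using the shared resource
    with lock:
        active -= 1
    pool.release()                  # signal() / V: frees one instance

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                         # never exceeds 3
```

Ten workers contend for the pool, but at most three are ever inside at once, which is the defining property of a counting semaphore initialised to 3.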

[8] What is deadlock? Explain various deadlock handling techniques.

A deadlock occurs when a group of processes are waiting for each other in a circular chain, but none
can proceed.

Deadlock Handling Techniques:

1. Deadlock Prevention:

o Prevent at least one of the four necessary conditions.

o Example: Don’t allow hold-and-wait.

2. Deadlock Avoidance:

o Use algorithms like Banker’s Algorithm to allocate resources safely.

3. Deadlock Detection :

o Allow deadlocks to occur but detect them using Resource Allocation Graph (RAG).

4. Recovery:

o Recover by terminating or preempting processes.

[9] What are different types of directory structure? Explain.

Directory structures help organize files in an operating system.

Types of Directory Structures:

1. Single-Level Directory:
o All files in a single directory.

o Simple but leads to name conflicts.

2. Two-Level Directory:

o Each user has their own directory.

o User isolation, but no file sharing.

3. Tree-Structured Directory:

o Hierarchical structure.

o Allows directories inside directories (sub-directories).

o Most common.

4. Acyclic Graph Directory:

o Allows shared files/directories using links.

o Avoids cycles.

[10] Explain linked allocation in file.

Linked Allocation is a file allocation method where each file is a linked list of disk blocks, which may
be scattered anywhere on the disk.

Working:

• Each block contains data and a pointer to the next block.

• Directory stores the starting block of the file.

Advantages:

• No external fragmentation.

• File size can grow dynamically.

Disadvantages:

• Slow access for direct access (no random access).

• Pointer overhead in each block.
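The working described above can be modelled with a toy sketch (Python; the block numbers, file name, and contents are made up purely for illustration):

```python
# Simulated disk: block number -> (data, pointer to next block or None)
disk = {
    4: ("He", 7),
    7: ("llo", 2),
    2: (" world", None),       # last block: null pointer ends the chain
}
directory = {"greeting.txt": 4}   # directory stores only the starting block

def read_file(name):
    """Follow the chain of pointers to read the whole file sequentially."""
    block = directory[name]
    data = ""
    while block is not None:
        content, block = disk[block]
        data += content
    return data

print(read_file("greeting.txt"))  # Hello world
```

Note how reaching any byte requires walking the chain from the start, which is why linked allocation gives no efficient random access.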

[11] Compare Paging and Segmentation.

Aspect | Paging | Segmentation
Basic Idea | Divides logical memory into fixed-size pages. | Divides logical memory into variable-size segments.
Size | All pages are of equal size. | Segments are of different sizes depending on the program.
Address Structure | Logical address = Page number + Offset | Logical address = Segment number + Offset
Memory Management | Based on physical division of memory. | Based on logical division of a program (code, stack, data).
Fragmentation | May suffer from internal fragmentation. | May suffer from external fragmentation.
Ease of Use | Simple for the OS to manage. | More aligned with the logical view of a program.
Example | Page 0, Page 1, Page 2, etc. | Segment 0 (code), Segment 1 (data), Segment 2 (stack), etc.

[12] Explain file structure with the help of diagram.

1. Stream of bytes:
o The OS treats the file as an unstructured sequence of bytes.
o This simplifies file management for the OS; applications can impose their own structure.
o Used by UNIX, Windows, and most modern operating systems.
2. Records:
o A file is a sequence of fixed-length records, each with some internal structure.
o A collection of bytes is treated as a unit, e.g., an employee record.
3. Tree of records:
o A file consists of a tree of records, not necessarily all the same length.
o Records can be of variable length.

[13] Explain Operating System Structure.


1. Hardware (Core Layer):

• This is the innermost layer, consisting of:

o CPU (Central Processing Unit)

o Main Memory

o I/O Devices

o Secondary Storage (like HDDs or SSDs)

• It performs all the basic computing and storage tasks.

2. Operating System:

• Sits between hardware and application software.

• Controls and coordinates how hardware is used.

• Allocates CPU time, memory, and handles input/output operations.

3. System and Application Programs:

• Includes:

o Compilers

o Assemblers

o Text Editors

o Database Systems

o User Applications

• These programs help users interact with the OS and hardware to perform useful tasks.

4. Users (Outer Layer):

• Multiple users (User 1, User 2, ..., User n) access the system.

• Each uses different applications for various computing needs.

[14] Draw and explain Round Robin Scheduling with example.


o Round Robin (RR) Scheduling is designed for time-sharing systems.
o It applies FCFS scheduling along with preemption to switch between processes.
o Time Quantum (Time Slice): A fixed CPU time (e.g., 10–100 ms) assigned to each process.
o Processes are kept in a circular ready queue (FIFO).
o If a process doesn't finish in its time quantum, it is preempted and sent to the back of the
queue.
o Ensures fair CPU access for all processes.
o Suitable for interactive systems.
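The mechanism can be sketched with a small simulation (Python; the burst times and the quantum of 2 are assumed for illustration, and all processes are taken to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    Returns {process: completion time}."""
    queue = deque(bursts.items())      # circular ready queue (FIFO)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for one quantum at most
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of queue
        else:
            completion[name] = time    # finished within this slice
    return completion

# Burst times: P1=5, P2=3, P3=8; time quantum = 2
print(round_robin({"P1": 5, "P2": 3, "P3": 8}, 2))
# P2 completes at 9, P1 at 12, P3 at 16
```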
[15] Draw and explain Contiguous Memory Allocation.

Contiguous Memory Allocation is one of the simplest and oldest memory management techniques
used in operating systems, where each process is allocated a single contiguous block of memory.

• Main memory is divided into two parts:

1. Operating System (OS) area

2. User processes area

• The OS can be placed in low memory or high memory.

o The interrupt vector, which is often located in low memory, affects this decision.

o Typically, both OS and interrupt vector are placed in low memory for better
efficiency.

• When a process needs to be loaded, the system finds a large enough single block of free
memory.

• The entire process (code, data, stack) is placed in that continuous section.

• This method assumes that a process will not grow in size after allocation.
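Finding "a large enough single block of free memory" can be sketched as follows (Python; the hole addresses and sizes are assumptions, and first-fit is used here as one possible placement policy):

```python
# Free holes in user memory as (start address, size) pairs, in KB
holes = [(0, 100), (300, 100), (600, 100)]

def first_fit(holes, size):
    """Place a process in the first hole large enough to hold it whole."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            # the process occupies the front of the hole; shrink the rest
            holes[i] = (start + size, hole_size - size)
            return start
    return None   # no single contiguous block is big enough

print(first_fit(holes, 80))    # 0: fits in the first hole
print(first_fit(holes, 150))   # None: 220 KB free in total, but scattered
```

The failed second request shows external fragmentation: total free memory would suffice, but no single contiguous hole does.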

[16] State and explain Critical Section Problem.

The Critical Section Problem is a fundamental issue in concurrent programming in operating systems. It arises when multiple processes access shared resources (like memory, files, variables, printers) and try to change or use them at the same time. Any correct solution must satisfy three requirements:

• Mutual Exclusion:
Only one process can enter the critical section at a time.
• Progress:
If no process is in the critical section, one of the waiting processes should be allowed to
enter next without unnecessary delay.
• Bounded Waiting:
A process must be guaranteed to enter the critical section after a bounded number of
attempts (to avoid starvation).

[17] Explain file system Access Methods.

Access methods define the way data in a file can be read, written, or modified. The Operating
System provides different ways to access and retrieve data stored in files depending on the
application requirement.
1. Sequential Access

• Definition: Data in the file is accessed in a linear order, one record after another.

• It is the simplest and most common method.

• Used in text editors, compilers, etc.

2. Direct (or Random) Access

• Definition: Allows direct access to any block/record of the file using offset or position.

• Suitable for applications like databases.

3. Indexed Access

• Definition: Uses an index to keep track of various blocks of a file. The index contains
pointers to the actual data blocks.

• Works like an index in a book.

[18] Explain paging in case of memory management.

Paging is a memory management technique used to eliminate the problem of fitting varying sized
memory chunks onto the physical memory.

It allows the logical memory (used by programs) and physical memory (RAM) to be divided into
fixed-size blocks, making memory allocation efficient and reducing external fragmentation.

• Logical Memory (Virtual Memory): Divided into blocks called pages.


• Physical Memory (Main Memory): Divided into blocks of the same size called frames.
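The page/offset split behind paging can be illustrated with a short sketch (Python; the 1 KB page size and the page-table contents are assumptions for the example):

```python
PAGE_SIZE = 1024                     # bytes per page/frame (assumed)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number

def translate(logical_addr):
    """Split a logical address into (page, offset), then map page -> frame."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]         # would raise a page fault if unmapped
    return frame * PAGE_SIZE + offset

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2054
```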

[19] Explain Job Control Block with diagram.

A Job Control Block (JCB) is a data structure maintained by the Operating System to manage and
track information about a job (or batch job) submitted by a user.

It is similar to the Process Control Block (PCB) used for processes, but the JCB is specifically used in
batch processing systems, where jobs are submitted and managed in queues.

Fields:

• Job ID

• Job Priority

• Program Counter

• CPU Registers

• I/O Requests

• Memory Requirements

[20] Characteristics and Necessary Conditions for Deadlock.

Characteristics of Deadlock (Coffman Conditions)


1. Mutual Exclusion: Resources are non-shareable, and only one process can hold a resource at
a time.

2. Hold and Wait: Processes holding resources can wait for additional resources held by others.

3. No Preemption: Resources cannot be forcibly taken from a process; they must be released
voluntarily.

4. Circular Wait: A cycle of processes exists, where each process waits for a resource held by
another in the cycle.

Necessary Conditions for Deadlock

1. Mutual Exclusion: Ensures resources are allocated exclusively to one process at a time.

2. Hold and Wait: Processes can hold resources while waiting for others, creating potential for
deadlock.

3. No Preemption: Resources cannot be taken away from processes, preventing resolution of


deadlock.

4. Circular Wait: A cycle of dependencies among processes causes them to be blocked.

[21] Explain memory management through fragmentation.

Fragmentation occurs when memory is inefficiently utilized due to allocation and deallocation of
processes in varying sizes. It is of two types:

1. External Fragmentation

o Free memory is scattered in small, non-contiguous blocks, making it difficult to


allocate large contiguous blocks, even if total free memory is enough.

o Compaction: Rearranging memory to consolidate free space.

o Paging: Using fixed-size blocks to avoid the need for contiguous memory.

2. Internal Fragmentation

• Wasted memory within allocated blocks when the allocated block is larger than needed by
the process.

o Smaller Allocation Units: Use smaller memory blocks to reduce waste.

o Dynamic Allocation: Adjust block sizes based on process needs.

[22] List and explain services provided by the operating system.

1. Process Management: Manages processes (creation, scheduling, termination).

2. Memory Management: Allocates and manages memory (including virtual memory).

3. File System Management: Manages file creation, deletion, and access.

4. Device Management: Controls I/O devices and handles data buffering.

5. Security: Provides authentication, access control, and encryption.

6. User Interface: Offers CLI or GUI for user interaction.


7. Networking: Manages network communication and resource sharing.

8. Error Handling: Detects and handles errors.

9. Performance Monitoring: Tracks and optimizes system performance.

23. Explain ‘Dining Philosopher’ Synchronization problem.

The Dining Philosopher's Problem is a classic synchronization issue where five philosophers sit
at a table, each needing two forks to eat. There is one fork between each pair of philosophers.

Challenges:

1. Deadlock: If all philosophers pick up one fork simultaneously and wait for the other, no one
can eat.

2. Starvation: A philosopher might never get both forks if others keep taking them.

Solutions:

1. Resource Hierarchy: Philosophers pick up forks in a specific order to avoid deadlock.

2. Arbitrator: A central controller decides who can eat, preventing deadlock and starvation.
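The resource-hierarchy solution can be sketched with Python threads (an illustrative model; five philosophers, forks as locks, and 100 meals each are assumptions of the sketch):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair
meals = [0] * N

def philosopher(i):
    # Resource hierarchy: always grab the lower-numbered fork first.
    # This breaks the circular-wait condition, so deadlock cannot occur.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(100):
        with forks[first]:
            with forks[second]:
                meals[i] += 1          # eating with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate 100 times; the run never deadlocks
```

If every philosopher instead picked up the left fork first, all five could hold one fork and wait forever; the ordering rule is what removes that possibility.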

[24] What is fragmentation? Explain its types.

Fragmentation occurs when memory is inefficiently utilized due to allocation and deallocation of
processes in varying sizes. It is of two types:

1. External Fragmentation

o Free memory is scattered in small, non-contiguous blocks, making it difficult to


allocate large contiguous blocks, even if total free memory is enough.

o Compaction: Rearranging memory to consolidate free space.

o Paging: Using fixed-size blocks to avoid the need for contiguous memory.

2. Internal Fragmentation

• Wasted memory within allocated blocks when the allocated block is larger than needed by
the process.

o Smaller Allocation Units: Use smaller memory blocks to reduce waste.

o Dynamic Allocation: Adjust block sizes based on process needs.

25. Describe I/O Hardware with its types of I/O devices.

I/O Hardware enables communication between the computer and external devices, facilitating data
input, output, and storage.

Types of I/O Devices

1. Input Devices: Send data into the system.

o Examples: Keyboard, Mouse, Scanner, Microphone.

2. Output Devices: Present data from the system.


o Examples: Monitor, Printer, Speakers, Projector.

3. Storage Devices: Store data.

o Examples: Hard Disk Drive (HDD), Solid State Drive (SSD), USB Flash Drive.

4. Bidirectional Devices: Both send and receive data.

o Examples: Network Interface Card (NIC), Modem.

[26] Explain various types of system programs.

1. Operating System (OS): Manages hardware and software resources.

o Examples: Windows, Linux.

2. Device Drivers: Enable OS to interact with hardware devices.

o Examples: Printer drivers, USB drivers.

3. Compilers: Convert high-level code to machine code.

o Examples: GCC, javac.

4. Interpreters: Execute high-level code line by line.

o Examples: Python Interpreter, Ruby.

5. Linkers: Combine object files into executable programs.

o Example: GNU Linker (ld).

6. Loaders: Load programs into memory for execution.

o Examples: Windows Loader, Linux exec().

7. Utilities: Perform system maintenance tasks.

o Examples: Disk Cleanup, File Compression.

8. Debuggers: Help find and fix bugs in programs.

o Examples: GDB, LLDB.

9. File Management Utilities: Manage and organize files.

o Examples: File Explorer, Finder.

10. System Monitors: Track system performance and resources.

o Examples: Task Manager, System Monitor.

27. Explain indexed allocation in detail.

Each file has an index block which contains pointers to actual data blocks.

Diagram:

Index Block → [5, 9, 12, 15]

Blocks → [Data5][Data9][Data12][Data15]

Pros:

• Supports direct and sequential access.

• Solves fragmentation problems.

Cons:

• Index block size limits file size.

Indexed Allocation is a file storage method where each file has an index block containing pointers to its data blocks, which can be scattered across the disk.

• A file is divided into blocks, and an index block holds pointers to these data blocks.

• The index block is stored separately and can point to multiple data blocks.

Types

• Single-Level Indexing: One index block for the file.

• Multilevel Indexing: Multiple levels of index blocks for large files.

• Combined Indexing: A mix of direct and indirect pointers.

28. List any two types of multiprocessor.

1. Symmetric Multiprocessing (SMP): Multiple processors share a single memory and have
equal access to resources.

o Example: Multi-core processors in modern computers.

2. Asymmetric Multiprocessing (AMP): One master processor controls the system, while slave
processors handle specific tasks.

o Example: Older systems with a main processor and specialized tasks.

29. Explain different methods for recovery from deadlock.

1. Process Termination: To break the deadlock, one or more of the processes involved are terminated.

• Terminate All Deadlocked Processes:


Simple and effective, but leads to loss of all progress made by those processes.

• Terminate One Process at a Time:


Processes are terminated one by one and system checks for deadlock after each
termination.
Less data loss but takes more time.

2. Resource Preemption: Resources are forcibly taken from some processes and reallocated to
others to resolve deadlock.

• The system selects a victim process to preempt its resources.

• The victim process may be rolled back and restarted later.

• Preemption must be done carefully to avoid further deadlocks or starvation.


30. Explain advantages and disadvantages of linked allocation methods.

Advantages:

1. No external fragmentation – Blocks can be placed anywhere.

2. Efficient space use – No need for contiguous space.

3. Easy file growth – Just link a new block.

Disadvantages:

1. No random access – Must read from the start.

2. Pointer overhead – Each block stores a pointer.

3. Risk of pointer loss – File may become unreadable.

31. Define 1. Logical Address 2. Physical Address

1. Logical Address:

A logical address is the address generated by the CPU, used to access memory locations
independently of the actual physical location.

2. Physical Address:

A physical address is the actual address in the main memory (RAM) where data is stored.

32. List and explain system calls related to process and job control.

1. End, Abort

• End: Terminates the process normally after execution.

• Abort: Terminates the process abnormally due to an error or interruption.

2. Load, Execute

• Load: Loads a program into memory.

• Execute: Runs the loaded program.

3. Create Process, Terminate Process

• Create: Starts a new process (e.g., using fork()).

• Terminate: Ends a process (e.g., using exit()).

4. Ready Process, Dispatch Process

• Ready: Moves process to ready queue (waiting for CPU).

• Dispatch: Assigns CPU to a ready process (starts execution).

5. Suspend Process, Resume Process

• Suspend: Pauses a process temporarily.

• Resume: Continues the execution of a suspended process.


6. Get Process Attributes, Set Process Attributes

• Get: Retrieves process details (e.g., PID, priority).

• Set: Modifies attributes like priority or scheduling info.

7. Wait for Time

• Delays process execution for a specific time (sleep/wait).

8. Wait Event, Signal Event

• Wait: Process waits for an event to occur.

• Signal: Notifies a waiting process that an event has occurred.

9. Change Priority of a Process

• Adjusts a process’s priority to control scheduling (e.g., using nice() in UNIX).

33. Explain multilevel feedback queue algorithm.

The Multilevel Feedback Queue (MLFQ) is a scheduling algorithm that uses multiple queues with
different priorities and allows processes to move between them based on their behavior.

• Processes start in Queue 0 (highest priority, RR with 8ms quantum).

• If not completed, moved to Queue 1 (RR with 16ms quantum).

• If still not done, moved to Queue 2 (lowest, FCFS).

• Higher queues preempt lower ones.

• CPU-bound processes move down; I/O-bound stay up.

• Aging: Long-waiting processes in lower queues can move up to prevent starvation.

34. Write a note on interrupts.

An interrupt is a signal sent to the CPU to indicate that an event needs immediate attention. It
temporarily stops the current process, handles the event, and then resumes the previous process.

• Improves CPU efficiency.

• Handles I/O devices asynchronously.

• Enables multitasking.

Types:

• Hardware Interrupt: Generated by devices (e.g., keyboard, mouse, printer).


• Software Interrupt: Triggered by programs (e.g., system calls or exceptions).

[35] Free Space Management – Bit Vector and Grouping.

Free Space Management is the process of tracking available (free) and occupied space in storage to
efficiently allocate or deallocate space.

Bit Vector

A bit vector is an array of bits where each bit represents a block in storage:

• 1 = Block is occupied.

• 0 = Block is free.

Example: 10101100 means blocks 1, 3, 5, and 6 are occupied; blocks 2, 4, 7, and 8 are free.
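Allocation against a bit vector can be sketched as follows (Python; note the code numbers blocks from 0, while the example above counts from 1):

```python
bit_vector = [1, 0, 1, 0, 1, 1, 0, 0]   # 1 = occupied, 0 = free (block 0 first)

def first_free(bits):
    """Scan the bit vector for the first free block."""
    for block, bit in enumerate(bits):
        if bit == 0:
            return block
    return None          # disk is full

def allocate(bits):
    block = first_free(bits)
    if block is not None:
        bits[block] = 1  # mark the block as occupied
    return block

print(allocate(bit_vector))   # 1: the first 0 bit
print(allocate(bit_vector))   # 3: the next 0 bit
```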

Grouping

Grouping is an improvement over bit vectors, where instead of tracking each block individually,
groups of free blocks are stored with the starting block and the count of consecutive free blocks.

Example: {0-3, 5, 7-8} means blocks 0 to 3, 5, and 7 to 8 are free.

[36] Explain Resource Allocation Graph.

• Deadlock can be described more precisely in terms of a directed graph called a system resource allocation graph.
• A Resource Allocation Graph (RAG) is a directed graph used to represent the allocation of
resources in a system and is particularly useful in detecting deadlocks in a system. It shows
the relationship between processes and resources in a system.

1. Processes (P): Represented as nodes.

2. Resources (R): Represented as nodes, with multiple instances possible.

3. Edges:

• Request edge (P → R): Process requests a resource.

• Assignment edge (R → P): Resource is allocated to a process.
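With single-instance resources, a cycle in the RAG implies deadlock. A small sketch (Python; the example edges are assumed) detects such a cycle with depth-first search:

```python
# Edges of a resource allocation graph: request (P -> R) and assignment (R -> P)
edges = {
    "P1": ["R1"],   # P1 requests R1
    "R1": ["P2"],   # R1 is assigned to P2
    "P2": ["R2"],   # P2 requests R2
    "R2": ["P1"],   # R2 is assigned to P1 -> circular wait
}

def has_cycle(graph):
    """DFS cycle detection; a back edge to a node on the current path
    means a cycle exists in the graph."""
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True            # back edge -> cycle found
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)

print(has_cycle(edges))   # True: P1 -> R1 -> P2 -> R2 -> P1
```

Removing any one edge (for example, if R2 were free) breaks the cycle and the function returns False.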


[37] Preemptive vs Non-Preemptive Scheduling.

Feature | Preemptive Scheduling | Non-Preemptive Scheduling
Process Interruptions | Can be interrupted at any time. | Cannot be interrupted once started.
Examples | Round Robin, SRTF | FCFS, SJF
CPU Utilization | Typically better. | May be inefficient in certain cases.
Complexity | More complex due to context switching. | Simpler to implement.
Responsiveness | Higher responsiveness to high-priority tasks. | Lower responsiveness for longer tasks.

[Short notes on]

1. Spooling:
Spooling (Simultaneous Peripheral Operations On-Line) is a process of placing data into a temporary
storage area (called a spool) to be accessed and processed by devices like printers or disk drives.
Spooling allows the CPU to continue processing while slower devices handle data transfer
sequentially.

2. Dining Philosophers Problem:


The Dining Philosophers Problem is a classic synchronization problem in computer science, used to
illustrate the challenges of avoiding deadlock and ensuring resource allocation between multiple
processes. It involves five philosophers sitting around a table, each with a fork between them. They
must alternate between thinking and eating but must acquire both forks to eat. The problem is
solved by ensuring mutual exclusion, preventing deadlock, and allowing concurrency without
starvation.

3. Contiguous Memory Allocation:


In contiguous memory allocation, each process is allocated a single, contiguous block of memory.
The operating system keeps track of which parts of memory are occupied and free. This method is
simple and fast but suffers from fragmentation issues (both external and internal) and inefficient
memory usage as processes are loaded and unloaded.

4. Shortest Seek Time First (SSTF):


SSTF is a disk scheduling algorithm that selects the disk I/O request with the shortest seek time (the
smallest distance between the current position of the disk arm and the requested position). This
reduces the overall seek time but can lead to starvation for requests located farther from the
current position.
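A minimal SSTF sketch (Python; the request queue and the starting head position at cylinder 53 follow the common textbook example):

```python
def sstf(requests, head):
    """Service requests in shortest-seek-time-first order.
    Returns the service order and the total head movement."""
    pending, order, moved = list(requests), [], 0
    while pending:
        # pick the pending request closest to the current head position
        nearest = min(pending, key=lambda track: abs(track - head))
        moved += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, moved

order, moved = sstf([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(moved)   # 236 cylinders of total head movement
```

Requests 37 and 14 are served only after the nearby 65 and 67, illustrating how distant requests can be starved while closer ones keep arriving.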

5. Linked Allocation for File System:


Linked allocation involves using a linked list of blocks to store file data. Each file block contains a
pointer to the next block in the file. This method provides flexibility and eliminates external
fragmentation but may have overhead due to pointer management and slower access times
compared to contiguous allocation.

6. Address Binding in Case of Memory Management:


Address binding refers to mapping logical addresses (generated by programs) to physical addresses
(in main memory). This can be done at different stages:

• Compile-time binding: Addresses are determined at compile time, assuming that memory
will not change.

• Load-time binding: Addresses are determined when the program is loaded into memory.

• Execution-time binding: Addresses are determined during program execution, often using
dynamic memory allocation techniques.

7. Medium-Term Scheduler:
The medium-term scheduler is responsible for swapping processes in and out of memory. It works in
conjunction with the long-term and short-term schedulers to balance the degree of
multiprogramming. The medium-term scheduler temporarily removes processes from the CPU (swap
out) to free up memory and later swaps them back in (swap in) when resources become available.

8. Indexed Allocation:
Indexed allocation involves maintaining an index block for each file, where each index block contains
pointers to the actual data blocks of the file. This method eliminates fragmentation issues found in
contiguous allocation and allows for non-contiguous storage, but it requires additional space for
index blocks and increases access time.

9. Solution for Critical Section Problem:


The critical section problem involves ensuring that multiple processes do not simultaneously execute
in their critical sections (where shared resources are accessed). Solutions typically rely on mutual
exclusion, progress, and bounded waiting. Popular algorithms to solve this problem include:

• Peterson's Algorithm

• Semaphore-based solutions

• Monitors

• Mutex locks These methods ensure that only one process can be in its critical section at a
time and that other processes are appropriately synchronized.
