
UNIT III - MEMORY MANAGEMENT

Memory Management - Basic Concept of Memory - Address Binding; Logical and Physical Address Space - Memory Partitioning - Memory Allocation - Paging - Segmentation - Segmentation and Paging - Protection - Fragmentation - Compaction - Demand Paging - Page Replacement Algorithms - Classification of Page Replacement Algorithms.

MEMORY MANAGEMENT BASIC CONCEPT

Memory management is the functionality of an operating system that handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process will get memory at what time. It tracks whenever memory is freed or unallocated and updates the status accordingly.

Main memory refers to the physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Main memory is also known as RAM.

Logical and Physical Address in Operating System

A logical address is generated by the CPU while a program is running. The logical address is a virtual address, as it does not exist physically; therefore, it is also known as a virtual address. This address is used by the CPU as a reference to access a physical memory location. The term Logical Address Space refers to the set of all logical addresses generated from a program's perspective.

A hardware device called the Memory-Management Unit (MMU) maps each logical address to its corresponding physical address.

A physical address identifies the physical location of the required data in memory. The user never deals directly with a physical address but accesses it through its corresponding logical address.
Mapping Virtual Addresses to Physical Addresses
Address binding is the process of mapping from one address space to another. A logical address is the address generated by the CPU during execution, whereas a physical address refers to a location in the memory unit (the one actually loaded into memory). Note that the user deals only with logical (virtual) addresses. The logical address undergoes translation by the MMU, and the output of this translation is the corresponding physical address, i.e., the location of the code or data in RAM.
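
As a concrete illustration (the register values below are assumptions, not taken from the text above), the simplest form of this translation uses a relocation (base) register and a limit register: the MMU checks the logical address against the limit and then adds the base to produce the physical address. A minimal C sketch:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation and limit register values, chosen for illustration. */
#define RELOCATION_BASE 14000u  /* physical address where the process starts   */
#define LIMIT           3000u   /* size of the process's logical address space */

/* Translate a logical address the way a simple MMU with a
 * relocation register would: check the limit, then add the base. */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {
        fprintf(stderr, "trap: logical address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return RELOCATION_BASE + logical;
}

int main(void)
{
    unsigned logical = 346;                       /* example logical address */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}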

Address binding can be done in three different ways:

1. Compile time - If it is known at compile time where the process will reside in memory, then absolute addresses are generated, i.e., the physical addresses are embedded into the executable of the program during compilation. Loading such an executable into memory as a process is very fast. However, if the generated address space is already occupied by another process, the program crashes and it becomes necessary to recompile the program to change its address space.
2. Load time - If it is not known at compile time where the process will reside, then relocatable addresses are generated. The loader translates the relocatable addresses into absolute addresses: the base address of the process in main memory is added to all logical addresses to generate the absolute addresses. If the base address of the process changes, the process must be reloaded.
3. Execution time - The instructions are in memory and are being processed by the CPU, and additional memory may be allocated and/or deallocated at this time. This is used if the process can be moved from one memory area to another during execution (dynamic linking: linking that is done at load or run time). Example: compaction.

MEMORY PARTITIONING

In operating systems, memory management is the function responsible for allocating and managing the computer's main memory. The memory management function keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory.

There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, an executing process must be loaded entirely into main memory. The contiguous technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning

1. Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in main memory. In this scheme, the number of (non-overlapping) partitions in RAM is fixed, but the sizes of the partitions may or may not be the same. As this is contiguous allocation, no spanning is allowed. The partitions are made before execution, at system configuration time.
As illustrated in the figure above, the first process consumes only 1 MB of the 4 MB partition in main memory.
Hence, the internal fragmentation in the first block is (4 - 1) = 3 MB.
Sum of internal fragmentation in every block = (4-1) + (8-7) + (8-7) + (16-14) = 3 + 1 + 1 + 2 = 7 MB.

Now suppose a process P5 of size 7 MB arrives. This process cannot be accommodated in spite of the available free space, because of contiguous allocation (spanning is not allowed). Hence, the 7 MB becomes part of external fragmentation.
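
The following minimal C sketch simply reproduces the arithmetic above, assuming the partition sizes {4, 8, 8, 16} MB and resident process sizes {1, 7, 7, 14} MB from the figure:

#include <stdio.h>

int main(void)
{
    /* Partition sizes and the processes placed in them (in MB),
     * taken from the fixed-partitioning example above. */
    int partition[] = {4, 8, 8, 16};
    int process[]   = {1, 7, 7, 14};
    int n = 4, total_internal = 0;

    for (int i = 0; i < n; i++) {
        int internal = partition[i] - process[i];   /* unused space inside the partition */
        total_internal += internal;
        printf("partition %d: internal fragmentation = %d MB\n", i + 1, internal);
    }
    printf("total internal fragmentation = %d MB\n", total_internal);

    /* A new 7 MB process cannot be placed: every partition is occupied, and
     * the 7 MB of scattered free space cannot be combined (no spanning),
     * so it counts as external fragmentation. */
    return 0;
}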

Advantages of Fixed Partitioning:

1. Easy to implement:
The algorithms needed to implement fixed partitioning are simple: a process is placed into a suitable partition without any concern for the internal and external fragmentation that may arise.
2. Little OS overhead:
Fixed partitioning requires very little extra or indirect computational work from the operating system.

Disadvantages of Fixed Partitioning:
1. Internal fragmentation:
Main memory is used inefficiently. Any program, no matter how small, occupies an entire partition. This causes internal fragmentation.
2. External fragmentation:
The total unused space (as stated above) of the various partitions cannot be used to load a process, even though enough space is available, because it is not contiguous (spanning is not allowed).
3. Limited process size:
A process larger than the largest partition in main memory cannot be accommodated, because a partition's size cannot be varied according to the size of the incoming process. Hence, a process of size 32 MB in the example stated above is invalid.
4. Limitation on the degree of multiprogramming:
Partitions in main memory are made before execution, at system configuration time, so main memory is divided into a fixed number of partitions. If there are n1 partitions in RAM and n2 processes, then the condition n2 <= n1 must hold; having more processes than partitions in RAM is invalid in fixed partitioning.

2. Variable Partitioning
Variable partitioning is also a contiguous allocation technique. It is used to alleviate the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or at system configuration time.

Features of variable partitioning:
1. Initially RAM is empty, and partitions are made at run time according to each process's need, instead of at system configuration time.
2. The size of a partition is equal to the size of the incoming process.
3. Because the partition size varies with the need of the process, internal fragmentation is avoided and RAM is used efficiently.
4. The number of partitions in RAM is not fixed; it depends on the number of incoming processes and the size of main memory.

The advantages and disadvantages of variable partitioning over fixed partitioning are given below.

Advantages of Variable Partitioning

1. No internal fragmentation:
In variable partitioning, space in main memory is allocated strictly according to the need of the process, so there is no internal fragmentation; no unused space is left inside a partition.
2. No restriction on the degree of multiprogramming:
More processes can be accommodated due to the absence of internal fragmentation; processes can be loaded as long as free memory remains.
3. No limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition could not be loaded, and a process cannot be divided, as that is invalid in a contiguous allocation technique. In variable partitioning, the process size is not restricted, since the partition size is decided according to the process size.

Disadvantages of Variable Partitioning
1. Difficult implementation:
Implementing variable partitioning is more difficult than fixed partitioning, as it involves allocating memory at run time rather than at system configuration time.
2. External fragmentation:
There will be external fragmentation in spite of the absence of internal fragmentation. For example, suppose that in the example above, process P1 (2 MB) and process P3 (1 MB) complete their execution, leaving two holes of 2 MB and 1 MB. Now let a process P5 of size 3 MB arrive. The empty space cannot be allocated to it, because spanning is not allowed in contiguous allocation: the rule says that a process must be contiguously present in main memory to get executed. Hence this results in external fragmentation.

Thus P5 of size 3 MB cannot be accommodated in spite of the required amount of free space being available, because no spanning is allowed in contiguous allocation.

MEMORY ALLOCATION

To be executed, a process must first be placed in memory. Assigning space to a process in memory is called memory allocation. Memory allocation is one aspect of the more general term binding. There are two types of memory allocation, or equivalently two methods of binding: static and dynamic binding.

Types of Memory Allocation


1. Static Memory Allocation
Static memory allocation is performed when the compiler compiles the program and generates object files, the linker merges all these object files into a single executable file, and the loader loads this single executable file into main memory for execution. In static memory allocation, the size of the data required by the process must be known before the execution of the process begins.

If the data sizes are not known before execution, they have to be guessed. If the guessed size is larger than required, memory is wasted; if the guessed size is smaller, the process cannot execute properly.

The static memory allocation method needs no memory allocation operations during the execution of the process, since all the allocation work for the process is done before its execution starts. This leads to faster execution of the process.

In this respect, static memory allocation is more efficient than dynamic memory allocation.

Advantages of Static Memory Allocation

1. Static memory allocation provides an efficient way of assigning memory to a process.
2. All memory-assignment operations are done before execution starts, so there is no overhead of memory allocation operations while the program is running.
3. Static memory allocation provides faster execution, as no time is wasted allocating memory to the program at execution time.

Disadvantages of Static Memory Allocation

1. In static memory allocation, the system is unaware of the actual memory requirement of the program, so it has to guess the memory required.
2. Static memory allocation can lead to memory wastage, because it estimates the size of memory required by the program. If the estimated size is too large, memory is wasted; if it is too small, the program executes inappropriately.
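
As an aside, the same trade-off shows up inside a single C program: a statically allocated buffer must have its size fixed when the program is written, which mirrors the guessing problem described above. The buffer name and sizes below are purely illustrative:

#include <stdio.h>

/* Statically allocated: the size must be chosen when the program is written
 * and compiled. If fewer than 1000 readings arrive, the rest of the array is
 * wasted; if more arrive, the buffer is too small. */
#define MAX_READINGS 1000
static int readings[MAX_READINGS];

int main(void)
{
    int used = 300;                       /* only 300 slots actually needed */
    for (int i = 0; i < used; i++)
        readings[i] = i;
    printf("capacity = %d, used = %d, wasted slots = %d\n",
           MAX_READINGS, used, MAX_READINGS - used);
    return 0;
}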

2. Dynamic Memory Allocation

Dynamic memory allocation is performed while the program is executing. Here, memory is allocated to the entities of the program when they are about to be used for the first time while the program is running.

The actual size of the data required is known at run time, so exactly the required memory space is allocated to the program, thereby reducing memory wastage.

Dynamic memory allocation gives flexibility to the execution of the program, as the amount of memory space required can be decided while it runs. If the program is large, dynamic memory allocation can be performed separately on the different parts of the program that are currently in use. This reduces memory wastage and improves the performance of the system.

However, allocating memory dynamically creates overhead for the system. Some allocation operations are performed repeatedly during program execution, creating further overhead and slowing down the program.

Dynamic memory allocation does not require special support from the operating system; it is the responsibility of the programmer to design the program in a way that takes advantage of the dynamic memory allocation method.

Thus dynamic memory allocation is flexible but slower than static memory allocation.

Advantages of Dynamic Memory Allocation

1. Dynamic memory allocation provides a flexible way of assigning memory to a process.
2. Dynamic memory allocation reduces memory wastage, as it assigns memory to a process during the execution of the program and therefore knows the exact memory size required.
3. If the program is large, dynamic memory allocation is performed on the different parts of the program, and memory is assigned only to the part currently in use. This also reduces memory wastage and improves system performance.

Disadvantages of Dynamic Memory Allocation

1. The dynamic memory allocation method has the overhead of assigning memory to a process during its execution.
2. Sometimes the memory allocation actions are repeated several times during the execution of the program, which leads to more overhead.
3. The overhead of memory allocation at execution time slows down execution to some extent.
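
For contrast with the static sketch above, here is an equally illustrative dynamic counterpart: the amount of memory is decided at run time and only what is needed is requested from the heap (the value 300 is an assumption for the example):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int needed = 300;                                    /* size known only at run time     */
    int *readings = malloc(needed * sizeof *readings);   /* allocate exactly what is needed */
    if (readings == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    for (int i = 0; i < needed; i++)
        readings[i] = i;
    printf("allocated exactly %d slots at run time\n", needed);
    free(readings);                                      /* return the memory when done     */
    return 0;
}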

Paging in Operating System

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.

- Logical address or virtual address (represented in bits): an address generated by the CPU.
- Logical address space or virtual address space (represented in words or bytes): the set of all logical addresses generated by a program.
- Physical address (represented in bits): an address actually available on the memory unit.
- Physical address space (represented in words or bytes): the set of all physical addresses corresponding to the logical addresses.

Mapping Pages to Frames:

The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.

- The physical address space is conceptually divided into a number of fixed-size blocks, called frames.
- The logical address space is also split into fixed-size blocks, called pages.
- Page size = frame size.

Let us consider an example:

- Physical address = 12 bits, so the physical address space = 4 K words.
- Logical address = 13 bits, so the logical address space = 8 K words.
- Page size = frame size = 1 K words (assumption).

The address generated by the CPU is divided into:

- Page number (p): the number of bits required to represent the pages of the logical address space, i.e., the page number.
- Page offset (d): the number of bits required to represent a particular word within a page, i.e., the page size of the logical address space, the word number within a page, or the page offset.

The physical address is divided into:

- Frame number (f): the number of bits required to represent a frame of the physical address space, i.e., the frame number.
- Frame offset (d): the number of bits required to represent a particular word within a frame, i.e., the frame size of the physical address space, or the word number within a frame.
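
Using the numbers assumed in the example above, a 1 K-word page gives a 10-bit offset, so the 13-bit logical address carries a 3-bit page number and the 12-bit physical address carries a 2-bit frame number. The sketch below shows the split and a page-table lookup; the page-table contents are invented for illustration:

#include <stdio.h>

#define OFFSET_BITS 10                      /* 1 K words per page/frame     */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Hypothetical page table for illustration: 8 pages (13-bit logical space)
 * mapped onto 4 frames (12-bit physical space); -1 marks "not in memory". */
static int page_table[8] = {2, 0, -1, 1, -1, 3, -1, -1};

int main(void)
{
    unsigned logical = 0x14AB & 0x1FFF;     /* some 13-bit logical address  */
    unsigned p = logical >> OFFSET_BITS;    /* page number                  */
    unsigned d = logical & OFFSET_MASK;     /* page offset                  */

    if (page_table[p] < 0) {
        printf("page %u is not in memory (page fault)\n", p);
        return 0;
    }
    unsigned f = (unsigned)page_table[p];   /* frame number from page table */
    unsigned physical = (f << OFFSET_BITS) | d;
    printf("logical 0x%X -> page %u, offset %u -> frame %u -> physical 0x%X\n",
           logical, p, d, f, physical);
    return 0;
}
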
Segmentation in Operating System

A process is divided into segments. The chunks into which a program is divided, which are not necessarily all of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not; here the user's view is mapped onto physical memory.

There are two types of segmentation:

1. Virtual memory segmentation:
Each process is divided into a number of segments, not all of which are resident at any one point in time.
2. Simple segmentation:
Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table called the Segment Table stores the information about all such segments.

Segment Table - It maps the two-dimensional logical address into a one-dimensional physical address. Each of its entries holds:

- Base address: the starting physical address where the segment resides in memory.
- Limit: the length of the segment.

Translation of a two-dimensional logical address into a one-dimensional physical address (a minimal lookup is sketched after the list below):

The address generated by the CPU is divided into:

- Segment number (s): the number of bits required to represent the segment.
- Segment offset (d): the number of bits required to represent the size of the segment, i.e., an offset within it.
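
The following is a minimal C sketch of that translation, with an assumed segment table (the base and limit values are invented for illustration): the offset is checked against the limit and, if legal, added to the base.

#include <stdio.h>

struct segment { unsigned base; unsigned limit; };

/* Hypothetical segment table: base = starting physical address,
 * limit = length of the segment. Values chosen only for illustration. */
static struct segment segment_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300, 1100},   /* segment 2 */
};

/* Translate (segment number s, offset d) into a physical address.
 * Returns -1 to signal an addressing-error trap. */
long translate(unsigned s, unsigned d)
{
    if (d >= segment_table[s].limit)
        return -1;                           /* offset beyond segment length */
    return (long)segment_table[s].base + d;  /* base + offset                */
}

int main(void)
{
    printf("(2, 53)  -> %ld\n", translate(2, 53));    /* 4300 + 53 = 4353  */
    printf("(1, 500) -> %ld\n", translate(1, 500));   /* trap: 500 >= 400  */
    return 0;
}
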
Advantages of Segmentation:
- No internal fragmentation.
- The segment table consumes less space than the page table used in paging.

Disadvantage of Segmentation:
- As processes are loaded into and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.

COMPACTION

- Compaction is a technique that collects all the free memory present in the form of fragments into one large chunk of free memory, which can then be used to run other processes.
- It does this by moving all the processes towards one end of memory and all the available free space towards the other end, so that the free space becomes contiguous.
- Compaction is not always easy to do. It can be performed only when relocation is dynamic and done at execution time; it cannot be performed when relocation is static and done at load time or assembly time.

Before Compaction
- Before compaction, main memory has small free spaces scattered between occupied spaces. This condition is known as external fragmentation: because each free space is small, large processes cannot be loaded into these holes.

Main memory (before compaction): Occupied space | Free space | Occupied space | Occupied space | Free space

After Compaction
- After compaction, all the occupied space has been moved towards one end and the free space collected at the other end. This makes the free space contiguous and removes external fragmentation, so processes with large memory requirements can now be loaded into main memory.

Main memory (after compaction): Occupied space | Occupied space | Occupied space | Free space | Free space

Purpose of Compaction in Operating System

- While allocating memory to a process, the operating system often faces a situation where there is a sufficient total amount of free space in memory to satisfy the memory demand of a process, yet the request cannot be fulfilled because the free memory is non-contiguous. This problem is referred to as external fragmentation, and the compaction technique is used to solve it.
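
As an illustrative sketch (the block layout is assumed, not taken from the figures above), compaction can be modeled as sliding every occupied block towards the start of memory so that the remaining space merges into one free block:

#include <stdio.h>
#include <string.h>

#define MEM_SIZE 16

int main(void)
{
    /* A toy memory map: each cell is one unit; letters are units occupied by
     * a process, '.' is free. Layout assumed for illustration only. */
    char memory[MEM_SIZE + 1] = "AA..BBB.CCCC....";
    char compacted[MEM_SIZE + 1];
    int w = 0;

    /* Copy every occupied unit towards the start, preserving order. */
    for (int i = 0; i < MEM_SIZE; i++)
        if (memory[i] != '.')
            compacted[w++] = memory[i];

    /* Everything after the occupied units becomes one contiguous hole. */
    memset(compacted + w, '.', MEM_SIZE - w);
    compacted[MEM_SIZE] = '\0';

    printf("before: %s\n", memory);     /* scattered holes (external fragmentation) */
    printf("after : %s\n", compacted);  /* one large free chunk at the end          */
    return 0;
}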

DEMAND PAGING:

Demand paging requires that pages be brought into memory only when the executing process demands them. This is often referred to as lazy evaluation, since only those pages demanded by the process are swapped from secondary storage into main memory.

Commonly, a page table implementation is used to achieve this. The page table maps logical memory to physical memory and marks each page with a valid/invalid bit. A valid page is one that currently resides in main memory; an invalid page is one that currently resides in secondary memory. When a process tries to access a page, the following steps are generally followed (a toy simulation of this flow is sketched after the list):

- Attempt to access the page.
- If the page is valid (in memory), continue processing the instruction as normal.
- If the page is invalid, a page-fault trap occurs.
- Check whether the memory reference is a valid reference to a location in secondary memory. If not, the process is terminated (illegal memory access). Otherwise, the required page must be paged in.
- Schedule a disk operation to read the desired page into main memory.
- Restart the instruction that was interrupted by the operating system trap.
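
The following toy C simulation of that flow is purely illustrative (the page count, frame count, and reference sequence are assumptions): a valid bit per page decides whether an access proceeds directly or first triggers a page-in from the simulated disk.

#include <stdio.h>

#define NUM_PAGES  8
#define NUM_FRAMES 4   /* not exhausted in this short trace; replacement is the next topic */

static int valid[NUM_PAGES];                 /* 1 if the page is in memory     */
static int frame_of[NUM_PAGES];              /* frame holding the page, if any */
static int next_free_frame = 0;

/* Simulate one memory access to 'page'. */
void access_page(int page)
{
    if (page < 0 || page >= NUM_PAGES) {     /* reference outside the process's space */
        printf("page %d: illegal reference, terminate process\n", page);
        return;
    }
    if (!valid[page]) {                      /* page-fault trap                */
        printf("page %d: page fault, reading from disk into frame %d\n",
               page, next_free_frame);
        frame_of[page] = next_free_frame++;  /* "schedule" the disk read       */
        valid[page] = 1;                     /* update the page table          */
        /* the faulting instruction would now be restarted                     */
    }
    printf("page %d: access via frame %d\n", page, frame_of[page]);
}

int main(void)
{
    int refs[] = {0, 2, 0, 5, 9};            /* 9 is an illegal reference      */
    for (int i = 0; i < 5; i++)
        access_page(refs[i]);
    return 0;
}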

PAGE REPLACEMENT ALGORITHMS

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.

Page Fault - A page fault happens when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory.

Different page replacement algorithms suggest different ways to decide which page to replace. The target of all algorithms is to reduce the number of page faults.

TYPES:
1. First In First Out (FIFO) page replacement algorithm
2. Optimal page replacement algorithm
3. Least Recently Used (LRU) page replacement algorithm
4. Not Recently Used (NRU) page replacement algorithm
5. Second Chance page replacement algorithm
1. First In First Out (FIFO) page replacement algorithm
This is the simplest page replacement algorithm. The operating system keeps track of all the pages present in memory in a queue. When a page needs to be replaced, the oldest page in the queue is selected and replaced with the new page.

For instance, suppose we have the page reference string (1, 3, 0, 3, 5, 6, 3) with three page frames. Initially all the slots are empty, so the first three references (1, 3, 0) are placed in the empty slots, causing 3 page faults. The next reference (3) is already in memory, so there is no page fault. Then comes (5); since there is no free frame, the newly arrived page replaces the oldest resident page, i.e., (1), with 1 page fault. Likewise, (6) then arrives and replaces (3) with another page fault, and finally (3) arrives and replaces (0) with one more page fault.

Every time the OS has to bring a page into memory, a page fault is counted.

Total page faults: 6
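
A minimal C simulation of FIFO replacement on the reference string and three frames used in the example above; the circular index plays the role of the queue of resident pages:

#include <stdio.h>

int main(void)
{
    int refs[] = {1, 3, 0, 3, 5, 6, 3};      /* reference string from the example */
    int n = 7, frames[3] = {-1, -1, -1};
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) hit = 1;

        if (!hit) {                          /* page fault: evict the oldest page */
            frames[oldest] = refs[i];
            oldest = (oldest + 1) % 3;       /* advance the FIFO queue head       */
            faults++;
        }
        printf("ref %d -> [%2d %2d %2d] %s\n", refs[i],
               frames[0], frames[1], frames[2], hit ? "hit" : "fault");
    }
    printf("total page faults = %d\n", faults);   /* prints 6 */
    return 0;
}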

2. Optimal page replacement algorithm

The optimal page replacement algorithm says that the newly arrived page will replace the page in memory that will not be used for the longest period of time in the future.

For example, consider the page reference string (7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2) with 4 page frames.

Initially all the memory slots are empty, so (7, 0, 1, 2) are allocated to memory with 4 page faults. As (0) is already there, there is no page fault. Next in the string is (3); it replaces (7), since 7 is not used for the longest period of time in the future, causing one page fault. Again, 0 is already there, so no page fault. Then 4 replaces 1 with one page fault. For the rest of the string there are no page faults, as all the arriving pages are already in memory.

Total page faults = 6
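
An illustrative C sketch of the optimal policy on the same reference string: on a fault with no free frame, the victim is the resident page whose next use lies farthest in the future (or that is never used again).

#include <stdio.h>

#define NFRAMES 4

static int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
static int n = 13;

/* Index of the next use of 'page' at or after position 'pos' (n + 1 if never). */
static int next_use(int page, int pos)
{
    for (int i = pos; i < n; i++)
        if (refs[i] == page) return i;
    return n + 1;
}

int main(void)
{
    int frames[NFRAMES] = {-1, -1, -1, -1}, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (hit) continue;

        faults++;
        int victim = -1;
        for (int j = 0; j < NFRAMES && victim < 0; j++)
            if (frames[j] == -1) victim = j;        /* prefer a free frame          */
        if (victim < 0) {
            victim = 0;                             /* else evict the resident page */
            for (int j = 1; j < NFRAMES; j++)       /* used farthest in the future  */
                if (next_use(frames[j], i + 1) > next_use(frames[victim], i + 1))
                    victim = j;
        }
        frames[victim] = refs[i];
    }
    printf("optimal: total page faults = %d\n", faults);   /* prints 6 */
    return 0;
}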

3. Least Recently Used (LRU) page replacement algorithm

In this algorithm, the new page replaces the existing page that has been used least recently. In other words, the page that has not been referred to for the longest time is replaced by the newly arrived page.

This algorithm is the mirror image of the optimal page replacement algorithm: it looks backwards at past references, whereas the optimal algorithm looks forwards at future references.

So, if we have the page reference string (7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2) with four page frames, the page replacement takes place as in the sketch below.

Total page faults = 6
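
A minimal C sketch of LRU on the same reference string, recording the time of last use for each frame; the frame touched longest ago is the eviction victim:

#include <stdio.h>

#define NFRAMES 4

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = 13;
    int frames[NFRAMES], last_used[NFRAMES], faults = 0;

    for (int j = 0; j < NFRAMES; j++) { frames[j] = -1; last_used[j] = -1; }

    for (int i = 0; i < n; i++) {
        int slot = -1;
        for (int j = 0; j < NFRAMES; j++)              /* already resident?         */
            if (frames[j] == refs[i]) slot = j;

        if (slot < 0) {                                /* page fault                */
            faults++;
            slot = 0;
            for (int j = 1; j < NFRAMES; j++)          /* free frame, or the frame  */
                if (last_used[j] < last_used[slot])    /* touched longest ago       */
                    slot = j;
            frames[slot] = refs[i];
        }
        last_used[slot] = i;                           /* record the reference time */
    }
    printf("LRU: total page faults = %d\n", faults);   /* prints 6 */
    return 0;
}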

4. Not Recently Used (NRU) page replacement algorithm

In most computers with virtual memory, each page is associated with two status bits:

- The Referenced (R) bit is set whenever the page is read or written.
- The Modified (M) bit is set whenever the page is written.

In this algorithm, the operating system uses the R and M bits to distinguish between pages. When a process starts, both bits are set to 0 for each of its pages by the operating system. When a page is first referenced, a page fault occurs and the OS sets its R bit; if the page is then modified, another page fault occurs and the OS sets its M bit.

Periodically the R bit is cleared by a clock interrupt; the M bit is not reset by the clock interrupt. When a page fault occurs and there are no empty frames, the operating system inspects all the pages and divides them into four classes:

- Class 0: not referenced, not modified
- Class 1: not referenced, modified
- Class 2: referenced, not modified
- Class 3: referenced, modified

Based on the R and M bits, every page falls into one of the above four classes. The algorithm removes a random page from the lowest-numbered non-empty class: if class 0 is empty, a random page from class 1 is replaced, and so on.
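
A small illustrative sketch of the class computation and victim selection; the R and M values below are made up:

#include <stdio.h>
#include <stdlib.h>

#define NPAGES 6

int main(void)
{
    /* Hypothetical R and M bits for six resident pages. */
    int R[NPAGES] = {1, 0, 1, 0, 1, 0};
    int M[NPAGES] = {1, 0, 0, 1, 1, 0};

    /* class = 2*R + M: 0 = (0,0), 1 = (0,1), 2 = (1,0), 3 = (1,1) */
    int lowest = 4, count = 0, candidates[NPAGES];

    for (int p = 0; p < NPAGES; p++) {
        int cls = 2 * R[p] + M[p];
        if (cls < lowest) { lowest = cls; count = 0; }
        if (cls == lowest) candidates[count++] = p;
    }

    /* NRU evicts a random page from the lowest-numbered non-empty class. */
    int victim = candidates[rand() % count];
    printf("lowest non-empty class = %d, victim page = %d\n", lowest, victim);
    return 0;
}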

5. Second Chance page replacement algorithm

The second chance page replacement algorithm is a modified version of the FIFO algorithm.

As the name suggests, pages are given a second chance. The pages are kept in a linked list in the order in which they arrived, exactly as in FIFO. When a page must be replaced, the operating system inspects the oldest page in the list: if its R bit is 0, the page is both old and unreferenced, so it is replaced immediately; if its R bit is 1, the bit is cleared and the page is moved to the end of the list as though it had just arrived, and the search continues with the next oldest page.

In effect, the operating system replaces the oldest page that has also not been recently referenced.
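
An illustrative sketch of the replacement step (the list contents and R bits are assumed): the oldest page is examined, a page with R = 1 loses its bit and moves to the tail, and the first page found with R = 0 becomes the victim.

#include <stdio.h>

#define NPAGES 4

int main(void)
{
    /* FIFO order: index 0 is the oldest page. R bits assumed for illustration. */
    int pages[NPAGES] = {3, 7, 1, 5};
    int R[NPAGES]     = {1, 1, 0, 1};

    int victim = -1;
    while (victim < 0) {
        if (R[0] == 0) {
            victim = pages[0];               /* old and not referenced: replace  */
        } else {
            int p = pages[0];                /* referenced: clear R and move the */
            for (int i = 0; i < NPAGES - 1; i++) {   /* page to the tail         */
                pages[i] = pages[i + 1];
                R[i] = R[i + 1];
            }
            pages[NPAGES - 1] = p;
            R[NPAGES - 1] = 0;               /* second chance: bit cleared       */
        }
    }
    printf("page chosen for replacement: %d\n", victim);   /* prints 1 */
    return 0;
}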
