Module_4_Memory Management

The document discusses memory management and virtual memory, detailing various strategies such as contiguous and non-contiguous memory allocation, partition management techniques, and the importance of efficient memory utilization. It outlines the requirements for memory management, including relocation, protection, sharing, logical organization, and physical organization, while also addressing issues like fragmentation and compaction. Additionally, it covers techniques like paging and segmentation, along with their respective algorithms for managing memory effectively.


Memory Management and Virtual Memory

Agenda
• Memory Management Strategies
• Contiguous versus Non-Contiguous Memory Allocation
• Partition Management Techniques, Logical versus Physical Address Space
• Swapping, Paging
• Segmentation, Segmentation with Paging
• Demand Paging, Page Replacement
• Page-Replacement Algorithms, Performance of Demand Paging
• Thrashing
• Demand Segmentation


The need for memory management
• Memory is cheap today, and getting cheaper
  – But applications are demanding more and more memory; there is never enough!
• Memory management involves swapping blocks of data from secondary storage.
• Memory I/O is slow compared to the CPU.
Memory Management
• Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
• Memory management is the functionality that manages the various kinds of memory, for example RAM, HDD, cache, and registers.
• Goal: efficient utilization of memory

Memory Management Requirements
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Requirements: Relocation
• The programmer does not know where the program will be placed in memory when it is executed
  – it may be swapped to disk and return to main memory at a different location (relocated)
• Memory references must be translated to the actual physical memory address
Requirements: Relocation
• The available memory is generally shared among a number of processes in a multiprogramming system, so it is not possible to know in advance which other programs will be resident in main memory when this program executes. Swapping active processes in and out of main memory enables the operating system to maintain a larger pool of ready-to-execute processes.
• When a program is swapped out to disk, there is no guarantee that it will occupy its previous memory location when it is swapped back in, since that location may now be occupied by another process. We may need to relocate the process to a different area of memory; thus a program may be moved within main memory due to swapping.
Memory Management Terms

Table 7.1 Memory Management Terms
Term    | Description
Frame   | Fixed-length block of main memory.
Page    | Fixed-length block of data in secondary memory (e.g. on disk).
Segment | Variable-length block of data that resides in secondary memory.

[Figure slide: Addressing]
Requirements: Protection
• Processes should not be able to reference memory locations in another process without permission.
• Every process must be protected against unwanted interference, whether accidental or intentional, when another process tries to write into its memory.
• It is impossible to check absolute addresses at compile time
  – they must be checked at run time
Requirements: Sharing
• Allow several processes to access the same portion of memory
• It is better to allow each process access to the same copy of the program rather than have its own separate copy
Requirements: Logical Organization
• Main memory is organized as a linear, one-dimensional address space consisting of a sequence of bytes or words.
• Programs are written in modules
  – Modules can be written and compiled independently
• Different degrees of protection can be given to modules (read-only, execute-only)
• Modules can be shared among processes
• Segmentation helps here
Requirements: Physical Organization
• Computer memory is organized into two levels:
  A) Main memory: provides fast access at relatively high cost, is volatile, and does not provide permanent storage.
  B) Secondary memory: slower, cheaper, and usually non-volatile.
• The programmer does not know how much space will be available
Memory Partitioning
• The principal operation of memory management is to bring programs into main memory for execution by the processor.
• Degree of multiprogramming: the number of processes kept in RAM.
• Partitioning is an early method of managing memory
  – Pre-virtual memory
  – Not used much now
Memory Management Techniques
• Contiguous
  – Fixed Partition (Static)
  – Variable Partition (Dynamic)
• Non-Contiguous
  – Paging
  – Multilevel Paging
  – Inverted Paging
  – Segmentation
  – Segmented Paging
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Contiguous Allocation
• Multiple processes reside in memory at the same time
• Main memory is usually divided into two partitions:
  • the resident operating system, usually held in low memory
  • user processes, held in high memory
• Each process is contained in a single contiguous section of memory
• Contiguous memory allocation is a memory management method where each process is given a single, continuous block of memory. This means all the data for a process is stored in adjacent memory locations.
Contiguous Allocation (Cont.)
• Multiple-partition allocation
  • Divide memory into several fixed-size partitions
  • Each partition stores one process
  • The degree of multiprogramming is limited by the number of partitions
  • When a partition is free, a process is loaded from the job queue
  • MFT (IBM OS/360)
Contiguous Allocation (Cont.)
• Multiple-partition allocation – variable partition scheme
  • Hole – a block of available memory; holes of various sizes are scattered throughout memory
  • The OS keeps a table of free memory (a sketch of this bookkeeping follows the figure below)
  • When a process arrives, it is allocated memory from a hole large enough to accommodate it
  • A process exiting frees its partition; adjacent free partitions are combined
  • The operating system maintains information about:
    a) allocated partitions   b) free partitions (holes)
[Figure: successive snapshots of main memory showing the OS in low memory and processes 5, 8, 9, 10, and 2 being loaded and freed, with holes appearing between them]
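To make the bookkeeping concrete, here is a minimal sketch in C of how such a table of allocated and free partitions might be maintained; the struct and function names are assumptions for illustration, not any particular operating system's data structures. Keeping the table sorted by base address makes merging adjacent holes a simple neighbour check.

#include <stddef.h>

/* Hypothetical partition table kept sorted by base address. */
typedef struct { size_t base; size_t size; int free; } Partition;

/* Release partition i and coalesce it with free neighbours so that
   adjacent holes become one larger hole. Returns the new table length. */
int release_partition(Partition *t, int n, int i) {
    t[i].free = 1;
    /* merge with the partition that follows, if it is free */
    if (i + 1 < n && t[i + 1].free) {
        t[i].size += t[i + 1].size;
        for (int k = i + 1; k + 1 < n; k++) t[k] = t[k + 1];
        n--;
    }
    /* merge with the partition that precedes, if it is free */
    if (i > 0 && t[i - 1].free) {
        t[i - 1].size += t[i].size;
        for (int k = i; k + 1 < n; k++) t[k] = t[k + 1];
        n--;
    }
    return n;
}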


Non-Contiguous Allocation
• Non-contiguous memory allocation is a memory management method where a process is divided into smaller parts, and these parts are stored in different, non-adjacent memory locations. This means the entire process does not need to be stored in one continuous block of memory.
Fixed Partitioning
• The number of partitions is fixed
• The sizes of the partitions may or may not be equal
• Allocation is contiguous; spanning across partitions is not allowed
Fixed Partitioning
• Equal-size partitions:
  – Any process whose size is less than or equal to the partition size can be loaded into an available partition
• The operating system can swap a process out of a partition
  – if none are in a ready or running state
Fixed Partitioning Problems
• A program may not fit in a partition (limit on process size)
  – The programmer must design the program with overlays
• Main memory use is inefficient
  – Any program, no matter how small, occupies an entire partition
  – This results in internal fragmentation
• Limitation on the degree of multiprogramming
Solution – Unequal-Size Partitions
• Lessens both problems
  – but doesn't solve them completely
• In Fig. b,
  – Programs up to 16M can be accommodated without overlays
  – Smaller programs can be placed in smaller partitions, reducing internal fragmentation
• Equal-size partitions
  – Placement is trivial (no options)
• Unequal-size partitions
  – Each process can be assigned to the smallest partition within which it will fit
  – A queue is kept for each partition
  – Processes are assigned so as to minimize wasted memory within a partition
Remaining Problems with Fixed Partitions
• The number of active processes is limited by the system
  – i.e. limited by the pre-determined number of partitions
• A large number of very small processes will not use the space efficiently
  – in either the fixed or variable length partition method
• The fixed partitioning scheme is not used nowadays
  – e.g. it was used in an early IBM mainframe OS
Dynamic Partitioning
► Partitions are of variable length and number
► Space is allocated to a process only when it actually comes into RAM
► Each process is allocated exactly as much memory as it requires
► RAM is kept empty initially, and partitioning is done at run time
► Eventually holes form in main memory; this is called external fragmentation
► Compaction must be used to shift processes so they are contiguous and all free memory is in one block
► Used in IBM's OS/MVT (Multiprogramming with a Variable number of Tasks)
Dynamic Partitioning: an example
► A hole of 64K is left after loading 3 processes: not enough room for another process
► Eventually each process is blocked. The OS swaps out process 2 to bring in process 4
Dynamic Partitioning: an example (continued)
► Another hole of 96K is created
► Eventually each process is blocked. The OS swaps out process 1 to bring process 2 back in, and another hole of 96K is created...
► Compaction would produce a single hole of 256K
External Fragmentation and Compaction
• External fragmentation:
  A situation in which there are a lot of small holes in memory. As time goes on, memory becomes more and more fragmented and memory utilization declines. This phenomenon is referred to as external fragmentation.
• Compaction:
  A technique for overcoming external fragmentation. From time to time the OS shifts the processes so that they are contiguous and all the free memory is together in one block. The difficulty with compaction is that it is a time-consuming procedure and wasteful of processor time.
Before Compaction
• Before compaction, main memory has some free space scattered between occupied regions. This condition is known as external fragmentation. Because each piece of free space between occupied regions is small, large processes cannot be loaded into it.

After Compaction
• After compaction, all the occupied space has been moved up and the free space collected at the bottom. This makes the free space contiguous and removes external fragmentation. Processes with large memory requirements can now be loaded into main memory.
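A minimal compaction sketch in C over an assumed partition table (sorted by base address and describing all of memory); the names are hypothetical, and a real OS would also copy the process images and update their relocation registers.

#include <stddef.h>

typedef struct { size_t base; size_t size; int free; } Partition;

/* Slide allocated partitions toward low memory and leave one hole at
   the top. Returns the number of entries left in the table. */
int compact(Partition *t, int n, size_t memory_size) {
    size_t next_base = 0;
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (!t[i].free) {                 /* keep allocated partitions */
            t[out] = t[i];
            t[out].base = next_base;      /* slide down to the next free base */
            next_base += t[out].size;
            out++;
        }
    }
    if (next_base < memory_size) {        /* one big hole at the end */
        t[out].base = next_base;
        t[out].size = memory_size - next_base;
        t[out].free = 1;
        out++;
    }
    return out;
}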
Advantages of Dynamic Partitioning
• No internal fragmentation
• No limitation on the number of processes
• No limitation on process size

Disadvantages of Dynamic Partitioning
• External fragmentation (can be resolved with compaction, but compaction takes a lot of time)
• Allocation/deallocation is complex
Internal Fragmentation
• Internal fragmentation occurs when a memory block allocated to a process is larger than the size the process requested. The unused space left inside the block creates the internal fragmentation problem.
• Example: Suppose fixed partitioning is used and memory has blocks of 3MB, 6MB, and 7MB. A new process p4 of size 2MB arrives and demands a block of memory. It gets the 3MB block, but 1MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
External Fragmentation
• External fragmentation: there is enough free memory in total, but it cannot be assigned to a process because the free blocks are not contiguous.
• Example: Continuing the example above, suppose three processes p1, p2, and p3 arrive with sizes 2MB, 4MB, and 7MB respectively, and they are allocated the 3MB, 6MB, and 7MB blocks. The allocations for p1 and p2 leave 1MB and 2MB unused. Now a new process p4 arrives and demands a 3MB block of memory. That much free memory is available in total, but it cannot be assigned because it is not contiguous. This is called external fragmentation.
Placement Algorithms (Dynamic Partitioning)
• Memory compaction is time consuming, so the operating system must choose carefully which free block to allocate to a process
• The following placement algorithms might be considered for assigning a free block to a process:
  1. First Fit
  2. Next Fit
  3. Best Fit
  4. Worst Fit
• All, of course, are limited to choosing among free blocks of main memory that are equal to or larger than the process to be loaded
Dynamic Partitioning: First Fit
• First-fit algorithm
  – Scans memory from the beginning and chooses the first available block that is large enough
  – Allocates the first hole that is big enough
  – Fastest
  – May leave many processes loaded at the front end of memory that must be searched over when trying to find a free block
Dynamic Partitioning: Next Fit
• Next-fit algorithm
  – Scans memory starting from the location of the last placement
  – More often allocates a block of memory at the end of memory, where the largest block is usually found
  – The largest block of memory is broken up into smaller blocks
  – Compaction is required to obtain a large block at the end of memory
Placement Algorithms (Dynamic Partitioning): Best Fit
• The operating system must decide which free block to allocate to a process
• Best-fit algorithm
  – Chooses the block that is closest in size to the request
  – Must search the entire list
  – Since the smallest adequate block is found for the process, the smallest amount of fragmentation is left
Dynamic Partitioning: Worst Fit
• Worst-fit algorithm
  – Allocates the largest hole
  – The allocator traverses the whole memory, searches for the largest hole/partition, and places the process in that hole/partition. It is slow because the entire memory must be traversed to find the largest hole.
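The scan-based policies above can be summarised in a short C sketch over an assumed free-block table; the struct and function names are illustrative only, not taken from any particular allocator.

#include <stddef.h>

/* Hypothetical free-block table: each entry is a block of `size` bytes
   at `base`; `free` marks whether it is a hole. */
typedef struct { size_t base; size_t size; int free; } Block;

/* First fit: the first hole that is large enough (or -1 if none). */
int first_fit(const Block *b, int n, size_t request) {
    for (int i = 0; i < n; i++)
        if (b[i].free && b[i].size >= request)
            return i;
    return -1;
}

/* Best fit: the smallest hole that still fits (or -1 if none). */
int best_fit(const Block *b, int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (b[i].free && b[i].size >= request &&
            (best == -1 || b[i].size < b[best].size))
            best = i;
    return best;
}

/* Worst fit: the largest hole that fits (or -1 if none). */
int worst_fit(const Block *b, int n, size_t request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (b[i].free && b[i].size >= request &&
            (worst == -1 || b[i].size > b[worst].size))
            worst = i;
    return worst;
}

Next fit differs from first fit only in that the scan resumes from the index of the previous placement instead of from index 0.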
Paging
• Partition memory into small, equal, fixed-size chunks and divide each process into chunks of the same size
• The chunks of a process are called pages
• The chunks of memory are called frames
Paging
• The operating system maintains a page table for each process
  – It contains the frame location for each page of the process
  – A memory address consists of a page number and an offset within the page
Processes and Frames
[Figure: pages A.0–A.3, B.0–B.2, C.0–C.3, and D.0–D.4 of four processes placed into main-memory frames]

Page Table
[Figure: the corresponding page tables]
Segmentation
• A program can be subdivided into segments
  – Segments may vary in length
  – There is a maximum segment length
• Addressing consists of two parts:
  – a segment number and
  – an offset
• Segmentation is similar to dynamic partitioning
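A minimal sketch of segment-based address translation, assuming a per-process segment table of (base, limit) pairs; the structure and names are illustrative, not a specific hardware interface.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical segment table entry: physical base and segment length. */
typedef struct { uint32_t base; uint32_t limit; } SegmentEntry;

/* Translate (segment number, offset) to a physical address, trapping
   on an out-of-range segment or an offset beyond the segment limit. */
uint32_t segment_translate(uint32_t seg, uint32_t offset,
                           const SegmentEntry *table, uint32_t nsegs) {
    if (seg >= nsegs || offset >= table[seg].limit) {
        fprintf(stderr, "segmentation violation\n");  /* would trap to the OS */
        exit(EXIT_FAILURE);
    }
    return table[seg].base + offset;   /* physical = segment base + offset */
}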
[Figure slides: Logical Addresses, Segmentation, Paging]
Paging
• The physical address space of a process can be noncontiguous;
• a process is allocated physical memory whenever memory is available
• Divide physical memory into fixed-sized blocks called frames
  • the frame size is a power of 2, between 512 bytes and 16 Mbytes
• Divide logical memory into blocks of the same size, called pages
• To run a program of size N pages, find N free frames and load the program
• The backing store is likewise split into pages
• Set up a page table to translate logical to physical addresses
• The system keeps track of all free frames
Paging Model of Logical and Physical Memory
[Figure: a page table translating logical addresses to physical addresses]
Address Translation Scheme
• An address generated by the CPU is divided into:
  • Page number (p) – used as an index into the page table, which contains the base address of each page in physical memory
  • Page offset (d) – the offset within the page, combined with the base address to define the physical memory address that is sent to the memory unit

    page number | page offset
         p      |      d
     (m-n bits) |  (n bits)

• For a logical address space of 2^m and a page size of 2^n
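As a concrete illustration, the split and the page-table lookup can be written in a few lines of C; the constant below assumes 4 KB pages (n = 12) and a flat array page table, which are assumptions for the sketch rather than values from the slides.

#include <stdint.h>

#define PAGE_BITS   12                        /* n: assume 4 KB pages */
#define OFFSET_MASK ((1u << PAGE_BITS) - 1)

/* Translate a logical address using a flat page table that maps
   page number -> frame number (hypothetical layout). */
uint32_t translate(uint32_t logical, const uint32_t *page_table) {
    uint32_t p = logical >> PAGE_BITS;        /* page number p */
    uint32_t d = logical & OFFSET_MASK;       /* page offset d */
    uint32_t frame = page_table[p];           /* frame that holds page p */
    return (frame << PAGE_BITS) | d;          /* physical address */
}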


Paging Hardware
Paging Example
• n = 2 and m = 4: a 32-byte physical memory with 4-byte pages, and a 16-byte logical address space
• This is the user's view of memory; binding to physical addresses happens at run time through the page table (page 0 → frame 5, page 1 → frame 6, page 3 → frame 2)
• Logical address 0 = page 0, offset 0 (0*4+0); physical address: 5*4+0 = 20
• Logical address 3 = page 0, offset 3 (0*4+3); physical address: 5*4+3 = 23
• Logical address 4 = page 1, offset 0 (1*4+0); physical address: 6*4+0 = 24
• Logical address 13 = page 3, offset 1 (3*4+1); physical address: 2*4+1 = 9
Paging
• Is there external fragmentation? No – any free frame can be allocated to any page.
• Calculating internal fragmentation:
  • Page size = 2,048 bytes
  • Process size = 72,766 bytes
  • 35 pages + 1,086 bytes
  • Internal fragmentation of 2,048 - 1,086 = 962 bytes
• So are small frame sizes desirable?
  • Smaller pages increase the page table size
  • and give poor disk I/O
• Page sizes have been growing over time
  • Solaris supports two page sizes – 8 KB and 4 MB
• The user's view and physical memory are now very different
  • User view: the process sits in a single contiguous memory space
• By implementation, a process can only access its own memory
  • This provides protection
• Each page table entry is 4 bytes (32 bits) long
• Each entry can point to one of 2^32 page frames
• If each frame is 4 KB, the system can address 2^44 bytes (16 TB) of physical memory
• Exercise: the virtual address space is 16 MB. What is the page table size?
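One possible worked answer to the exercise above, assuming 4 KB pages and the 4-byte page-table entries mentioned earlier (these assumptions are not stated on the slide):

  number of pages = 16 MB / 4 KB = 2^24 / 2^12 = 2^12 = 4,096 pages
  page table size = 4,096 entries × 4 bytes = 16 KB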
Free Frames
• Process P1 arrives
  • It requires n pages => n frames must be available
  • Allocate n frames to process P1
  • Create the page table for P1

[Figure: RAM before allocation and after allocation — the frame table is the system's view, the page table is the user's view]
Implementation of Page Table
• For each process, the page table is kept in main memory
• The page-table base register (PTBR) points to the page table
• The page-table length register (PTLR) indicates the size of the page table
• In this scheme every data/instruction access requires two memory accesses
  • one for the page table entry and one for the data/instruction
• The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
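A rough sketch of the lookup order — TLB first, then the in-memory page table on a miss. The structure, its size, and the linear search below are assumptions chosen for clarity, not how real associative hardware works.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16
#define PAGE_BITS   12                 /* assume 4 KB pages */

typedef struct { uint32_t page; uint32_t frame; bool valid; } TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];

/* Returns true and sets *frame if the page number is cached in the TLB. */
static bool tlb_lookup(uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;
        }
    return false;
}

uint32_t translate_with_tlb(uint32_t logical, const uint32_t *page_table) {
    uint32_t page   = logical >> PAGE_BITS;
    uint32_t offset = logical & ((1u << PAGE_BITS) - 1);
    uint32_t frame;
    if (!tlb_lookup(page, &frame)) {   /* TLB miss: the extra memory access */
        frame = page_table[page];      /* walk the in-memory page table */
        /* a real MMU would also install this mapping in the TLB here */
    }
    return (frame << PAGE_BITS) | offset;
}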
Demand Paging
⚫ Bring a page into memory only when it is needed
  ⚫ Less I/O needed
  ⚫ Less memory needed
  ⚫ Faster response
  ⚫ More users
⚫ A page is needed ⇒ reference it
  ⚫ invalid reference ⇒ abort
  ⚫ not in memory ⇒ bring it into memory
⚫ Valid−invalid bit: with each page table entry a valid−invalid bit is associated
  (1 ⇒ in memory, 0 ⇒ not in memory)
⚫ Initially the valid−invalid bit is set to 0 on all entries
⚫ During address translation, if the valid−invalid bit in a page table entry is 0 ⇒ page fault
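A minimal sketch of the valid−invalid check performed on every translation under demand paging. The field names and the stub fault handler below are assumptions for illustration, not a real kernel interface; 4 KB pages are assumed.

#include <stdint.h>

typedef struct {
    uint32_t frame;   /* frame number, meaningful only when valid == 1 */
    int      valid;   /* 1 => page is in memory, 0 => not in memory */
} PageTableEntry;

/* Hypothetical fault handler: a real OS would verify the reference is
   legal, find a free frame, read the page from the backing store,
   update the entry, and restart the faulting instruction. This stub
   only pretends frame 0 was allocated and filled. */
static void page_fault_handler(uint32_t page, PageTableEntry *pt) {
    pt[page].frame = 0;
    pt[page].valid = 1;
}

uint32_t demand_translate(uint32_t page, uint32_t offset, PageTableEntry *pt) {
    if (!pt[page].valid)              /* valid-invalid bit is 0 */
        page_fault_handler(page, pt); /* page fault: bring the page in */
    return (pt[page].frame << 12) | offset;   /* assuming 4 KB pages */
}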