2 Virtual Memory

Virtual memory uses paging to map virtual addresses to physical addresses, allowing processes to access memory as if they had private, contiguous address spaces even though physical memory may be fragmented. A translation lookaside buffer (TLB) caches recent virtual-to-physical address translations to speed up the mapping process. A page is fetched from disk only when it is first accessed (demand paging), which avoids loading pages that are never used. Two-level page tables reduce the size of page tables for very large virtual address spaces.

Virtual Memory

View of Memory Hierarchies

[Figure: memory hierarchy pyramid -- faster at the upper levels, larger at the
lower levels. Thus far: Regs <-> Cache (instr. operands) <-> L2 Cache (blocks)
<-> Memory. Next, Virtual Memory: Memory <-> Disk (pages) <-> Tape (files).]
Memory Hierarchy: Some Facts

Level           Capacity     Access Time   Cost                 Staging Xfer Unit        Managed by
CPU Registers   100s Bytes   <10s ns       --                   Instr. Operands, 1-8 B   prog./compiler
Cache           K Bytes      10-100 ns     $.01-.001/bit        Blocks, 8-128 B          cache cntl
Main Memory     M Bytes      100 ns-1 us   $.01-.001            Pages, 512 B-4 KB        OS
Disk            G Bytes      ms            10^-3 - 10^-4 cents  Files, MBytes            user/operator
Tape            infinite     sec-min       10^-6 cents          --                       --

(Upper levels: smaller, faster, more expensive per bit; lower levels: larger, slower, cheaper.)
Virtual Memory: Motivation
• If the Principle of Locality lets caches offer (usually) the speed of cache
  memory with the size of DRAM, why not apply it recursively at the next level
  to get the speed of DRAM with the size of disk?
• Treat Memory as a "cache" for Disk!
• Share memory between multiple processes but still provide protection – don't
  let one program read/write another program's memory
• Address space – give each program the illusion that it has its own private
  memory
  – Suppose code starts at addr 0x40000000. Different processes have different
    code, both at the same address! So each program needs its own view of memory
Advantages of Virtual Memory
• Translation:
– Program can be given consistent view of memory, even though physical
memory is scrambled
– Makes multithreading reasonable (now used a lot!)
– Only the most important part of program (“Working Set”) must be in physical
memory.
– Contiguous structures (like stacks) use only as much physical memory as
necessary yet still grow later.
• Protection:
– Different threads (or processes) protected from each other.
– Different pages can be given special behavior
• (Read Only, Invisible to user programs, etc).
– Kernel data protected from User programs
– Very important for protection from malicious programs
=> Far more “viruses” under Microsoft Windows
• Sharing:
– Can map same physical page to multiple users
(“Shared memory”)
Virtual to Physical Address Translation

[Figure: Program operates in its virtual address space --(virtual address:
inst. fetch, load, store)--> HW mapping --(physical address: inst. fetch,
load, store)--> Physical memory (incl. caches)]

• Each program operates in its own virtual address space, as if it were the
  only program running
• Each is protected from the others
• OS can decide where each goes in memory
• Hardware (HW) provides the virtual -> physical mapping
Mapping Virtual Memory to Physical Memory
• Divide memory into equal-sized chunks ("pages", about 4 KB)
• Any chunk of Virtual Memory can be assigned to any chunk of Physical Memory

[Figure: virtual address space (Code, Static, Heap, Stack, growing up from 0)
mapped page-by-page onto a 64 MB Physical Memory, also addressed from 0]
Paging Organization (e.g., 1 KB pages)

• Page is the unit of mapping: the translation MAP sends each 1 KB virtual
  page to some 1 KB physical page
• Virtual Memory here: pages 0..31 at virtual addresses 0, 1024, 2048, ..., 31744
• Physical Memory here: pages 0..7 at physical addresses 0, 1024, ..., 7168
• Page is also the unit of transfer from disk to physical memory
Virtual Memory Mapping

Virtual Address = page no. | offset

• The page number indexes into the Page Table, located at the address held in
  the Page Table Base Reg
• Each Page Table entry holds a Valid bit, Access Rights, and the Physical
  Page Address
• Physical Address = Physical Page Address concatenated with the offset
  (actually concatenation, not addition)
• The Page Table itself is located in physical memory
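A minimal sketch of this lookup in Python (purely illustrative: the page-table contents, names, and dictionary representation are invented for the example -- real hardware uses packed bit fields):

```python
PAGE_SIZE = 4096  # 4 KB pages

# Hypothetical page table: virtual page number -> (valid, access rights, physical page number)
page_table = {
    0: (True, "rw", 5),
    1: (True, "r",  9),
    2: (False, None, None),  # not resident: any access causes a page fault
}

def translate(vaddr, write=False):
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into page number and offset
    valid, rights, ppn = page_table[vpn]
    if not valid:
        raise RuntimeError("page fault: page %d not in memory" % vpn)
    if write and "w" not in rights:
        raise PermissionError("protection fault: page %d is read-only" % vpn)
    # Physical address = physical page number concatenated with the offset
    return ppn * PAGE_SIZE + offset

print(translate(0x123))  # page 0, offset 0x123 -> 5*4096 + 0x123 = 20771
```

Note that the offset passes through unchanged; only the page number is remapped.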
Issues in VM Design

• What is the size of the information blocks transferred from secondary to
  main storage (M)?  => page size
  (Contrast with the physical block size on disk, i.e. sector size)
• Which region of M is to hold the new block?  => placement policy
• How do we find a page when we look for it?  => block identification
• A block of information is brought into M, M is full, and some region of M
  must be released to make room for the new block  => replacement policy
• What do we do on a write?  => write policy
• A missing item is fetched from secondary memory only on the occurrence of
  a fault  => demand load policy

[Figure: reg <-> cache <-> mem <-> disk; pages move between memory frames
and disk]
Virtual Memory Problem # 1
• Map every address => 1 extra memory access for every memory access
• Observation: since there is locality in pages of data, there must be
  locality in the virtual addresses of those pages
• Why not use a cache of virtual-to-physical address translations to make
  translation fast? (small is fast)
• For historical reasons, this cache is called a Translation Lookaside
  Buffer, or TLB
Memory Organization with TLB
• TLBs are usually small, typically 128 - 256 entries
• Like any other cache, the TLB can be fully associative, set associative,
  or direct mapped

[Figure: Processor sends a virtual address (VA) to the TLB Lookup. On a TLB
hit, the physical address (PA) goes to the Cache; on a TLB miss, the
Translation unit walks the page table. On a cache hit, data returns to the
processor; on a cache miss, Main Memory is accessed.]
Typical TLB Format

Virtual Address | Physical Address | Dirty | Ref | Valid | Access Rights

• TLB is just a cache on the page table mappings
• TLB access time is comparable to cache access time
  (much less than main memory access time)
• Ref: used to help calculate LRU on replacement
• Dirty: since we use write back, we need to know whether
  or not to write the page to disk when it is replaced
What if not in TLB
• Option 1: Hardware checks page table and loads
new Page Table Entry into TLB
• Option 2: Hardware traps to OS, up to OS to
decide what to do
• MIPS follows Option 2: Hardware knows
nothing about page table format
TLB Miss
• If the address is not in the TLB, MIPS traps to the operating system
• The operating system knows which program caused the TLB fault (or page
  fault), and knows which virtual address was requested

  valid  virtual  physical
    1       2        9
TLB Miss: If data is in Memory
• We simply add the entry to the TLB, evicting an old entry from the TLB

  valid  virtual  physical
    1       7       32
    1       2        9
What if data is on disk?
• We load the page off the disk into a free block of memory, using a DMA
  transfer
  – Meanwhile we switch to some other process waiting to be run
• When the DMA is complete, we get an interrupt and update the process's
  page table
  – So when we switch back to the task, the desired data will be in memory
What if the memory is full?
• We pick the least recently used block of memory (writing it back to disk
  first if dirty) and load the new page off the disk into it, using a DMA
  transfer
  – Meanwhile we switch to some other process waiting to be run
• When the DMA is complete, we get an interrupt and update the process's
  page table
  – So when we switch back to the task, the desired data will be in memory
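The fault-handling steps on the last two slides can be sketched together (an illustrative model only: the frame count, names, and simple LRU list are invented; DMA and the process switch are reduced to comments):

```python
from collections import OrderedDict

NUM_FRAMES = 2                 # tiny physical memory for illustration
frames = OrderedDict()         # virtual page -> frame number, in LRU order
free_frames = list(range(NUM_FRAMES))

def access(page):
    """Touch a virtual page, faulting it in from 'disk' if necessary."""
    if page in frames:
        frames.move_to_end(page)   # hit: just refresh LRU order
        return frames[page]
    if free_frames:                # fault, but memory is not full
        frame = free_frames.pop()
    else:                          # fault and memory full: evict the LRU page
        _victim, frame = frames.popitem(last=False)
        # (a real OS would write the victim back to disk here if dirty)
    # The DMA transfer from disk would happen here; we would run another
    # process until the completion interrupt, then update the page table.
    frames[page] = frame
    return frame

access(10); access(11)   # fills both frames
access(12)               # memory full: evicts page 10 (least recently used)
print(10 in frames)      # -> False
```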
Virtual Memory Problem # 2
• Page Table too big!
  – 4 GB Virtual Memory ÷ 4 KB pages
    => ~1 million Page Table Entries
    => 4 MB just for the Page Table of 1 process;
       25 processes => 100 MB for Page Tables!
• Variety of solutions trade off the memory size of the mapping function
  against slower translation on a TLB miss
  – Make the TLB large enough and highly associative so we rarely miss on
    address translation
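The arithmetic behind those numbers, assuming 4-byte page-table entries (an assumption; the slide's 4 MB figure implies this entry size):

```python
VIRTUAL_SPACE = 4 * 2**30   # 4 GB of virtual address space (32-bit)
PAGE_SIZE     = 4 * 2**10   # 4 KB pages
PTE_SIZE      = 4           # assumed: 4 bytes per Page Table Entry

entries = VIRTUAL_SPACE // PAGE_SIZE    # one entry per virtual page
table_bytes = entries * PTE_SIZE        # page table size for one process

print(entries)                    # -> 1048576 (~1 million entries)
print(table_bytes // 2**20)       # -> 4 (MB per process)
print(25 * table_bytes // 2**20)  # -> 100 (MB for 25 processes)
```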
Two Level Page Tables

[Figure: a small Super Page Table (always resident) points to 2nd-level Page
Tables; only the 2nd-level tables covering regions actually in use (Code,
Static, Heap, ..., Stack) need exist. Virtual Memory regions are mapped
through them onto the 64 MB Physical Memory.]
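A sketch of the two-level walk (illustrative split and names; the classic 32-bit layout uses 10 bits for each table index plus a 12-bit offset with 4 KB pages):

```python
PAGE_SIZE = 4096

# Hypothetical tables. Only 2nd-level tables for regions in use exist.
second_level_for_code = {0: 7}             # middle 10 bits -> physical page
super_table = {0: second_level_for_code}   # top 10 bits -> 2nd-level table

def translate(vaddr):
    """Two-level translation: super table, then 2nd-level table, then offset."""
    top    = (vaddr >> 22) & 0x3FF   # index into the super page table
    middle = (vaddr >> 12) & 0x3FF   # index into the 2nd-level page table
    offset = vaddr & 0xFFF
    second = super_table.get(top)
    if second is None or middle not in second:
        raise RuntimeError("page fault")
    return second[middle] * PAGE_SIZE + offset

print(hex(translate(0x00000123)))  # pages (0, 0) -> physical page 7 -> 0x7123
```

Unused regions of the virtual address space cost nothing beyond one empty slot in the small super table, which is what makes the representation compact.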
Summary
• Apply Principle of Locality Recursively
• Reduce Miss Penalty? add a (L2) cache
• Manage memory to disk? Treat as cache
– Included protection as bonus, now critical
– Use Page Table of mappings
vs. tag/data in cache
• Virtual memory to Physical Memory
Translation too slow?
– Add a cache of Virtual to Physical Address
Translations, called a TLB
Summary
• Virtual Memory allows protected sharing of memory between processes, with
  less swapping to disk and less fragmentation than base/bound or swapping
  whole processes
• Spatial Locality means the Working Set of pages is all that must be in
  memory for a process to run fairly well
• TLB reduces the performance cost of VM
• Need a more compact representation to reduce the memory cost of a simple
  1-level page table (especially for 32- to 64-bit addresses)
