2 Virtual Memory
Thus far: blocks move between the L2 cache and memory. Next: virtual memory, where pages move between memory and disk, and files between disk and tape at the lowest level of the hierarchy.
Memory Hierarchy: Some Facts

| Level       | Capacity   | Access Time   | Cost                    | Staging Xfer Unit          | Managed By     |
| Registers   | 100s bytes | <10s ns       |                         | Instr. operands, 1-8 bytes | prog./compiler |
| Cache       | K bytes    | 10-100 ns     | $.01-.001/bit           | Blocks, 8-128 bytes        | cache cntl     |
| Main Memory | M bytes    | 100 ns - 1 us | $.01-.001               | Pages, 512-4K bytes        | OS             |
| Disk        | G bytes    | ms            | 10^-3 - 10^-4 cents     | Files, Mbytes              | user/operator  |
| Tape        | "infinite" | sec-min       | 10^-6                   | Larger                     | user/operator  |

(Upper levels are faster and smaller; lower levels are larger and cheaper per bit.)
Virtual Memory: Motivation
• If the Principle of Locality allows caches to offer (usually) the speed of cache memory with the size of DRAM memory, then why not apply it recursively at the next level to get the speed of DRAM memory with the size of disk memory?
• Treat Memory as “cache” for Disk !!!
• Share memory between multiple processes but still provide protection: don’t let one program read/write the memory of another
• Address space: give each program the illusion that it has its own private memory
– Suppose code starts at addr 0x40000000. But different processes have different code, both at the same address! So each program has a different view of memory
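The address-space illusion can be sketched with a toy model (the table layout and frame numbers here are illustrative, not any real OS's): each process has its own page table, so the same virtual address in two processes maps to different physical locations.

```python
# Toy per-process page tables (illustrative numbers): both processes
# "see" code at virtual address 0x40000000, but each process's page
# table sends that page to a different physical frame.

PAGE_SIZE = 1024  # 1 KB pages, matching the paging example in these notes

page_table_A = {0x100000: 7}   # process A: virtual page -> physical frame
page_table_B = {0x100000: 12}  # process B: same virtual page, different frame

def translate(page_table, vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)       # split address into page + offset
    return page_table[vpn] * PAGE_SIZE + offset  # swap page number for frame number

va = 0x40000000 + 4            # the same virtual address in both processes
pa_A = translate(page_table_A, va)
pa_B = translate(page_table_B, va)
assert pa_A != pa_B            # different physical memory behind the same VA
```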
Advantages of Virtual Memory
• Translation:
– Program can be given a consistent view of memory, even though physical memory is scrambled
– Makes multithreading reasonable (now used a lot!)
– Only the most important part of the program (the “Working Set”) must be in physical memory
– Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later
• Protection:
– Different threads (or processes) protected from each other
– Different pages can be given special behavior
• (Read Only, Invisible to user programs, etc.)
– Kernel data protected from User programs
– Very important for protection from malicious programs
=> Far more “viruses” under Microsoft Windows
• Sharing:
– Can map the same physical page to multiple users (“Shared memory”)
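The sharing point can be sketched in the same toy page-table model (frame and page numbers are made up): two processes map a virtual page onto the same physical frame, so a store by one is visible to the other.

```python
# Sketch of "shared memory" in a toy model: two page tables point
# their (possibly different) virtual pages at the SAME physical frame.

PAGE_SIZE = 1024
physical_memory = bytearray(64 * PAGE_SIZE)  # 64 physical frames

page_table_A = {5: 9}   # process A maps virtual page 5 -> frame 9
page_table_B = {8: 9}   # process B maps virtual page 8 -> the same frame 9

def store(pt, vaddr, value):
    vpn, off = divmod(vaddr, PAGE_SIZE)
    physical_memory[pt[vpn] * PAGE_SIZE + off] = value

def load(pt, vaddr):
    vpn, off = divmod(vaddr, PAGE_SIZE)
    return physical_memory[pt[vpn] * PAGE_SIZE + off]

store(page_table_A, 5 * PAGE_SIZE + 100, 42)       # A writes through its mapping...
assert load(page_table_B, 8 * PAGE_SIZE + 100) == 42  # ...and B sees the value
```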
Virtual to Physical Address Translation
[Diagram: the program operates in its virtual address space; every virtual address it issues (instruction fetch, load, store) passes through the HW mapping to become a physical address seen by physical memory, including the caches. Example: 64 MB of physical memory, with the program's Code, Static data, and Heap laid out upward from address 0.]
Paging Organization (eg: 1KB Page)
[Diagram: virtual pages reside in memory or on disk and map onto physical frames, just as blocks map into the cache and operands into registers at the upper levels of the hierarchy.]
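With 1 KB pages, a virtual address splits into a virtual page number (the upper bits) and a 10-bit page offset; translation replaces the page number with a frame number and leaves the offset unchanged. A minimal sketch of the split:

```python
# Address split for 1 KB pages: 10 offset bits, remaining bits are
# the virtual page number (VPN).

PAGE_SIZE = 1024
OFFSET_BITS = 10   # 2**10 = 1024

def split(vaddr):
    """Return (virtual page number, offset within page)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

vpn, offset = split(0x1234)        # 0x1234 = 4660 decimal
assert (vpn, offset) == (4, 564)   # byte 4660 lives at offset 564 of page 4
```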
Virtual Memory Problem # 1
• Mapping every address means 1 extra memory access (the page-table lookup) for every memory access
• Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages
• Why not use a cache of virtual-to-physical address translations to make translation fast? (small is fast)
• For historical reasons, this cache is called a Translation Lookaside Buffer, or TLB
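A TLB is just a small cache of recent translations. A minimal sketch (LRU replacement is an assumption here; real TLBs vary) shows how locality in page references turns into TLB hits:

```python
# Minimal TLB sketch: a small LRU cache of VPN -> frame translations.
# A hit skips the page-table walk; a miss performs it and refills the TLB.

from collections import OrderedDict

class TLB:
    def __init__(self, entries=128):       # typical size: 128-256 entries
        self.entries = entries
        self.cache = OrderedDict()         # vpn -> frame, in LRU order
        self.hits = self.misses = 0

    def lookup(self, vpn, page_table):
        if vpn in self.cache:
            self.hits += 1
            self.cache.move_to_end(vpn)    # refresh LRU position
            return self.cache[vpn]
        self.misses += 1
        frame = page_table[vpn]            # the slow page-table walk
        if len(self.cache) >= self.entries:
            self.cache.popitem(last=False) # evict least recently used entry
        self.cache[vpn] = frame
        return frame

page_table = {vpn: vpn + 100 for vpn in range(1000)}
tlb = TLB(entries=4)
for vpn in [1, 2, 1, 1, 3, 2]:             # repeated pages hit in the TLB
    tlb.lookup(vpn, page_table)
assert (tlb.hits, tlb.misses) == (3, 3)
```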
Memory Organization with TLB
• TLBs usually small, typically 128 - 256 entries
[Diagram: the processor issues a virtual address (VA) to the TLB lookup; on a hit, the resulting physical address (PA) goes to the cache and, on a cache miss, to main memory, which returns the data; on a TLB miss, the full translation is performed and the TLB is refilled.]
Typical TLB Format
| Virtual Address | Physical Address | Dirty | Ref | Valid | Access Rights |
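One way to represent such an entry in code (field names and the rights encoding are assumptions for illustration): a tag, a frame number, and the Dirty/Ref/Valid/Access-Rights bits, updated on each access.

```python
# Sketch of a TLB entry with the status bits from the format above.
# The "rights" string encoding ("R", "RW", ...) is an illustrative choice.

from dataclasses import dataclass

@dataclass
class TLBEntry:
    vpn: int              # virtual page number (the tag)
    frame: int            # physical frame number
    valid: bool = False
    ref: bool = False     # set on any access (for replacement policy)
    dirty: bool = False   # set on a write (page must be written back)
    rights: str = "RW"    # access rights

def access(entry, vpn, write=False):
    """Return the frame on a valid, permitted hit; None otherwise."""
    if not entry.valid or entry.vpn != vpn:
        return None                   # TLB miss
    if write and "W" not in entry.rights:
        return None                   # protection violation -> trap in real HW
    entry.ref = True
    if write:
        entry.dirty = True
    return entry.frame

e = TLBEntry(vpn=4, frame=9, valid=True, rights="RW")
assert access(e, 4, write=True) == 9 and e.dirty
```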
Summary
• Apply the Principle of Locality recursively
• Reduce miss penalty? Add a (L2) cache
• Manage memory to disk? Treat memory as a cache
– Included protection as a bonus, now critical
– Use a Page Table of mappings vs. tag/data in a cache
• Virtual-to-physical address translation too slow?
– Add a cache of virtual-to-physical address translations, called a TLB
Summary
• Virtual Memory allows protected sharing of memory between processes, with less swapping to disk and less fragmentation than always-swap or base/bound schemes
• Spatial locality means the Working Set of pages is all that must be in memory for a process to run fairly well
• TLB reduces the performance cost of VM
• Need a more compact representation to reduce the memory cost of a simple 1-level page table (especially for 32- to 64-bit addresses)
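The size problem can be seen with a little arithmetic (a two-level split with 10+10 index bits is one common choice, used here as an assumption): a flat table has an entry for every virtual page, while a two-level table only allocates inner tables for regions actually in use.

```python
# Why a flat 32-bit page table is too big, and how a two-level table
# (10-bit outer index + 10-bit inner index, an illustrative split) helps.

PAGE_BITS = 12                    # 4 KB pages
VPN_BITS = 32 - PAGE_BITS         # 20 bits of virtual page number
flat_entries = 2 ** VPN_BITS
assert flat_entries == 1_048_576  # ~1M entries per process, even for tiny programs

def two_level_entries(pages_used):
    """Total entries: one 1024-entry outer table + one inner table per
    1024 contiguous pages actually used (best case)."""
    inner_tables = -(-pages_used // 1024)   # ceiling division
    return 1024 + inner_tables * 1024

assert two_level_entries(16) == 2048        # small program: 2K entries, not 1M
```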