
CHAMELI DEVI GROUP OF INSTITUTIONS

Computer Organization & Architecture (Unit 4)

Rupanshi Patidar
Assistant Professor
CI Department
Main memory

A memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. Generally, memory/storage is classified into two categories:

 Volatile Memory: This loses its data when power is switched off.
 Non-Volatile Memory: This is permanent storage and does not lose any data when power is switched off.
Main memory

Types of Memory Hierarchy

This Memory Hierarchy Design is divided into 2 main types:
 External Memory or Secondary Memory: Comprising Magnetic Disk, Optical Disk, and Magnetic Tape, i.e. peripheral storage devices that are accessible by the processor via an I/O module.
 Internal Memory or Primary Memory: Comprising Main Memory, Cache Memory & CPU registers. This is directly accessible by the processor.
Main memory
Main memory
 The main memory is the central storage unit. It is an essential
component of a digital computer since it stores data and
programs.

It is of two types:
 RAM (Random Access Memory)
 ROM (Read Only Memory)
Main memory
RAM

It is a volatile memory: it holds information only while the power supply is on. If the power supply fails or is interrupted, all data and information in this memory are lost. RAM is used when booting up or starting the computer; it temporarily stores the programs and data to be executed by the processor.
RAM (Random Access Memory)

RAM is of two types:


 SRAM (Static RAM): SRAM uses transistors, and its circuits are capable of retaining their state as long as power is applied. This memory consists of a number of flip-flops, each storing 1 bit. It has a shorter access time and hence is faster.
 DRAM (Dynamic RAM): DRAM uses capacitors and transistors and stores data as a charge on the capacitors. It contains thousands of memory cells. The charge on each capacitor must be refreshed every few milliseconds. This memory is slower than SRAM.
ROM (Read Only Memory)
 It is a non-volatile memory. Non-volatile memory retains information even when the power supply fails or is interrupted.
 ROM is used to store information that is used to operate the system.
 As its name, read-only memory, suggests, we can only read the programs and data stored on it.
 It contains electronic fuses that can be programmed for a specific piece of information.
 The information in ROM is stored in binary format. It is also known as permanent memory.
ROM (Read Only Memory): Types

 MROM (Masked ROM): Hard-wired devices with a pre-programmed collection of data or instructions were the first ROMs. Masked ROMs are a low-cost type of ROM that works this way.

 PROM (Programmable Read-Only Memory): This read-only memory can be modified once by the user. The user buys a blank PROM and uses a PROM programmer to write the required contents into it. Its contents cannot be erased once written.
ROM (Read Only Memory): Types

 EPROM (Erasable Programmable Read-Only Memory): EPROM is an extension of PROM in which the contents can be erased by exposing the chip to ultraviolet light for roughly 40 minutes.

 EEPROM (Electrically Erasable Programmable Read-Only Memory): Here the written contents can be erased electrically. An EEPROM can be erased and reprogrammed up to about 10,000 times. Erasing and programming take very little time, roughly 4-10 ms (milliseconds). Any area of an EEPROM can be wiped and reprogrammed selectively.
2. Secondary Memory
 Primary memory has limited storage capacity and is volatile.

 Secondary memory overcomes this limitation by providing permanent storage of data in bulk.

 Secondary memory, also termed external memory, refers to the various storage media on which a computer can store data and programs.

 The secondary storage media can be fixed or removable. Fixed storage media is an internal storage medium, like a hard disk, that is fixed inside the computer.

 A storage medium that is portable and can be taken outside the computer is a removable storage medium.

2. Secondary Memory
 It is also known as auxiliary memory
and backup memory.
 It is used to store a large amount of data or
information.
 The data or information stored in secondary memory
is permanent.
 A CPU cannot access secondary memory directly.
 The data/information from the auxiliary memory is
first transferred to the main memory, and then the
CPU can access it.
Characteristics of Secondary Memory

 It is a slow memory but reusable.


 It is a reliable and non-volatile memory.
 It is cheaper than primary memory.
 The storage capacity of secondary memory is large.
 A computer system can run without secondary
memory.
 In secondary memory, data is stored permanently
even when the power is off.
Uses of Secondary Media:
 Permanent Storage: Primary Memory (RAM) is volatile, i.e. it
loses all information when the electricity is turned off, so in
order to secure the data permanently in the device,
Secondary storage devices are needed.

 Portability: Storage media like CDs and flash drives can be used to transfer data from one device to another.
Fixed and Removable Storage
Fixed Storage-
 Fixed storage is an internal media device that is used by a
computer system to store data, and usually, these are
referred to as the Fixed disk drives or Hard Drives.

 Fixed storage devices are not literally fixed; they can be removed from the system for repair work, maintenance purposes, upgrades, etc.
Fixed Storage
Types of fixed storage:

 Internal flash memory (rare)

 SSD (solid-state disk) units

 Hard disk drives (HDD)


Removable Storage-
 Removable storage is an external media device that is used by
a computer system to store data, and usually, these are
referred to as the Removable Disks drives or the External
Drives.
 Removable storage is any type of storage device that can be
removed/ejected from a computer system while the system is
running.
 Removable storage makes it easier for a user to transfer data
from one computer system to another.
 In terms of storage, the main benefit of removable disks is that they can provide the fast data transfer rates associated with storage area networks (SANs).
Removable Storage-

Types of Removable Storage:

 Optical discs (CDs, DVDs, Blu-ray discs)


 Memory cards
 Floppy disks
 Magnetic tapes
 Disk packs
 Paper storage (punched tapes, punched cards)
Magnetic Tape
 Magnetic drums, magnetic tape, and magnetic disks are types of magnetic memory.
 These memories store data using magnetic properties.

Magnetic tape memory: Magnetic tapes are used in many organizations to save data files. Magnetic tapes use a read-write mechanism, which writes data to or reads data from the tape. Tapes save data sequentially: to find a record, the device must start searching at the beginning and check each record until the desired information is found.
Magnetic Tape
 Magnetic tape is a low-cost medium for storage because it can save a huge number of binary digits, bytes, or frames on each inch of tape. The benefits of magnetic tape include large storage capacity, low cost, high data density, a fast transfer rate, flexibility, and ease of use.

 Magnetic tape units can be stopped, started to move forward or in reverse, or rewound. However, they cannot be started or stopped fast enough between individual characters.
Magnetic Tape
Application Areas of Magnetic Tapes

 Serial or sequential processing.

 Backing up data on tape is very cheap.

 It is applicable for the transfer of data between multiple


machines.

 It is suitable for the storage of a large volume of data.


Advantages of Magnetic Tapes
 Cost − Magnetic tape is one of the low-cost storage media.
Therefore, backing up data on tape is very cheap.

 Storage capacity − It is very large.

 Portability − It is easily portable.

 Reusable − Specific data can be erased and new data saved in the same place; therefore the tape can be reused.
Disadvantages of Magnetic Tapes
 Access Time − Accessing a record requires accessing all the
records before the required record. So access time is very large
in magnetic tape.

 Non-flexibility − Magnetic tape is not flexible.

 Transmission Speed − The data transfer rate of magnetic tape is moderate.

 Vulnerable to damage − Magnetic tapes are highly vulnerable


to damage from dust or careless handling.

 Non-human readable − Data stored on it is not in human-readable form.
Disk
 A hard disk is a memory storage device that looks like this:
Disk
Disk
 A disk pack consists of one or more platters.
 Each platter contains concentric circles called tracks.
 These tracks are further divided into sectors, which are the smallest divisions of the disk.
Disk
 A cylinder is formed
by combining the
tracks at a given
radius of a disk pack.
Disk
 There is a mechanical arm called the Read/Write head.
 It is used to read from and write to the disk.
 The head must reach a particular track and then wait for the platter to rotate.
 The rotation causes the required sector of the track to come under the head.
 Each platter has 2 surfaces, top and bottom, and both surfaces are used to store data.
 Each surface has its own read/write head.
Disk
Disk

The Read-Write (R-W) head moves over the rotating hard disk. It is this Read-Write head that performs all the read and write operations on the disk, and hence the position of the R-W head is a major concern. To perform a read or write operation on a memory location, we need to place the R-W head over that position.
Disk Performance Parameters
 The time taken by the disk to complete an I/O request is called the disk service time or disk access time.
 Components that contribute to the service time are-
Disk
 Seek time – The time taken by the R-W head to reach the desired track from its current position.
 Rotational latency – The time taken by the desired sector to come under the R-W head.
 Data transfer time – The time taken to transfer the required amount of data. It depends upon the rotational speed.
 Controller overhead – The overhead imposed by the disk controller, the device that manages the disk.
 Controller time – The processing time taken by the controller.
 Average access time – Seek time + average rotational latency + data transfer time + controller time.
 Queuing delay – The time spent waiting for the disk to become free.
Disk: Important Formulas
1. Disk Access Time-
Disk access time= Seek time + Rotational delay + Transfer time +
Controller overhead + Queuing delay
2. Average Disk Access Time-
Average disk access time = Average seek time + Average
rotational delay + Transfer time + Controller overhead +
Queuing delay
3. Average Seek Time-
Average seek time ≈ 1 / 3 x Time taken for one full stroke (assuming uniformly random start and end tracks)
A simpler approximation averages the shortest and longest seeks:
Average seek time
= { Time taken to move from track 1 to track 1 + Time taken to move from track 1 to last track } / 2
= { 0 + (k-1)t } / 2
= (k-1)t / 2
Disk: Important Formulas
4. Average Rotational Latency-
Average rotational latency = 1 / 2 x Time taken for one full rotation
Average rotational latency may also be referred as-
 Average rotational delay
 Average latency
 Average delay

5. Capacity Of Disk Pack-


Capacity of a disk pack
= Total number of surfaces x Number of tracks per surface x Number of
sectors per track x Storage capacity of one sector

6. Formatting Overhead-
Formatting overhead
= Number of sectors x Overhead per sector
Disk: Important Formulas
7. Formatted Disk Space-
Formatted disk space
= Total disk space or capacity – Formatting overhead

8. Recording Density Or Storage Density-


Storage density of a track
= Capacity of the track / Circumference of the track

9. Track Capacity-
Capacity of a track
= Recording density of the track x Circumference of the track
Disk: Important Formulas
10. Data Transfer Rate-
Data transfer rate
= Number of heads x Bytes that can be read in one full rotation x
Number of rotations in one second
OR
Data transfer rate
= Number of heads x Capacity of one track x Number of
rotations in one second
11. Tracks Per Surface-
Total number of tracks per surface
= (Outer radius – Inner radius) / Inter track gap
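The formulas above can be sketched in Python (a minimal illustration; the function and parameter names are my own, not from the slides):

```python
def disk_capacity(surfaces, tracks_per_surface, sectors_per_track, bytes_per_sector):
    # Formula 5: capacity of a disk pack
    return surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector

def avg_rotational_latency_s(rpm):
    # Formula 4: half the time of one full rotation, in seconds
    return 0.5 * (60.0 / rpm)

def data_transfer_rate(heads, track_capacity_bytes, rpm):
    # Formula 10: heads x capacity of one track x rotations per second
    return heads * track_capacity_bytes * (rpm / 60.0)

def formatted_space(total_capacity, num_sectors, overhead_per_sector):
    # Formulas 6 and 7: total capacity minus formatting overhead
    return total_capacity - num_sectors * overhead_per_sector
```

These helpers mirror the formulas one-to-one, so each can be checked against the worked examples later in the slides.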
Disk
Note: Average rotational latency is usually taken as 1/2 x (time for one full rotation).
 In questions, if the seek time and controller time are not
mentioned, take them to be zero.

 If the amount of data to be transferred is not given, assume that no


data is being transferred. Otherwise, calculate the time taken to
transfer the given amount of data.

 The average rotational latency is taken when the current position of


the R-W head is not given. Because the R-W may be already present
at the desired position or it might take a whole rotation to get the
desired sector under the R-W head. But, if the current position of
the R-W head is given then the rotational latency must be
calculated.
Disk: For Example

Consider a hard disk with:


 4 surfaces
 64 tracks/surface
 128 sectors/track
 256 bytes/sector
What is the capacity of the hard disk?
 Disk capacity = surfaces * tracks/surface * sectors/track *
bytes/sector
Disk capacity = 4 * 64 * 128 * 256
Disk capacity = 8 MB
Disk: For Example

The disk is rotating at 3600 RPM, what is the data transfer


rate?
 60 sec -> 3600 rotations
1 sec -> 60 rotations
Data transfer rate = number of rotations per second * track
capacity * number of surfaces (since 1 R-W head is used for
each surface)
Data transfer rate = 60 * 128 * 256 * 4 = 7,864,320 bytes/sec
Disk

The disk is rotating at 3600 RPM, what is the average access


time?
 Since seek time, controller time and the amount of data to be
transferred is not given, we consider all three terms as 0.
Therefore, Average Access time = Average rotational delay
Rotational latency => 60 sec -> 3600 rotations
1 sec -> 60 rotations
Rotational latency = (1/60) sec = 16.67 msec.
Average Rotational latency = (16.67)/2
= 8.33 msec.
Average Access time = 8.33 msec.
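The three worked answers above can be checked with plain Python arithmetic (values taken directly from the example, no new assumptions):

```python
surfaces, tracks, sectors, sector_bytes = 4, 64, 128, 256

capacity = surfaces * tracks * sectors * sector_bytes
print(capacity, capacity // (1024 * 1024))        # 8388608 bytes = 8 MB

rotations_per_sec = 3600 / 60                      # 3600 RPM -> 60 rotations/s
track_capacity = sectors * sector_bytes            # 32768 bytes per track
rate = rotations_per_sec * track_capacity * surfaces
print(rate)                                        # 7,864,320 bytes/s

avg_latency_ms = 0.5 * (1000 / rotations_per_sec)  # half of one full rotation
print(round(avg_latency_ms, 2))                    # 8.33 ms
```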
Optical Storage
 Optical memory is an electronic storage medium that uses a
laser beam to store and retrieve the data. If we classify the
memory system then optical memory comes under the
external memory in the computer system. Optical memory
can be classified into many types.
Optical Storage

Types of Optical Memory:


 Compact Disk (CD) Technology
 CD-ROM
 CD-Recordable (CD-R)
 CD-Rewritable (CD-RW)
 Digital Versatile Disk (DVD)
Compact Disk (CD) Technology
The first practical application of this optical technology was the compact disk (CD). The compact disk is a non-erasable disk on which data is imprinted using a laser beam. Initially, CDs were designed to hold 60-75 minutes of audio information, which corresponds to roughly 700 MB of data. Since then, the development of lower-cost and higher-capacity devices has continued.
CD-ROM
Compact Disk Read-Only Memory (CD-ROM) is a read-only memory used to store computer data. Earlier, CDs were used to store audio and video, but since CDs store data in digital form, they can also be used to store computer data.
CD-Recordable (CD-R)
 CD-Recordable (CD-R) was the first kind of compact disk that could easily be recorded by any computer user. This disk has a shiny spiral track similar to that of a CD or CD-ROM. This shiny track is covered with an organic dye at the time of manufacturing.
 To record data on a CD-R, the disk is inserted into the CD-R drive and a laser beam is focused on it, burning pits into the dye. The burned spots become opaque, while the unburnt areas remain shiny. To retrieve the information, a low-power laser beam is focused on the disk: the opaque spots reflect light with low intensity and the shiny parts reflect light with high intensity.
CD-Rewritable (CD-RW)
 This CD can be recorded multiple times, which means the user can write and erase data on a CD-RW many times. This is because, instead of an organic dye, an alloy of silver, indium, antimony, and tellurium is used. The melting point of this alloy is 500 °C.

 The alloy shows interesting behaviour when it is heated and cooled. When the alloy is heated above its melting point and then cooled, it turns into an amorphous state that absorbs light.

 If the alloy is instead heated to about 200 °C and held at that temperature for a certain period, a process called annealing takes place, which turns the alloy into a crystalline state. In this state the alloy allows light to pass through it.

 So pits can be created by heating selected spots above the melting point, and the remaining areas between the pits are lands. The stored data is read using the difference in reflection between the pits and the lands.
Digital Versatile Disk (DVD)
 DVD technology was first introduced in 1996 and has the same appearance as a CD. The difference is in storage size: the DVD has much larger storage than the CD, achieved through several changes in the design of the DVD.
 The laser beam used to imprint data on a DVD has a shorter wavelength than the laser beam used for CDs. The shorter wavelength allows the light to focus on a smaller spot.
Cache Memory
 Cache Memory is a special very high-speed memory. The cache is a smaller, faster memory that stores copies of data from frequently used main memory locations. A CPU has several independent caches, which store instructions and data. The most important use of cache memory is to reduce the average time to access data from the main memory.
Characteristics of Cache Memory
 Cache memory is an extremely fast memory type that acts as
a buffer between RAM and the CPU.
 Cache Memory holds frequently requested data and
instructions so that they are immediately available to the CPU
when needed.
 Cache memory is costlier than main memory or disk memory
but more economical than CPU registers.
 Cache Memory is used to speed up and synchronize with a
high-speed CPU.
Cache Memory
Cache Memory
Cache Memory
Levels of Memory
 Level 1 or Registers: Registers hold the data that the CPU is immediately processing. Commonly used registers include the accumulator, program counter, and address register.
 Level 2 or Cache memory: A fast memory with a short access time, where data is temporarily stored for faster access.
 Level 3 or Main Memory: The memory on which the computer currently works. It is comparatively small in size, and once power is off, data no longer stays in this memory.
 Level 4 or Secondary Memory: External memory that is not as fast as main memory, but where data stays permanently.
Types of Caches

 L1 Cache: Cache built into the CPU itself is known as L1 or Level 1 cache. This cache holds the most recent data, so when the data is required again, the microprocessor inspects this cache first and does not need to go to main memory or the Level 2 cache. The main idea behind this is "locality of reference": a location just accessed by the CPU has a higher probability of being required again.
Types of Caches

 L2 Cache: This cache, also known as Level 2 cache, traditionally resides on a separate chip next to the CPU. It stores recently used data that cannot be found in the L1 cache. Some CPUs have both L1 and L2 cache built in and designate the separate cache chip as the Level 3 (L3) cache.
Cache Mapping
 Direct Mapping

 Associative Mapping

 Set-Associative Mapping
1. Direct Mapping
 The simplest technique, known as direct mapping, maps each block of main memory to only one possible cache line; that is, each memory block is assigned to a specific line in the cache.
 If a line is already occupied by a memory block when a new block needs to be loaded, the old block is evicted. An address is split into two parts: an index field and a tag field.
 The cache stores the tag alongside the data; the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
1. Direct Mapping
1. Direct Mapping
 For purposes of cache access, each main memory address can
be viewed as consisting of three fields. The least significant w
bits identify a unique word or byte within a block of main
memory. In most contemporary machines, the address is at
the byte level.
 The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s-r bits (the most significant portion) and a line field of r bits. This line field identifies one of the m = 2^r lines of the cache. The line field provides the index bits in direct mapping.
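The tag/line/word split described above can be expressed as bit operations (an illustrative sketch; the function name and sample widths are assumptions, not from the slides):

```python
# Extract the tag / line / word fields of a memory address in direct mapping.
# r = number of line (index) bits, w = number of word bits; the tag is the rest.
def direct_map_fields(address, r, w):
    word = address & ((1 << w) - 1)           # least significant w bits
    line = (address >> w) & ((1 << r) - 1)    # next r bits select the cache line
    tag = address >> (w + r)                  # remaining (s - r) bits form the tag
    return tag, line, word
```

For example, with r = 3 and w = 2, the address 0b10110111 splits into tag 0b101, line 0b101, and word 0b11.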
1. Direct Mapping
2. Associative Mapping
 In this type of mapping, associative memory is used to store both the content and the address of each memory word. Any block can go into any line of the cache.
 This means that the word-id bits identify which word in the block is needed, while the tag consists of all the remaining bits.
 This enables the placement of any word at any place in the cache memory.
 It is considered the fastest and most flexible mapping form. In associative mapping, there are no index bits.
2. Associative Mapping
3. Set-Associative Mapping
 This form of mapping is an enhanced form of direct mapping where
the drawbacks of direct mapping are removed. Set associative
addresses the problem of possible thrashing in the direct mapping
method. It does this by saying that instead of having exactly one line
that a block can map to in the cache, we will group a few lines
together creating a set.
 A block in memory can then map to any one of the lines of a specific set. Set-associative mapping allows words with the same index address to reside in two or more places in the cache. It combines the best of the direct and associative cache mapping techniques. In set-associative mapping, the index bits are given by the set offset bits. The cache consists of a number of sets, each of which consists of a number of lines.
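The set-based address split described above can be sketched the same way as direct mapping, with the line field replaced by a set field (illustrative names and widths, not from the slides):

```python
# Split an address into tag / set / word fields for a set-associative cache.
# set_bits selects one of 2**set_bits sets; any line within that set may hold the block.
def set_assoc_fields(address, set_bits, word_bits):
    word = address & ((1 << word_bits) - 1)                  # word within the block
    set_index = (address >> word_bits) & ((1 << set_bits) - 1)  # which set
    tag = address >> (word_bits + set_bits)                  # all remaining bits
    return tag, set_index, word
```

With set_bits = 2 and word_bits = 2, the address 0b10110111 splits into tag 0b1011, set 0b01, and word 0b11.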
3. Set-Associative Mapping
3. Set-Associative Mapping
3. Set-Associative Mapping
Replacement Algorithm
In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page to replace when a new page comes in. Page replacement becomes necessary when a page fault occurs and there are no free page frames in memory.

However, another page fault would arise if the replaced page is referenced again. Hence it is important to replace a page that is not likely to be referenced in the immediate future. If no page frame is free, the virtual memory manager performs a page replacement operation to replace one of the pages in memory with the page whose reference caused the page fault.
Replacement Algorithm
Replacement Algorithm
Replacement Algorithm
 Page Fault: A page fault happens when a running program
accesses a memory page that is mapped into the virtual
address space but not loaded in physical memory.
 Since actual physical memory is much smaller than virtual
memory, page faults happen. In case of a page fault,
Operating System might have to replace one of the existing
pages with the newly needed page.
 Different page replacement algorithms suggest different ways
to decide which page to replace.
 The target for all algorithms is to reduce the number of page
faults.
Replacement Algorithm
1. First In First Out (FIFO)
2. Least Recently Used
3. Least Frequently used
4. Random Replacement Algorithm
First In First Out (FIFO)

FIFO (first-in-first-out) is also used as a cache replacement algorithm and behaves exactly as you would expect. Objects are added to the queue and evicted in the same order. Although it provides a simple and low-cost way to manage the cache, even the most-used objects are eventually evicted once they are old enough.
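The FIFO policy above can be sketched in a few lines (a minimal illustration, not a production cache; the class and method names are my own):

```python
from collections import deque

class FIFOCache:
    """Evict the oldest inserted key, regardless of how often it is used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.order = deque()   # keys in insertion order

    def get(self, key):
        return self.store.get(key)    # hits do not change eviction order

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            oldest = self.order.popleft()   # evict first-in key
            del self.store[oldest]
        if key not in self.store:
            self.order.append(key)
        self.store[key] = value
```

Note that even if a key is read frequently, it is still evicted once it becomes the oldest entry, which is exactly the weakness the text describes.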
First In First Out (FIFO)
Least recently used (LRU)
The least recently used (LRU) algorithm is one of the most
famous cache replacement algorithms and for good reason!
As the name suggests, LRU keeps the least recently used objects
at the top and evicts objects that haven't been used in a while if
the list reaches the maximum capacity.
So it's simply an ordered list where objects are moved to the top every time they're accessed, pushing other objects down.
LRU is simple and provides a good cache-hit rate for lots of use cases.
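The move-to-top behaviour above maps naturally onto Python's OrderedDict (an illustrative sketch; class and method names are assumptions):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the key that has gone longest without being accessed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```

Here the front of the ordered dict plays the role of the bottom of the list in the description above.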
Least recently used (LRU)
Least frequently used (LFU)
 The least frequently used (LFU) algorithm works similarly to LRU, except it keeps track of how many times an object was accessed instead of how recently.
 Each object has a counter recording how many times it was accessed. When the list reaches maximum capacity, the objects with the lowest counters are evicted.
 LFU has a well-known problem. Imagine an object that is repeatedly accessed for only a short period. Its counter grows far beyond the others, so it is very hard to evict this object even if it is not accessed again for a long time.
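The counter-based eviction above can be sketched as follows (illustrative only; ties are broken arbitrarily here, and real LFU implementations use more efficient structures):

```python
class LFUCache:
    """Evict the key with the lowest access counter."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.count = {}   # access counter per key

    def get(self, key):
        if key not in self.store:
            return None
        self.count[key] += 1
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.count, key=self.count.get)  # lowest counter
            del self.store[victim]
            del self.count[victim]
        self.store[key] = value
        self.count[key] = self.count.get(key, 0) + 1
```

A key read many times early on keeps a high counter forever, which demonstrates the staleness problem described above.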
Random Replacement (RR)
 This algorithm randomly selects an object when it
reaches maximum capacity. It has the benefit of not
keeping any reference or history of objects and being
very simple to implement at the same time.
Cache Performance
 When the processor needs to read or write a location in the
main memory, it first checks for a corresponding entry in the
cache.
 If the processor finds that the memory location is in the cache,
a Cache Hit has occurred and data is read from the cache.
 If the processor does not find the memory location in the
cache, a cache miss has occurred. For a cache miss, the cache
allocates a new entry and copies in data from the main
memory, then the request is fulfilled from the contents of the
cache.
 The performance of cache memory is frequently measured in
terms of a quantity called Hit ratio.
Cache Performance
Hit Ratio (h) :
The Hit ratio is nothing but a probability of getting hits out of some
number of memory references made by the CPU. So its range is 0
<= h <= 1.
 Hit Ratio (h) = Number of Hits / Total CPU references to memory
= Number of hits / ( Number of Hits + Number of Misses )

Miss Ratio:
The miss ratio is the probability of getting a miss out of some number of memory references made by the CPU.
 Miss Ratio = Number of misses / Total CPU references to memory
= Number of misses / (Number of hits + Number of misses)
 Miss Ratio = 1 – Hit Ratio (h)
Cache Performance
Average Access Time ( tavg ):
 Let tc, h, and tm denote the cache access time, the hit ratio, and the main memory access time respectively.
tavg = h x tc + (1 - h) x (tc + tm) = tc + (1 - h) x tm
 Average memory access time = Hit Time + Miss Rate x Miss Penalty

Miss Rate: The fraction of accesses that are not in the cache, i.e. (1 - h).

Miss Penalty: The additional clock cycles needed to service the miss, i.e. the extra time needed to bring the requested information into the cache from main memory in case of a miss in the cache.
Improving Cache Performance

Techniques to Reduce Average Memory Access Time

We can minimize the average memory access time by


employing the following techniques −
 By increasing the hit ratio, and minimizing miss
penalty or miss rate.
 By reducing the product of miss penalty and miss
rate, i.e. miss penalty × miss rate.
Improving Cache Performance

Techniques to Minimize Hit Time

By utilizing the following techniques, we can


significantly reduce the hit time −
 By using smaller and simpler cache memory designs.
 By implementing trace caches and pipelined cache
access.
 By avoiding time loss in address translation.
Improving Cache Performance

Techniques for Minimize Miss Penalty

The miss penalty can be reduced by using the following


techniques −
 By implementing multi-level cache memories.
 By prioritizing read misses over write misses.
 By utilizing victim caches.
Improving Cache Performance

Techniques for Minimizing Miss Rate

The miss rates can be reduced by employing the


following techniques −
 By increasing the block size of the cache memory.
 By implementing higher associativity.
 By utilizing compiler optimization.
 By using sufficient cache sizes.
Improving Cache Performance

Techniques to Reduce Product of Miss Rate and Miss


Penalty

The product of miss rate and miss penalty can be


reduced by the following techniques −
 By implementing non-blocking caches.
 By using hardware prefetching.
 By employing compiler-controlled prefetching.
Virtual Memory
 Virtual memory is a memory management technique
where secondary memory can be used as if it were a
part of the main memory. Virtual memory is a common
technique used in a computer's operating system (OS).
 Virtual memory uses both hardware and software to
enable a computer to compensate for physical
memory shortages, temporarily transferring data from
random access memory (RAM) to disk storage.
Mapping chunks of memory to disk files enables a
computer to treat secondary memory as though it
were main memory.
Virtual Memory
 Virtual memory is a valuable concept in computer
architecture that allows you to run large,
sophisticated programs on a computer even if it has
a relatively small amount of RAM
Virtual Memory
Virtual Memory
How virtual memory works
 Virtual memory uses both hardware and software to
operate. When an application is in use, data from
that program is stored in a physical address using
RAM. A memory management unit (MMU) maps the
address to RAM and automatically translates
addresses. The MMU can, for example, map a logical
address space to a corresponding physical address.
How virtual memory works
 If, at any point, the RAM space is needed for something more urgent, data can be swapped out of RAM and into virtual memory. The computer's memory manager is in charge of keeping track of the shifts between physical and virtual memory. If that data is needed again, the computer's MMU will use a context switch to resume execution.
How virtual memory works
 While copying virtual memory into physical memory, the OS divides memory into chunks with a fixed number of addresses, stored in page files or swap files. Each page is stored on disk, and when the page is needed, the OS copies it from the disk to main memory and translates the virtual addresses into real addresses.
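The translation step described above can be sketched as a page-table lookup (a hedged sketch; the page size, table layout, and function name are all assumptions):

```python
PAGE_SIZE = 4096  # bytes; an assumed page size

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical one via a page table.

    page_table maps virtual page numbers to physical frame numbers;
    a missing entry stands in for a page fault, which would trigger
    loading the page from disk.
    """
    vpn = virtual_addr // PAGE_SIZE      # virtual page number
    offset = virtual_addr % PAGE_SIZE    # position within the page
    if vpn not in page_table:
        raise LookupError("page fault: page %d not in memory" % vpn)
    frame = page_table[vpn]              # physical frame number
    return frame * PAGE_SIZE + offset
```

For example, with page 1 mapped to frame 2, virtual address 4100 (page 1, offset 4) translates to physical address 8196.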

 However, the process of swapping virtual memory to


physical is rather slow. This means using virtual memory
generally causes a noticeable reduction in performance.
Because of swapping, computers with more RAM are
considered to have better performance.
Benefits of using virtual memory

 It can handle twice as many addresses as main memory.


 It enables more applications to be used at once.
 It frees applications from managing shared memory and
saves users from having to add memory modules when RAM
space runs out.
 It has increased speed when only a segment of a program is
needed for execution.
 It has increased security because of memory isolation.
 It enables multiple larger applications to run simultaneously.
 Allocating memory is relatively inexpensive.
 It does not need external fragmentation.
 CPU use is effective for managing logical partition workloads.
 Data can be moved automatically.
Benefits of using virtual memory
Memory management hardware
Memory Hierarchy
Memory management hardware
 Our main concern here is the computer’s main or RAM memory. The cache memory is important because it boosts the speed of accessing memory, but it is managed entirely by the hardware. The rotating magnetic memory, or disk memory, is used by virtual memory management.
Memory management hardware
Memory Management Unit
 As a program runs, the memory addresses it uses to reference its data are logical addresses. The real-time translation to physical addresses is performed in hardware by the CPU’s Memory Management Unit (MMU). The MMU has two special registers that are accessed by the CPU’s control unit. Data to be sent to main memory or retrieved from memory is held in the Memory Data Register (MDR). The desired logical memory address is held in the Memory Address Register (MAR). The address translation, also called address binding, uses a memory map that is programmed by the operating system.
Memory management hardware
The job of the operating system is to load the appropriate memory map into the MMU when a process is started and to respond to occasional page faults by loading the needed memory and updating the memory map.
Memory management hardware
Before memory addresses are loaded on to the system bus, they
are translated to physical addresses by the MMU.
