COA Unit-4 - 1713428101
Rupanshi Patidar
Assistant Professor
CI Department
Main memory
It is of two types:
RAM (Random Access Memory)
ROM (Read Only Memory)
Disk: Important Formulas
6. Formatting Overhead-
Formatting overhead = Number of sectors × Overhead per sector
7. Formatted Disk Space-
Formatted disk space = Total disk space (capacity) – Formatting overhead
9. Track Capacity-
Capacity of a track = Recording density of the track × Circumference of the track
10. Data Transfer Rate-
Data transfer rate = Number of heads × Bytes that can be read in one full rotation × Number of rotations in one second
OR
Data transfer rate = Number of heads × Capacity of one track × Number of rotations in one second
11. Tracks Per Surface-
Total number of tracks per surface = (Outer radius – Inner radius) / Inter-track gap
Disk
Note: Average rotational latency is usually taken as half the time of one full rotation, i.e. (1/2) × (time for one rotation).
In questions, if the seek time and controller time are not mentioned, take them to be zero.
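As a rough illustration of how these formulas fit together, the sketch below plugs in assumed example values (16 heads, 1024 tracks per surface, 64 sectors of 512 bytes per track, 64 bytes of overhead per sector, 7200 RPM); none of these numbers come from the slides.

```python
# Hedged sketch: worked example of the disk formulas above.
# All parameter values are assumptions chosen for illustration.

heads = 16                 # number of read/write heads (= recording surfaces)
tracks_per_surface = 1024
sectors_per_track = 64
sector_size = 512          # bytes of user data per sector
overhead_per_sector = 64   # formatting overhead per sector, in bytes (assumed)
rpm = 7200                 # rotations per minute

total_sectors = heads * tracks_per_surface * sectors_per_track
total_capacity = total_sectors * (sector_size + overhead_per_sector)  # raw capacity

# 6. Formatting overhead = Number of sectors x Overhead per sector
formatting_overhead = total_sectors * overhead_per_sector

# 7. Formatted disk space = Total disk capacity - Formatting overhead
formatted_space = total_capacity - formatting_overhead

# 10. Data transfer rate = Number of heads x Capacity of one track x Rotations per second
track_capacity = sectors_per_track * sector_size
rotations_per_second = rpm / 60
transfer_rate = heads * track_capacity * rotations_per_second   # bytes per second

print(f"Formatting overhead : {formatting_overhead} bytes")
print(f"Formatted disk space: {formatted_space} bytes")
print(f"Data transfer rate  : {transfer_rate:.0f} bytes/second")
```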
The alloy shows interesting behaviour when it is heated and cooled down. When the alloy is heated above the melting point and then cooled, it turns into an amorphous state which is capable of absorbing light.
If the alloy is heated to 200°C and maintained at that temperature for a certain period, a process called annealing takes place which turns the alloy into the crystalline state. In this state the alloy allows light to pass through it.
So here the pits can be created by heating the selected spots above the melting point, and the remaining parts between the pits are lands. The stored
Digital Versatile Disk (DVD)
DVD technology was first introduced in 1996 and has the same appearance as a CD. The difference is in their storage size: the DVD has a much larger storage capacity than the CD, and this is achieved by implementing several changes in the design of the DVD.
The laser beam used to imprint data on a DVD has a shorter wavelength than the laser beam used for CDs. The shorter wavelength helps the light focus on a smaller spot.
Cache Memory
Cache memory is a special, very high-speed memory. The cache is a smaller and faster memory that stores copies of data from frequently used main memory locations. There are several independent caches in a CPU, which store instructions and data. The most important use of cache memory is to reduce the average time to access data from the main memory.
Characteristics of Cache Memory
Cache memory is an extremely fast memory type that acts as
a buffer between RAM and the CPU.
Cache Memory holds frequently requested data and
instructions so that they are immediately available to the CPU
when needed.
Cache memory is costlier than main memory or disk memory
but more economical than CPU registers.
Cache memory is used to speed up memory access and keep pace with a high-speed CPU.
Levels of Memory
Level 1 or Registers: These are memory locations inside the CPU itself, which hold the data being worked on immediately. Commonly used registers are the Accumulator, Program Counter, Address Register, etc.
Level 2 or Cache memory: It is a very fast memory with a short access time, where data is temporarily stored for faster access.
Level 3 or Main Memory: It is the memory on which the computer currently works. It is small in size, and once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory: It is external memory that is not as fast as main memory, but data stays permanently in this memory.
Types of Caches
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line; in other words, direct mapping assigns each memory block to a specific line in the cache.
If a line is already occupied by a memory block when a new block needs to be loaded, the old block is overwritten. An address is split into two parts: an index field and a tag field. The tag is stored in the cache along with the block's data, and the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level.
The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of (s – r) bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache.
In direct mapping, the line field serves as the index bits.
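A minimal sketch of this three-field split, assuming a byte-addressable memory, a 4-byte block (w = 2) and a 16-line cache (r = 4); the sizes and the example address are illustrative assumptions, not values from the slides.

```python
# Hedged sketch: splitting a main-memory address into tag / line / word
# fields for direct mapping. Block size, cache size, and the example
# address are assumptions chosen purely for illustration.

BLOCK_SIZE = 4      # bytes per block  -> w = 2 word/byte-offset bits
CACHE_LINES = 16    # lines in cache   -> r = 4 line (index) bits

w = BLOCK_SIZE.bit_length() - 1      # number of word-offset bits
r = CACHE_LINES.bit_length() - 1     # number of line (index) bits

def split_address(addr: int) -> tuple[int, int, int]:
    """Return the (tag, line, word) fields of a byte address."""
    word = addr & (BLOCK_SIZE - 1)           # least significant w bits
    line = (addr >> w) & (CACHE_LINES - 1)   # next r bits select the cache line
    tag = addr >> (w + r)                    # remaining most significant bits
    return tag, line, word

tag, line, word = split_address(0x1A7D)
print(f"tag={tag:#x}, line={line}, word={word}")
```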
2. Associative Mapping
In this type of mapping, associative memory is used to store
the content and addresses of the memory word. Any block
can go into any line of the cache.
This means that the word id bits are used to identify which
word in the block is needed, but the tag becomes all of the
remaining bits.
This enables the placement of any word at any place in the
cache memory.
It is considered the fastest and most flexible mapping form. In associative mapping the number of index bits is zero; the entire block address is used as the tag.
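A minimal sketch of a fully associative lookup under these rules: with no index bits, the tag has to be compared against every line (hardware does this comparison in parallel). The cache size, block size and data layout here are illustrative assumptions.

```python
# Hedged sketch: lookup in a fully associative cache. Because there are
# no index bits, the block address (the tag) must be compared against
# every line. Structure and sizes are illustrative assumptions.

BLOCK_SIZE = 4   # bytes per block (assumed), so the word offset is addr % 4

# Each cache line holds (tag, block_data); None means the line is empty.
cache_lines = [None] * 8

def lookup(addr: int):
    """Return the cached block for addr, or None on a miss."""
    tag = addr // BLOCK_SIZE          # all remaining bits form the tag
    for line in cache_lines:          # hardware compares all tags in parallel
        if line is not None and line[0] == tag:
            return line[1]            # hit: return the stored block
    return None                       # miss
```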
3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct mapping method. It does this by grouping a few lines together into a set, so that instead of having exactly one line that a block can map to in the cache, the block can map to any line of that set.
A block in memory can therefore map to any one of the lines of a specific set. Set-associative mapping allows two or more main memory blocks with the same index address to be present in the cache at the same time. Set-associative cache mapping combines the best of the direct and associative cache mapping techniques. In set-associative mapping the index bits are given by the set offset bits. In this case, the cache consists of a number of sets, each of which consists of a number of lines.
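A minimal sketch of the set-associative address breakdown and lookup, assuming a 2-way cache with 8 sets and 4-byte blocks; all sizes are illustrative assumptions.

```python
# Hedged sketch: address breakdown and lookup for a 2-way set-associative
# cache. All sizes are illustrative assumptions.

BLOCK_SIZE = 4    # bytes per block -> 2 word-offset bits
NUM_SETS = 8      # sets in the cache -> 3 set-index bits
WAYS = 2          # lines per set

w = BLOCK_SIZE.bit_length() - 1   # word-offset bits
s = NUM_SETS.bit_length() - 1     # set-index bits

# cache[set_index] is a list of up to WAYS (tag, block_data) entries.
cache = [[] for _ in range(NUM_SETS)]

def lookup(addr: int):
    """Return cached data for addr, or None on a miss."""
    set_index = (addr >> w) & (NUM_SETS - 1)   # set-offset bits act as the index
    tag = addr >> (w + s)                      # remaining bits are the tag
    for stored_tag, data in cache[set_index]:  # search only within the set
        if stored_tag == tag:
            return data
    return None
```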
Replacement Algorithm
In an operating system that uses paging for memory
management, a page replacement algorithm is needed to decide
which page needs to be replaced when a new page comes
in. Page replacement becomes necessary when a page fault
occurs and there are no free page frames in memory.
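The slides do not name a particular algorithm; as one common example, the sketch below shows Least Recently Used (LRU) replacement for a fixed number of frames, counting page faults over a reference string.

```python
# Hedged sketch: Least Recently Used (LRU) page replacement for a fixed
# number of frames. LRU is used here only as a common example; the slides
# do not prescribe a particular algorithm.
from collections import OrderedDict

def count_page_faults(references, num_frames):
    """Return the number of page faults for a reference string."""
    frames = OrderedDict()   # page -> None, ordered from least to most recently used
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # hit: mark page as most recently used
        else:
            faults += 1                     # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

print(count_page_faults([7, 0, 1, 2, 0, 3, 0, 4], num_frames=3))  # -> 6 faults
```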
Miss Ratio:
The miss ratio is the fraction of memory references made by the CPU that result in a miss.
Miss Ratio = Number of misses / Total CPU references to memory
           = Number of misses / (Number of hits + Number of misses)
Miss Ratio = 1 – Hit Ratio (h)
Cache Performance
Average Access Time ( tavg ):
Let tc, h and tm denote the cache access time, the hit ratio in the cache, and the main memory access time respectively.
tavg = h × tc + (1 – h) × (tc + tm) = tc + (1 – h) × tm
Average memory access time = Hit Time + Miss Rate × Miss Penalty
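A quick worked example of the formula, using assumed values tc = 2 ns, tm = 100 ns and h = 0.9 (these numbers are illustrative only):

```python
# Hedged sketch: average memory access time using the formula above.
# The timing values and hit ratio are assumptions for illustration.

t_c = 2      # cache access time in ns (assumed)
t_m = 100    # main memory access time in ns (assumed)
h = 0.9      # hit ratio (assumed)

t_avg = h * t_c + (1 - h) * (t_c + t_m)     # = t_c + (1 - h) * t_m
print(f"Average access time = {t_avg} ns")  # 1.8 + 0.1 * 102 = 12.0 ns
```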