COA Unit - IV Notes
UNIT- IV
Memory Organization: Concept of hierarchical memory organization, Semiconductor memory technologies,
Cache memory, Virtual memory, Auxiliary memory, Direct Memory Access(DMA).
Memory Organization
A memory unit is a collection of storage units or devices. The memory unit stores binary
information in the form of bits. Generally, memory/storage is classified into 2 categories:
Volatile Memory: This loses its data when power is switched off.
Non-Volatile Memory: This is permanent storage and does not lose any data when power is switched
off.
Concept of Memory Hierarchy
The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy
system consists of all storage devices contained in a computer system, from the slow auxiliary memory to the fast
main memory and to the smaller cache memory.
Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at the bottom of the
hierarchy.
The main memory occupies the central position because it is equipped to communicate directly with the CPU
and with auxiliary memory devices through Input/output processor (I/O).
When programs not residing in main memory are needed by the CPU, they are brought in from auxiliary
memory. Programs not currently needed in main memory are transferred into auxiliary memory to provide space
in main memory for the programs that are currently in use.
The cache memory is used to store program data which is currently being executed in the CPU. The approximate
access time ratio between cache memory and main memory is about 1 to 7-10.
Memory Access Methods
Each memory type is a collection of numerous memory locations. To access data from any memory, it must
first be located and then the data is read from the memory location. Following are the methods to access
information from memory locations:
1. Random Access: Main memories are random access memories, in which each memory location has a
unique address. Using this unique address any memory location can be reached in the same amount of
time in any order.
2. Sequential Access: This method allows memory access in a sequence or in order.
3. Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write
head.
Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is called
main memory. It is the central storage unit of the computer system. It is a large and fast memory used to store
data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit
chips holding the major share.
RAM: Random Access Memory
o DRAM: Dynamic RAM, is made of capacitors and transistors, and must be refreshed every
10~100 ms. It is slower and cheaper than SRAM.
o SRAM: Static RAM, has a six-transistor circuit in each cell and retains data until power is removed.
o NVRAM: Non-Volatile RAM, retains its data, even when turned off. Example: Flash memory.
ROM: Read Only Memory, is non-volatile and is more like a permanent storage for information. It also
stores the bootstrap loader program, to load and start the operating system when the computer is turned on.
PROM(Programmable ROM), EPROM(Erasable PROM) and EEPROM(Electrically Erasable PROM)
are some commonly used ROMs.
Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For example: Magnetic disks and tapes are
commonly used auxiliary devices. Other devices used as auxiliary memory are magnetic drums, magnetic bubble
memory and optical disks.
It is not directly accessible to the CPU, and is accessed using the Input/Output channels.
Cache Memory
The data or contents of the main memory that are used again and again by CPU, are stored in the cache memory
so that we can easily access that data in shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in cache
memory, then the CPU moves on to the main memory. It also transfers blocks of recent data into the cache and
keeps deleting old data in the cache to accommodate the new data.
Hit Ratio
The performance of cache memory is measured in terms of a quantity called hit ratio. When the CPU refers to
memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the cache, it is in
main memory and it counts as a miss.
The ratio of the number of hits to the total CPU references to memory is called hit ratio.
Hit Ratio = Hit/(Hit + Miss)
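For example, if out of 1000 memory references 950 are found in the cache, the hit ratio is 950/1000 = 0.95. A minimal Python sketch of this calculation (the counts are illustrative, not taken from these notes):

```python
# Hit ratio = hits / (hits + misses); the counts below are illustrative values.
hits = 950
misses = 50

hit_ratio = hits / (hits + misses)
print(f"Hit ratio: {hit_ratio:.2f}")  # 0.95
```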
Associative Memory
It is also known as content addressable memory (CAM). It is a memory chip in which each bit position can be
compared. The content is compared in each bit cell simultaneously, which allows very fast table lookup. Since the
entire chip can be compared, contents can be stored without regard to an addressing scheme. These chips have
less storage capacity than regular memory chips.
Semiconductor memory
Semiconductor memory is a device for storing digital information that is fabricated by using integrated circuit
technology. It is also known as integrated-circuit memory, large-scale integrated memory, memory chip,
semiconductor storage, or transistor memory.
Semiconductor memory technology is an essential element of today’s electronics. Normally based around
semiconductor technology, memory is used in any equipment that uses a processor of one form or another.
With the rapid growth in the requirement for semiconductor memories there have been a number of technologies
and types of memory that have emerged. Names such as ROM, RAM, EPROM, EEPROM, Flash memory,
DRAM, SRAM, SDRAM, and the very new MRAM can now be seen in the electronics literature. Each one has
its own advantages and area in which it may be used.
Electronic semiconductor memory technology can be split into two main types or categories, according to the
way in which the memory operates:
Random Access Memory (RAM) is the best known form of computer memory. The read and write (R/W)
memory of a computer is called RAM. The user can write information to it and read information from it.
RAM is a volatile memory: information written to it can be accessed only as long as power is on. As
soon as the power is off, it can no longer be accessed, so RAM is essentially empty at power-up. RAM
holds data and processing instructions temporarily until the CPU needs them. Scratchpad storage in memory space
is used for the temporary storage of data.
Read only memory (ROM) is an example of non-volatile memory. ROM is a class of storage medium used in
computers and other electronic devices. Read Only Memory (ROM), also known as firmware, is an integrated
circuit programmed with specific data when it is manufactured. The instructions for starting the computer are
housed on a Read Only Memory chip.
There is a large variety of types of ROM and RAM that are available. These arise from the variety of
applications and also the number of technologies available. This means that there is a large number of
abbreviations or acronyms and categories for memories ranging from Flash to MRAM, PROM to EEPROM, and
many more:
PROM: This stands for Programmable Read Only Memory. It is a semiconductor memory which can
only have data written to it once – the data written to it is permanent. These memories are bought in a blank
format and are programmed using a special PROM programmer. Typically a PROM consists of an
array of fusible links, some of which are "blown" during the programming process to provide the required data
pattern.
EPROM: This is an Erasable Programmable Read Only Memory. This form of semiconductor memory can
be programmed and then erased at a later time. Erasure is normally achieved by exposing the silicon to ultraviolet
light; to enable this to happen there is a circular window in the package of the EPROM that lets the light
reach the silicon of the chip. When the EPROM is in use, this window is normally covered by a label, especially
when the data may need to be preserved for an extended period.
The EPROM stores its data as a charge trapped on a floating gate; there is a charge storage cell for each bit, and
it can be read repeatedly as required. However, it is found that after many years the charge may leak away and the
data may be lost. Nevertheless, this type of semiconductor memory was widely used in applications where
a form of ROM was required but where the data needed to be changed periodically, as in a development
environment, or where quantities were low.
EEPROM: This is an Electrically Erasable Programmable Read Only Memory. Data can be written to it and
it can be erased using an electrical voltage. This is typically applied to an erase pin on the chip. Like other
types of PROM, EEPROM retains the contents of the memory even when the power is turned off. Also like other
types of ROM, EEPROM is not as fast as RAM.
EEPROM memory cells are made from floating-gate MOSFETS (known as FGMOS)
Flash memory: Flash memory may be considered as a development of EEPROM technology. Data can be
written to it and it can be erased, although only in blocks, but data can be read on an individual cell basis. To
erase and re-programme areas of the chip, programming voltages at levels that are available within electronic
equipment are used. It is also non-volatile, and this makes it particularly useful. As a result Flash memory is
widely used in many applications including memory cards for digital cameras, mobile phones, computer
memory sticks and many other applications.
Flash memory stores data in an array of memory cells. The memory cells are made from floating-gate
MOSFETs (known as FGMOS). These FGMOS transistors have the ability to store an
electrical charge for extended periods of time (2 to 10 years) even without being connected to a power supply.
DRAM: Dynamic RAM is a form of random access memory. DRAM uses a capacitor to store each bit of
data, and the level of charge on each capacitor determines whether that bit is a logical 1 or 0. However these
capacitors do not hold their charge indefinitely, and therefore the data needs to be refreshed periodically. As a
result of this dynamic refreshing it gains its name of being a dynamic RAM. DRAM is the form of
semiconductor memory that is often used in equipment including personal computers and workstations where it
forms the main RAM for the computer.
Disadvantage: the capacitor charge needs to be refreshed approximately every two milliseconds.
SRAM: Static Random Access Memory. This form of semiconductor memory gains its name from the fact
that, unlike DRAM, the data does not need to be refreshed dynamically. It is able to support faster read and
write times than DRAM (typically 10 ns against 60 ns for DRAM), and in addition its cycle time is much
shorter because it does not need to pause between accesses. However it consumes more power, is less dense and
more expensive than DRAM. As a result of this it is normally used for caches, while DRAM is used as the main
semiconductor memory technology.
SDRAM: Synchronous DRAM. This form of semiconductor memory can run at faster speeds than
conventional DRAM. It is synchronised to the clock of the processor and is capable of keeping two sets of
memory addresses open simultaneously. By transferring data alternately from one set of addresses, and then
the other, SDRAM cuts down on the delays associated with non-synchronous RAM, which must close one
address bank before opening the next.
MRAM: This is Magneto-resistive RAM, or Magnetic RAM. It is a non-volatile RAM memory technology
that uses magnetic states to store data instead of electric charge. Unlike technologies including DRAM,
which require a constant flow of electricity to maintain the integrity of the data, MRAM retains data even when
the power is removed. An additional advantage is that it only requires low power for active operation. As a result
this technology could become a major player in the electronics industry now that production processes have been
developed to enable it to be produced.
SRAM cell:
A typical SRAM cell is made up of six MOSFETs. Each bit in an SRAM is stored on four transistors
(M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states which are used
to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and
write operations. In addition to such six-transistor (6T) SRAM, other kinds of SRAM chips use 4, 8, 10 (4T, 8T,
10T SRAM), or more transistors per bit.
Four-transistor SRAM is quite common in stand-alone SRAM devices (as opposed to SRAM used for
CPU caches), implemented in special processes with an extra layer of polysilicon, allowing for very high-
resistance pull-up resistors. The principal drawback of using 4T SRAM is increased static power due to the
constant current flow through one of the pull-down transistors.
Four transistor SRAM provides advantages in density at the cost of manufacturing complexity. The resistors
must have small dimensions and large values.
Additional transistors are sometimes used to implement more than one (read and/or write) port, which may be
useful in certain types of video memory and register files implemented with multi-ported SRAM circuitry.
Generally, the fewer transistors needed per cell, the smaller each cell can be. Since the cost of processing a
silicon wafer is relatively fixed, using smaller cells and so packing more bits on one wafer reduces the cost per
bit of memory.
Memory cells that use fewer than four transistors are possible – but, such 3T or 1T cells are DRAM, not SRAM
(even the so-called 1T-SRAM).
Access to the cell is enabled by the word line (WL in the figure), which controls the two access transistors M5
and M6 which, in turn, control whether the cell should be connected to the bit lines: BL and its complement BL̄.
The bit lines are used to transfer data for both read and write operations. Although it is not strictly necessary
to have two bit lines, both the signal and its inverse are typically provided in order to improve noise margins.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This
improves SRAM bandwidth compared to DRAMs – in a DRAM, the bit line is connected to storage capacitors
and charge sharing causes the bit line to swing upwards or downwards.
DRAM cell:
DRAM stands for dynamic random access memory. Dynamic refers to the need to periodically refresh DRAM
cells so that they can continue to retain the stored bit. Because of the small footprint of a DRAM cell, DRAM
can be produced in large capacities. By packaging DRAM cells judiciously, DRAM memory can sustain large
data rates. For these reasons, DRAM is used to implement the bulk of main memory.
A DRAM cell consists of a capacitor connected by a pass transistor to the bit line (or digit line or column line).
The digit line (or column line) is connected to a multitude of cells arranged in a column. The word line (or row
line) is also connected to a multitude of cells, but arranged in a row. (See Figure 2.) If the word line is
asserted, then the pass transistor T1 in Figure 1 is opened and the capacitor C1 is connected to the bit line.
The DRAM memory cell stores binary information in the form of a stored charge on the capacitor. The
capacitor's common node is biased approximately at VCC/2. The cell therefore contains a charge of Q = ±VCC/2 •
Ccell, if the capacitance of the capacitor is Ccell. The charge is Q = +VCC/2 • Ccell if the cell stores a 1, otherwise the
charge is Q = -VCC/2 • Ccell. Various leak currents will slowly remove the charge, making a refresh operation
necessary.
If we open the pass transistor by asserting the word line, then the charge will dissipate over the digit line,
leading to a voltage change. Assuming the digit line is precharged to VCC/2, the voltage change is given by
Vsignal = (VCC/2) · Ccell / (Ccell + Cline)
where Vsignal is the observed voltage change in the digit line, Ccell the capacitance of the DRAM cell capacitor,
and Cline the capacitance of the digit line.
For example, if VCC is 3.3 V, then VCC/2 is 1.65 V. Typical values for the capacitances are Cline = 300 fF and
Ccell = 50 fF. This leads to a signal strength of about 235 mV. When a DRAM cell is accessed, it shares its charge
with the digit line.
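As a quick check of the charge-sharing formula above, the following Python sketch plugs in the example values:

```python
# Charge sharing between a DRAM cell and the digit line during a read.
# Values are the example figures quoted above.
VCC = 3.3          # supply voltage in volts
C_cell = 50e-15    # cell capacitance: 50 fF
C_line = 300e-15   # digit-line capacitance: 300 fF

# The digit line is precharged to VCC/2; the cell shares its charge with it.
V_signal = (VCC / 2) * C_cell / (C_cell + C_line)
print(f"Signal swing: {V_signal * 1000:.0f} mV")  # ~236 mV (the notes round this to 235 mV)
```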
Cache Memory
The data or contents of the main memory that are used frequently by the CPU are stored in the cache memory so
that the processor can easily access that data in a shorter time. Whenever the CPU needs to access memory, it
first checks the cache memory. If the data is not found in cache memory, then the CPU moves on to the main memory.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache memory can be
represented as:
The cache is the fastest component in the memory hierarchy and approaches the speed of CPU components.
When the CPU needs to access memory, the cache is examined. If the word is found in the cache, it is
read from the fast memory.
If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the
word.
A block of words containing the one just accessed is then transferred from main memory to cache memory. The
block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.
The performance of the cache memory is frequently measured in terms of a quantity called hit ratio.
When the CPU refers to memory and finds the word in cache, it is said to produce a hit.
If the word is not found in the cache, it is in main memory and it counts as a miss.
The ratio of the number of hits divided by the total CPU references to memory (hits plus misses) is the hit
ratio.
There are various independent caches in a CPU, which store instructions and data.
Levels of memory:
Level 1 or Register –
These are memory locations built directly into the CPU, holding the data the processor is operating on at that
moment. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory –
It is a very fast memory with a short access time, in which data is temporarily stored for faster access.
Level 3 or Main Memory –
It is the memory on which the computer currently works. It is small in size compared to secondary memory, and
once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory –
It is external memory which is not as fast as main memory but data stays permanently in this memory.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in
the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and data is read
from cache
If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache
miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled
from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Cache performance can be improved by using a larger cache block size and higher associativity, and by reducing
the miss rate, the miss penalty, and the time to hit in the cache.
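These factors combine in the standard average-access-time relation (a textbook formula, not stated explicitly in these notes): with hit ratio h, cache access time tc and main memory access time tm, the average time is h·tc + (1 − h)·(tc + tm), assuming the cache is always checked first. A sketch with illustrative timings:

```python
# Average memory access time (standard textbook formula; timings are illustrative).
hit_ratio = 0.95   # fraction of references found in the cache
t_cache = 10       # cache access time in ns (assumed)
t_main = 100       # main memory access time in ns (assumed)

# The cache is checked first, so a miss pays for both accesses.
avg_time = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
print(f"Average access time: {avg_time:.1f} ns")  # 15.0 ns
```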
1. Usually, the cache memory can store a reasonable number of blocks at any given time, but this
number is small compared to the total number of blocks in the main memory.
2. The correspondence between the main memory blocks and those in the cache is specified by a
mapping function.
Types of Cache –
o Primary Cache –
A primary cache is always located on the processor chip. This cache is small and its access time is
comparable to that of processor registers.
o Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the memory. It is referred to
as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.
Locality of reference –
Since the size of cache memory is small compared to main memory, deciding which part of main
memory should be given priority and loaded into the cache is based on locality of reference.
Cache Mapping-
Cache mapping defines how a block from the main memory is mapped to the cache memory in case of a
cache miss.
Cache mapping is a technique by which the contents of main memory are brought into the cache memory.
The following diagram illustrates the mapping process-
Main memory is divided into equal size partitions called as blocks or frames.
Cache memory is divided into partitions having same size as that of blocks called as lines.
During cache mapping, a block of main memory is simply copied to the cache; the block is not
actually removed from the main memory. Cache mapping is performed using one of the following techniques-
1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping
1. Direct Mapping-
In direct mapping,
A particular block of main memory can map only to a particular line of the cache.
The line number of the cache to which a particular block can map is given by-
Cache line number = (Main memory block address) mod (Number of lines in the cache)
In direct mapping,
There is no need of any replacement algorithm.
This is because a main memory block can map only to a particular line of the cache.
Thus, the new incoming block will always replace the existing block (if any) in that particular line.
3. K-way Set Associative Mapping-
Example-
Here,
k = 2 suggests that each set contains two cache lines.
Since cache contains 6 lines, so number of sets in the cache = 6 / 2 = 3 sets.
Block ‘j’ of main memory can map to set number (j mod 3) only of the cache.
Within that set, block ‘j’ can map to any cache line that is freely available at that moment.
If all the cache lines are occupied, then one of the existing blocks will have to be replaced.
Set associative mapping is a combination of direct mapping and fully associative mapping.
It uses fully associative mapping within each set.
Thus, set associative mapping requires a replacement algorithm.
If k = Total number of lines in the cache, then k-way set associative mapping becomes fully associative
mapping.
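The mapping rules above can be summarised in a few lines of Python; the cache geometry (6 lines, k = 2) matches the example, and the block numbers are arbitrary illustrations:

```python
# Sketch of the cache-mapping rules described above (geometry from the example).
CACHE_LINES = 6            # total lines in the cache
K = 2                      # lines per set (k-way set associative)
SETS = CACHE_LINES // K    # 3 sets

def direct_mapped_line(block: int) -> int:
    # Direct mapping: block j can occupy only line (j mod number of lines).
    return block % CACHE_LINES

def set_associative_set(block: int) -> int:
    # k-way set associative: block j maps to set (j mod number of sets)
    # and may occupy any free line within that set.
    return block % SETS

for j in (0, 7, 13):
    print(f"block {j} -> line {direct_mapped_line(j)}, set {set_associative_set(j)}")
```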
Virtual Memory
Virtual Memory is a storage scheme that provides the user with the illusion of having a very large main memory.
This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, under the illusion
that enough memory is available to load the process.
Instead of loading one big process in the main memory, the Operating System loads the different parts of more
than one process in the main memory.
By doing this, the degree of multiprogramming will be increased and therefore, the CPU utilization will also be
increased.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand paging, the pages of a process
which are least used get stored in the secondary memory.
A page is copied to the main memory when it is demanded, i.e. when a page fault occurs. There are various page
replacement algorithms which are used to determine the pages that will be replaced. We will discuss each one
of them later in detail.
Drawbacks of Paging
1. The size of the page table can be very big and therefore it wastes main memory.
2. The CPU will take more time to read a single word from the main memory.
These drawbacks are reduced by caching recent translations in a translation lookaside buffer (TLB). In translation
lookaside buffers there are tags and keys, with the help of which the mapping is done.
A TLB hit is the condition where the desired entry is found in the translation lookaside buffer. If this happens,
then the CPU simply accesses the actual location in the main memory.
However, if the entry is not found in the TLB (a TLB miss), then the CPU has to access the page table in the main
memory and then access the actual frame in the main memory.
Therefore, in the case of a TLB hit, the effective access time will be less than in the case of a TLB miss.
If the probability of a TLB hit is P% (the TLB hit rate), then the probability of a TLB miss (the TLB miss rate)
will be (1-P)%.
Therefore, the effective access time can be defined as:
1. EAT = p(t + m) + (1 - p)(t + k·m + m)
where p → TLB hit rate, t → time taken to access the TLB, m → time taken to access main memory, and k = 1 if
single-level paging has been implemented.
By the formula, we come to know that
1. Effective access time will be decreased if the TLB hit rate is increased.
2. Effective access time will be increased in the case of multilevel paging.
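A small Python sketch evaluating the EAT formula above for illustrative values of p, t and m (with k = 1, i.e. single-level paging):

```python
# Effective access time with a TLB: EAT = p*(t + m) + (1 - p)*(t + k*m + m)
p = 0.90   # TLB hit rate (illustrative)
t = 10     # TLB access time in ns (illustrative)
m = 100    # main memory access time in ns (illustrative)
k = 1      # single-level paging

eat = p * (t + m) + (1 - p) * (t + k * m + m)
print(f"EAT: {eat:.0f} ns")  # 120 ns; a higher hit rate p pulls this toward t + m
```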
Auxiliary memory:
Auxiliary memory (also referred to as secondary storage) is the lowest-cost, highest-capacity, and slowest-access
storage in a computer system, and it is non-volatile. It is where programs and data are kept for long-term
storage or when not in immediate use.
Such memories tend to occur in two types: sequential access (data must be accessed in a linear sequence) and
direct access (data may be accessed in any sequence). The most common sequential-access device is magnetic tape,
whereas direct-access devices include rotating drums, disks, CD-ROMs, and DVD-ROMs. Auxiliary memory is used as
permanent storage of data in mainframes and supercomputers.
Auxiliary memory may also be referred to as auxiliary storage, secondary storage, secondary memory, external
storage or external memory. Auxiliary memory is not directly accessible by the CPU; instead, it stores
noncritical system data like large data files, documents, programs and other backup information that is supplied
to primary memory over a high-bandwidth channel whenever necessary.
Auxiliary memory holds data for future use and retains information even when the power fails.
As we know, main memory stores data in a temporary manner: all data is lost when the power is switched off.
Secondary storage devices are therefore used for storing data in a permanent manner, meaning the data remains
stored whether the power is switched on or off. For storing data in a permanent manner, magnetic storage devices
are used. Secondary storage devices also have some advantages:
1) Non-Volatile Storage Devices: These devices are non-volatile in nature, meaning they never lose their data
when the power is switched off, so data stored on them is never lost at power-off.
2) Mass Storage: The capacity of these devices is very high, meaning huge amounts of data can be stored on
secondary storage devices, on the order of gigabytes and terabytes.
3) Cost Effective: The cost of secondary storage devices is much lower per bit than that of main memory, so they
are more cost effective; they are also compact, not easily damaged, and the data on them is not easily lost.
4) Re-usability: While memory in general holds data either temporarily or permanently, secondary storage devices
are always reusable: they can be erased and rewritten at any time, so contents can be added to or removed from
these disks as required.
There are many types of storage devices, based on either sequential or direct access. If the data stored on a
device can be read only starting from the first location onwards, the device is said to provide sequential access;
if the data can be read from any location, the device provides direct access. Storage devices are accordingly
classified as SASD (sequential access) or DASD (direct access) devices.
1) Magnetic Tapes: The magnetic tape is a type of secondary storage device used for taking backups of data. The
tape is a ribbon coated on a single side with a magnetic material, and the data recorded on it is accessed in
sequential form by a head that reads what is recorded on the tape. When reading information from the tape, we can
also move backwards to read previous information. To insert a tape into the system, a tape drive is required,
which holds the tape and is responsible for reading its contents.
A tape drive can store a huge amount of data, but its main limitation is that data cannot be accessed directly:
to reach the 100th record, all 99 previous records must be moved past first. Tapes are also easily damaged due to
human error.
2) Magnetic Disks: Also called the hard disk, this is made from thin metal platters coated on both sides with a
magnetic material. A single hard disk contains many plates or platters, all made from magnetic material, and the
disks rotate at about 700 to 3600 rpm (rotations per minute). The hard disk also contains heads used for both
reading and writing data.
Each platter is divided into tracks and sectors, and the collection of tracks at the same position on consecutive
platters makes a cylinder.
The disk is first divided into a number of tracks, the tracks are further divided into sectors, and the
corresponding tracks together make a cylinder. All data is stored on the disk in sectors, and each sector belongs
to a track. Data is accessed from the disk using the heads; each head has an arm used for reading the data from a
particular track and sector. While the disk rotates at very high speed, the head moves to the particular track and
reads the data from that location.
To locate particular data on the disk, the head moves across the disk very fast. The data a user wants to access
must have an address, consisting of the cylinder number, track number and sector number, which the arm of the head
uses for positioning. With the help of these read and write heads we can both read data from the disk and store
data onto the disk.
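The address just described (cylinder number, track/head number, sector number) can be flattened into a single block number using the conventional CHS-to-LBA formula; the geometry constants below are illustrative assumptions, not values from these notes:

```python
# Convert a (cylinder, head, sector) disk address to a logical block number.
# Conventional CHS-to-LBA formula; the geometry below is an assumed example.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    # Sectors are traditionally numbered from 1, hence (sector - 1).
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))  # 0: the very first sector on the disk
print(chs_to_lba(2, 3, 5))  # 2209
```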
Some time considerations apply when accessing or storing data on the hard disk:
1) Seek Time: The total time taken to move the head onto the desired track is known as the seek time. It is
measured in milliseconds.
2) Latency Time: The time required for the desired sector of the track to rotate under the read/write head. On
average this is half a rotation, so it is also called the average rotational latency.
3) Data Transfer Time: The total time required for reading or writing the data from or to the disk is known as
the data transfer time.
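Adding the three components gives the total access time: access time = seek time + rotational latency + data transfer time. A sketch with illustrative numbers (all drive parameters are assumptions):

```python
# Disk access time = seek time + average rotational latency + transfer time.
# All drive parameters below are illustrative assumptions.
seek_ms = 9.0                          # average seek time
rpm = 7200                             # spindle speed
avg_latency_ms = (60_000 / rpm) / 2    # half a rotation, in milliseconds
transfer_ms = 0.1                      # time to transfer the requested data

total_ms = seek_ms + avg_latency_ms + transfer_ms
print(f"Average access time: {total_ms:.2f} ms")  # 13.27 ms
```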
Speaking of capacities, the storage capacity of magnetic tapes is typically measured in megabytes, while the
capacity of a hard disk is measured in gigabytes. The magnetic tape is a sequential access device, whereas the
hard disk is a direct access device: data can be read from any location on the disk using the read/write heads.
Hard disks are costlier than simple magnetic tapes, but the capacity of the hard disk is very high compared to
tapes.
3) Floppy Diskette: The floppy disk is a kind of storage device that can be carried around. It is also a
secondary storage device used for storing data in a permanent manner. The floppy is made up of rigid Mylar
plastic and contains a magnetic black disk inside the plastic cover.
The floppy disk also stores data in the form of tracks and sectors and supports both reading and writing of data.
The floppy disk is a reusable disk: it provides the facility to read and write data as and when necessary, many
times over.
The main advantage of the floppy disk is that data can be stored on it many times, but its main limitations are
its small capacity and its lack of reliability: the floppy disk is a very sensitive thing, and moving the head of
the drive over it again and again damages it, so data stored on the disk may not last for a long time. On the
other side, the cost of the floppy disk is also high in comparison with other storage media.
The other main advantage of the floppy disk is that it can be used for moving data from one computer to another:
data can be stored on the floppy disk, the disk can easily be removed from the system, and the disk can then be
put into another system to take the data across. However, a system cannot be started or run from a floppy disk
alone, so the floppy disk is used only to transfer files from one system to another. Two types of floppy disk
were available: the 3.5-inch and the 5.25-inch. To insert a floppy disk into the system, a floppy disk drive must
be used.
Read and write heads are used for reading the data from the disk, and the head touches the surface of the floppy
disk; this direct contact leads to scratches on the disk and quickly damages it. A drive can take only one disk,
meaning only one floppy disk can be inserted into the floppy drive at a time. The capacity of the floppy disk is
1.44 MB, so floppy disks should be used as rarely as possible.
The floppy disk contains a notch which specifies whether the data can be written or only read: to protect the
data, the notch of the floppy disk can be set to read-only.
4) Optical Disks: Optical disks are also called CD-ROMs, meaning Compact Disk Read Only Memory. They too are
used for storing data, and they are called optical disks because the data is read optically from a reflective
golden or aluminium layer. The data is stored on the disk in the form of tracks and sectors: the whole disk is
divided into a number of tracks, each track is divided into a number of sectors, and the data is stored in the
sectors. The sectors on the first tracks are larger, while later tracks contain smaller sectors, so the division
into tracks and sectors varies across the disk.
A CD-ROM contains data that is truly read-only: once data has been written onto the CD, the contents of the disk
cannot be changed, but the data stored on the disk can be read by the user at any time. The CD-ROM provides a
large capacity compared with floppy disks: a CD-ROM can store from 650 MB to 800 MB of data.
There are many disks that cannot be erased once written, so they are also called WORM disks, meaning Write Once,
Read Many: a user can write the data only one time and can then use the disk many times, but cannot edit or
change the contents after they are written. These disks are not reusable; such optical disks are also known as
CD-R, effectively read-only disks, because the data written to them can never be erased.
Nowadays there are also CDs available that are called CD-RW, or rewritable disks. As the name suggests, these
disks provide the user with the facility to both read and rewrite the contents of the disk as necessary, so
CD-RWs are now most popular: a user can at any time remove the contents of the disk and store new contents on it.
The CD-R and CD-RW have the same capacity and both can be used for transferring files from one system to another;
the main difference is the cost. The CD-RW costs somewhat more than a simple CD or a CD-R.
Winchester Disk: Another term for hard disk drive. The term Winchester comes from an early type of disk drive
developed by IBM that had 30 MB of fixed storage and 30 MB of removable storage; its inventors called it a
Winchester in honor of the 30/30 rifle. Although modern disk drives are faster and hold more data, the basic
technology is the same, so Winchester has become synonymous with hard disk.
Magnetic Drum: A direct-access, or random-access, storage device. A magnetic drum, also referred to as drum,
is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored.
Magnetic drums were once used as a primary storage device but have since been implemented as auxiliary
storage devices.
The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming
adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates
at a speed of up to 3,000 rpm, the device’s read/write heads deposit magnetized spots on the drum during the
write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or
disk drive.
Unlike some disk packs, the magnetic drum cannot be physically removed. The drum is permanently mounted in
the device. Magnetic drums are able to retrieve data at a quicker rate than tape or disk devices but are not able to
store as much data as either of them.
Direct Memory Access (DMA) transfers blocks of data between the memory and peripheral devices of the
system without the participation of the processor. The unit that controls the activity of accessing memory
directly is called a DMA controller.
The processor relinquishes the system bus for a few clock cycles, so the DMA controller can accomplish the
task of data transfer via the system bus. In this section, we will study DMA, the DMA controller, its
registers, and its advantages and disadvantages in brief.
Direct memory access (DMA) is a mode of data transfer between the memory and I/O devices that happens
without the involvement of the processor. We have two other methods of data transfer, programmed I/O and
interrupt-driven I/O. Let's revise each and get acquainted with their drawbacks.
In programmed I/O, the processor keeps scanning whether any device is ready for data transfer. If an I/O
device is ready, the processor fully dedicates itself to transferring the data between the I/O device and memory.
It transfers data at a high rate, but it can't get involved in any other activity during the transfer. This is the
major drawback of programmed I/O.
In interrupt-driven I/O, whenever a device is ready for data transfer, it raises an interrupt to the processor.
The processor completes executing its ongoing instruction and saves its current state. It then switches to the
data transfer, which causes a delay. Here, the processor doesn't keep scanning for peripherals ready for data
transfer, but it is still fully involved in the data transfer process. So, it is also not an effective way of data
transfer.
The above two modes of data transfer are not useful for transferring a large block of data. But, the DMA
controller completes this task at a faster rate and is also effective for transfer of large data block.
The DMA controller transfers data in one of three modes:
1. Burst Mode: Here, once the DMA controller gains control of the system bus, it releases the
system bus only after completion of the data transfer. Until then, the CPU has to wait for the system bus.
2. Cycle Stealing Mode: In this mode, the DMA controller forces the CPU to stop its operation and
relinquish control over the bus for a short time. After the transfer of every
byte, the DMA controller releases the bus and then again requests the system bus. In this way, the
DMA controller steals a clock cycle for transferring every byte.
3. Transparent Mode: Here, the DMA controller takes charge of the system bus only when the processor does
not require it.
DMA controller is a hardware unit that allows I/O devices to access memory directly without the participation
of the processor. Here, we will discuss the working of the DMA controller. Below we have the diagram of DMA
controller that explains its working:
1. Whenever an I/O device wants to transfer data to or from memory, it sends a DMA request (DRQ)
to the DMA controller. The DMA controller accepts this DRQ and asks the CPU to hold for a few clock
cycles by sending it the Hold request (HLD).
2. The CPU receives the Hold request (HLD) from the DMA controller, relinquishes the bus, and sends the Hold
acknowledgement (HLDA) to the DMA controller.
3. After receiving the Hold acknowledgement (HLDA), the DMA controller acknowledges the I/O device (DACK)
that the data transfer can be performed; the DMA controller then takes charge of the system bus and
transfers the data to or from memory.
4. When the data transfer is accomplished, the DMA controller raises an interrupt to let the processor know that
the task of data transfer is finished and that the processor can take control over the bus again and resume
processing where it left off.
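The four-step handshake above can be sketched as a tiny simulation; every name in it is made up for illustration and does not correspond to a real device API:

```python
# Illustrative simulation of the DMA handshake described in steps 1-4 above.
# All names are invented for the sketch; this is not a real device interface.

def dma_transfer(memory: list, data: list, start_addr: int) -> None:
    print("I/O device: DRQ  -> DMA controller")   # step 1: device requests DMA
    print("DMA ctrl:   HLD  -> CPU")              # step 1: controller asks CPU to hold
    print("CPU:        HLDA -> DMA controller")   # step 2: CPU relinquishes the bus
    print("DMA ctrl:   DACK -> I/O device")       # step 3: transfer may begin
    for i, word in enumerate(data):               # controller moves the block
        memory[start_addr + i] = word
    print("DMA ctrl:   interrupt -> CPU")         # step 4: transfer finished

memory = [0] * 8
dma_transfer(memory, data=[7, 8, 9], start_addr=4)
print(memory)  # [0, 0, 0, 0, 7, 8, 9, 0]
```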
Now the DMA controller can be a separate unit that is shared by various I/O devices, or it can also be a part of
the I/O device interface.
After exploring the working of DMA controller, let us discuss the block diagram of the DMA controller. Below
we have a block diagram of DMA controller.
Whenever a processor is requested to read or write a block of data, i.e. transfer a block of data, it instructs the
DMA controller by sending the following information.
1. The first information is whether the data has to be read from memory or written to
memory. It passes this information via the read and write control lines between the processor and the
DMA controller's control logic unit.
2. The processor also provides the starting address of the data block in the memory, from where the
data block has to be read or to where it has to be written. The DMA
controller stores this in its address register, also called the starting address register.
3. The processor also sends the word count, i.e. how many words are to be read or written. It stores this
information in the data count or the word count register.
4. The most important is the address of the I/O device that wants to read or write the data. This information is
stored in the data register.
Advantages:
1. Transferring the data without the involvement of the processor speeds up the read-write task.
2. DMA reduces the clock cycles required to read or write a block of data.
3. Implementing DMA also reduces the overhead on the processor.
Thus the DMA controller is a convenient mode of data transfer. It is preferred over the programmed I/O and
interrupt-driven I/O modes of data transfer.