
PANIMALAR INSTITUTE OF TECHNOLOGY

(JAISAKTHI EDUCATIONAL TRUST)


CHENNAI 600 123

DEPARTMENT OF CSE

II YEAR – III SEMESTER

CS6303 - COMPUTER ARCHITECTURE

QUESTION BANK WITH ANSWERS


UNIT-I

BASIC STRUCTURE OF COMPUTERS

PART-A

1. What is cache memory?

The small and fast RAM units are called caches. When the execution of an instruction
calls for data located in the main memory, the data are fetched and a copy is placed in the
cache. Later, if the same data is required, it is read directly from the cache.

2. What is the function of ALU?

Most of the computer operations (arithmetic and logic) are performed in the ALU. The
data required for the operation are brought in by the processor, and the operation is
performed by the ALU.

3. What is the function of CU?

The control unit acts as the nerve center that coordinates all the computer operations. It
issues the timing signals that govern the data transfers.

4. What are basic operations of a computer?

The basic operations are READ and WRITE.

5. What are the registers generally contained in the processor? (Nov/Dec 2012)

 MAR-Memory Address Register


 MDR-Memory Data Register
 IR-Instruction Register
 R0-Rn-General purpose Registers
 PC-Program Counter

6. Distinguish between auto increment and auto decrement addressing mode. (Apr/May 2010)

It is generally used to increment or decrement an array pointer. For example, while
executing a loop the processor may need to increment or decrement the pointer to the adjacent
address at each iteration. So it can be used to increment or decrement file pointers, or it can be used
to implement a stack in which the top can be incremented (TOP++) or decremented (TOP--).

7. Compare RISC with CISC Architecture. (Apr/May 2010) (Nov/Dec 2013)

RISC
• Simple instructions, few in number
• Fixed length instructions
• Complexity in compiler
• Only LOAD/STORE instructions access memory
• Few addressing modes

CISC
• Many complex instructions
• Variable length instructions
• Complexity in microcode
• Many instructions can access memory
• Many addressing modes

8. What are the steps in executing a program?

1. Fetch
2. Decode
3. Execute
4. Store

9. Define interrupt and ISR?

An interrupt is a request from an I/O device for service by the processor. The processor
provides the requested service by executing the interrupt service routine.

10. Define Bus. ( NOV/DEC 2006)


A group of lines that serves as a connecting path for several devices is called a bus.

11. What is the use of buffer register?

The buffer register is used to avoid speed mismatch between the I/O device and the
processor.

12. Compare single bus structure and multiple bus structure?

A system that contains only one bus (i.e., only one transfer can take place at a time) is called a
single bus structure. A system is called a multiple bus structure if it contains multiple buses.

13. What is System Software? Give an example?

It is a collection of programs that are executed as needed to perform functions such as

o Receiving and interpreting user commands

o Entering and editing application programs and storing them as files in secondary
storage devices. Ex: Assembler, Linker, Compiler etc

14.What is Application Software? Give an example?

Application programs are usually written in a high-level programming language, in which


the programmer specifies mathematical or text-processing operations. These operations are
described in a format that is independent of the particular computer used to execute the program.
Ex: C, C++, JAVA

15. What is a compiler?

A system software program called a compiler translates the high-level language program
into a suitable machine language program containing instructions such as the Add and Load
instructions.

16. What is text editor?

It is used for entering and editing application programs. The user of this program
interactively executes commands that allow statements of a source program entered at a keyboard to
be accumulated in a file.

17. Discuss about OS as system software?

OS is a large program, or actually a collection of routines, that is used to control the sharing
of and interaction among various computer units as they execute application programs. The OS
routines perform the tasks required to assign computer resources to individual application
programs.

18. What is an Opcode? (Apr/May 2011)

Opcode is the portion of a machine language instruction that specifies the operation to be
performed. Their specification and format are laid out in the instruction set architecture of the
processor.

19. What is multiprogramming or multitasking?

The operating system manages the concurrent execution of several application programs to
make the best possible uses of computer resources. This pattern of concurrent execution is called
multiprogramming or multitasking.

20. What is elapsed time of computer system?

The total time needed to execute the whole program is called elapsed time. It is affected by the
speed of the processor, the disk and the printer.

21. What is processor time of a program?

The periods during which the processor is active are called the processor time of a program. It
depends on the hardware involved in the execution of individual machine instructions.

22. Define clock rate?

The clock rate is given by R = 1/P, where P is the length of one clock cycle.


23. Write down the basic performance equation?

T = (N x S) / R
Where,
T = processor time
N = number of machine instructions executed
S = average number of basic steps (clock cycles) per instruction
R = clock rate
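
A minimal C sketch of this equation (the values of N, S and R below are made-up illustrative
numbers, not taken from the question bank):

#include <stdio.h>

int main(void) {
    double N = 100e6;   /* number of machine instructions executed */
    double S = 4;       /* average number of basic steps (clock cycles) per instruction */
    double R = 500e6;   /* clock rate in cycles per second (so P = 1/R = 2 ns) */

    double T = (N * S) / R;                      /* basic performance equation */
    printf("Processor time T = %.3f s\n", T);    /* prints 0.800 s */
    return 0;
}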

24. What is pipelining?

The overlapping of execution of successive instructions is called pipelining.

25. What is byte addressable memory?

The assignment of successive addresses to successive byte locations in the memory is called
byte addressable memory.

26. What is big endian and little endian format? (Nov/Dec 2014)

The name big endian is used when the lower byte addresses are used for the more significant
bytes (the leftmost bytes) of the word. The name little endian is used when the lower byte addresses
are used for the less significant bytes (the rightmost bytes) of the word.

27. What is a branch instruction?

Branch instruction is a type of instruction which loads a new value into the program counter.

28. What is branch target?

As a result of branch instructions , the processor fetches and executes the instruction at a
new address called branch target, instead of the instruction at the location that follows the branch
instruction in sequential address order.

29. What are condition code flags?

The processor keeps track of information about the results of various operations for use by
subsequent conditional branch instructions. This is accomplished by recording the required
information in individual bits, often called condition code flags.

30. Define addressing mode. (APRIL/MAY 2009) (May/June 2013)

The different ways in which the location of an operand is specified in an instruction are
referred to as addressing modes.

31. What is Relative addressing mode? When it is used?(May/ June 2012) (Nov/Dec 2014)

Relative addressing mode is used by branch instructions (e.g. BEQ, BNE, etc.), which
contain a signed 8-bit relative offset (e.g. -128 to +127) that is added to the program counter if the
condition is true. As the program counter itself is incremented by two during instruction execution,
the effective address range for the target instruction is within -126 to +129 bytes of the branch.

32. Define various addressing modes.

The various addressing modes are

1. Absolute addressing mode
2. Register addressing mode
3. Indirect addressing mode
4. Index addressing mode
5. Immediate addressing mode
6. Relative addressing mode
7. Autoincrement addressing mode
8. Autodecrement addressing mode

33. What is a pointer?

The register or memory location that contains the address of an operand is called a pointer.

34. What is index register?

In index mode the effective address of the operand is generated by adding a constant value
to the contents of a register. The register used may be either a special register or may be any one of
a set of general purpose registers in the processor. This register is referred to as an index register.

35. What is assembly language?

A complete set of symbolic names and rules for their use constitutes a programming
language, generally referred to as an assembly language.

36. What is assembler directive?

SUM EQU 200

Assembler directives are not instructions that will be executed. The statement above simply
informs the assembler that the name SUM should be replaced by the value 200 wherever it appears
in the program; such statements are called assembler directives.

37. What is loader ?

Loader is a system software which contains a set of utility programs. It will load the object
program to the memory.

38. Define device interface.

The buffer registers DATAIN and DATAOUT and the status flags SIN and SOUT are part
of circuitry commonly known as a device interface.
39. What are the basic functional units of a computer?

Input, memory, arithmetic and logic unit, output and control units are the basic functional
units of a computer

40. Define Response time and Throughput.

Response time is the time between the start and the completion of an event; it is also referred
to as execution time or latency. Throughput is the total amount of work done in a given amount of
time.

41. Suppose that we are considering an enhancement to the processor of a server system
used for web serving. The new CPU is 10 times faster on computation in the web serving
application than the original processor. Assuming that the original CPU is busy with
computation 40% of the time and is waiting for I/O 60% of the time. What is the overall
speedup gained by incorporating the enhancement?

Fraction_enhanced = 0.4
Speedup_enhanced = 10
Speedup_overall = 1 / (0.6 + 0.4/10) = 1/0.64 ≈ 1.56

42. Explain the different types of locality.

Temporal locality, states that recently accessed items are likely to be accessed in the near
future. Spatial locality, says that items whose addresses are near one another tend to be referenced
close together in time.

43. How will you compute the SPEC rating?(May/June 2012)

SPEC stands for System Performance Evaluation Corporation.

SPEC rating = (Running time on the reference computer) / (Running time on the computer under test)

44. State Amdahl’s Law? (Nov / Dec 2014)

Amdahl's law states that in parallelization, if P is the proportion of a system or program that
can be made parallel, and 1 - P is the proportion that remains serial, then the maximum
speedup that can be achieved using N processors is 1 / ((1 - P) + (P / N)).

If N tends to infinity, then the maximum speedup tends to 1 / (1 - P).

Speedup is limited by the total time needed for the sequential (serial) part of the program.
For 10 hours of computing, if we can parallelize 9 hours of computing and 1 hour cannot be
parallelized, then our maximum speedup is limited to 10x.
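
A small C sketch of Amdahl's law as stated above; the values of P and N are the illustrative
ones from the 9-of-10-hours example and from question 41:

#include <stdio.h>

/* speedup = 1 / ((1 - P) + P / N) */
static double amdahl_speedup(double P, double N) {
    return 1.0 / ((1.0 - P) + P / N);
}

int main(void) {
    printf("P = 0.9, N = 10       : %.2fx\n", amdahl_speedup(0.9, 10));          /* about 5.26x   */
    printf("P = 0.9, N -> infinity: %.2fx\n", 1.0 / (1.0 - 0.9));                /* the 10x limit */
    printf("Question 41 (P = 0.4, N = 10): %.2fx\n", amdahl_speedup(0.4, 10));   /* about 1.56x   */
    return 0;
}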
45. List the Eight Great Ideas in Computer Architecture? (April/May 2015)

 Design for Moore's Law


 Use Abstraction to Simplify Design
 Make the Common Case Fast
 Performance via Parallelism
 Performance via Pipelining
 Performance via Prediction
 Hierarchy of Memories
 Dependability via Redundancy

PART B

1. Explain about functional units of computer ? Discuss each with neat


diagram? (APRIL/MAY 2009)
2. Write notes on Instruction formats. (NOV/DEC 2007) ( NOV/DEC 2006)
3. Explain in detail the different instruction formats with examples? (Nov / Dec
2013) (April/May 2015)
4. Explain about Instruction & Instruction Sequencing? ( NOV/DEC 2006)
(May/June2012) (Nov/Dec 2012) (May/June 2013) (Nov / Dec 2013)
5. Explain in detail the different Instruction types and Instruction Sequencing.
6. What is the need for addressing modes? Explain any two types of addressing modes
in detail? (Nov / Dec 2013) (April/May 2015) (Nov/Dec 2015)
7. (b) Explain the different types of Addressing modes with suitable examples.
(APRIL/MAY 2008&2012) (Nov/Dec 2012)
8. Registers R1 and R2 of a computer contains the decimal values 1200 and 2400
respectively. What is the effective address of the memory operand in each of the following
instructions?
i. Load 20(R1), R5
ii. Add –(R2) , R5
iii. Move #3000, R5
iv. Sub (R1)+, R5 (MAY/JUNE 2006)

9. What are the special registers in a typical computer? Explain their purposes in detail.
(Apr/May 2010)
10. Explain about RISC & CISC?(May/June 2012)
11. With examples explain the data transfer, logic and Program control
instructions? (Apr/May 2011)
12. Explain the Components of a Computer System? (Nov/Dec 2014) (Nov/Dec 2015)
13. State the CPU Performance Equation and discuss the factors that affect
Performance? (Nov/Dec 2014)

UNIT II

ARITHMETIC OPERATIONS

PART-A

1. Write the overflow conditions for addition and subtraction. (April/May 2015)
Overflow can occur:
Case 1: when adding operands with the same sign (and the result has the opposite sign).
Case 2: when subtracting operands with different signs.

2. Draw the Multiplication hardware diagram

3. List the steps of multiplication algorithm


There are 3 basic steps needed to process each bit of the multiplier. They are:
1. The LSB of the multiplier determines whether the multiplicand is added to the
product register.
2. Shift the multiplicand left by 1 bit, which has the effect of moving the
intermediate products to the left.
3. Shift the multiplier right by 1 bit, which exposes the next bit of the multiplier to be
examined.
These 3 steps are repeated 32 times to obtain the product.
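
A C sketch of these three steps for 32-bit operands, assuming the first version of the hardware
with a 64-bit multiplicand register; the function name and test values are illustrative:

#include <stdio.h>
#include <stdint.h>

static uint64_t shift_add_multiply(uint32_t multiplicand, uint32_t multiplier) {
    uint64_t product = 0;
    uint64_t mcand = multiplicand;        /* 64-bit multiplicand register */
    for (int i = 0; i < 32; i++) {
        if (multiplier & 1)               /* step 1: LSB of the multiplier decides */
            product += mcand;             /*         whether the multiplicand is added */
        mcand <<= 1;                      /* step 2: shift the multiplicand left */
        multiplier >>= 1;                 /* step 3: shift the multiplier right */
    }
    return product;
}

int main(void) {
    printf("%llu\n", (unsigned long long)shift_add_multiply(19, 12));  /* prints 228 */
    return 0;
}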

4. What is fast multiplication?


Faster multiplications are possible by essentially providing one 32-bit adder for each bit of
the multiplier: one input is the multiplicand ANDed with a multiplier bit, and the other is the
output of a prior adder.
5. List the steps of division algorithm?
There are 3 basic steps needed to perform the first division algorithm. They are
1. Test divisor < dividend. i.e., Subtract the divisor register from the remainder (dividend)
register and place the result in the remainder register.
2. Check the MSB bit of the remainder
a. If remainder ≥ 0 (i.e., divisor < dividend), then shift the quotient register to the left,
setting the new rightmost bit to 1.
b. If remainder < 0 (i.e., divisor > dividend), then restore the original value by adding
the divisor to the remainder register (dividend) and placing the sum in the remainder register
and also shift the quotient register to the left, setting the new least significant bit to 0.
3. Shift the divisor to the right by 1-bit position.
These 3 steps are repeated for 32 times to obtain the remainder and quotient result.

6. What is scientific notation and normalization? Give an example


Scientific notation is a notation that renders numbers with a single digit to the left of the
decimal point. A number in floating-point notation that has no leading 0s is called a normalized
number. For example, 1.0 x 10^-9 is in normalized scientific notation, whereas 0.1 x 10^-8 is not.

7. Give the representation of single precision floating point number


Bit positions:   31  |  30 ... 23  |  22 ... 0
Field:            S  |   Exponent  |  Fraction
Width:        1 bit  |     8 bits  |   23 bits

Where,

S – the sign of the floating-point number (1 – negative, 0 – positive)

Exponent – the value of the 8-bit (biased) exponent field
Fraction – the 23-bit fraction (significand) field
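
A C sketch that pulls the three fields out of a single-precision value at the bit positions shown
above; the sample value -6.25 is illustrative:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -6.25f;                          /* -1.1001 x 2^2 in binary */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the 32-bit pattern */

    unsigned sign     = bits >> 31;            /* bit 31 */
    unsigned exponent = (bits >> 23) & 0xFF;   /* bits 30..23, biased by 127 */
    unsigned fraction = bits & 0x7FFFFF;       /* bits 22..0 */

    printf("S=%u  Exponent=%u (unbiased %d)  Fraction=0x%06X\n",
           sign, exponent, (int)exponent - 127, fraction);
    return 0;
}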

8. Give the representation of double precision floating point number (Nov/Dec 2015)
First word:
Bit positions:   31  |  30 ... 20  |  19 ... 0
Field:            S  |   Exponent  |  Fraction
Width:        1 bit  |    11 bits  |   20 bits

Second word (bits 63 ... 32):
Fraction (continued), 32 bits

9. What are the floating point instructions in MIPS?


 Floating-point addition, single (add.s) and addition, double (add.d)
 Floating-point subtraction, single (sub.s) and subtraction, double (sub.d)
 Floating-point multiplication, single (mul.s) and multiplication, double (mul.d)
 Floating-point division, single (div.s) and division, double (div.d)
 Floating-point comparison, single (c.x.s) and comparison, double (c.x.d)
10. What are the steps of floating point addition?
Step 1: The first step shifts the significand of the smaller number to the right until its
corrected exponent matches that of the larger number.
Step 2: Add/Subtract the significands, depending on sign
Step 3: Normalize the sum by adjusting exponent.
Step 4: Check for overflow
Step 5: Rounding off to appropriate number of bits
Step 6: Result may need further normalization; then goto step 3
Step 7: Set the sign, sign is determined by addition of the significand.
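
A C sketch of the align-add-normalize flow for two positive operands. It uses the library
functions frexp and ldexp, whose significand convention is [0.5, 1) rather than the IEEE 1.x form,
so it only illustrates steps 1-3, not the exact hardware:

#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.5, b = 0.4375;       /* 1.000 x 2^-1 and 1.110 x 2^-2 in binary */
    int ea, eb, e;
    double ma = frexp(a, &ea);        /* split into significand and exponent */
    double mb = frexp(b, &eb);

    /* Step 1: shift the smaller number's significand right until exponents match */
    if (ea >= eb) { mb = ldexp(mb, eb - ea); eb = ea; }
    else          { ma = ldexp(ma, ea - eb); ea = eb; }

    double m = ma + mb;               /* Step 2: add the significands */
    m = frexp(m, &e);                 /* Step 3: normalize the sum */
    e += ea;

    printf("%g x 2^%d = %g\n", m, e, ldexp(m, e));   /* 0.9375 x 2^0 = 0.9375 */
    return 0;
}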

11. List the steps of floating point multiplication


Step 1: Calculate the new exponent E3 by adding the two biased exponents E1 = E1r + bias and
E2 = E2r + bias; since the bias is then included twice, it must be subtracted once (E3 = E1 + E2 - bias).
Step 2: Multiply the significands M1 x M2.
Step 3: Normalize the product.
Step 4: Check for overflow/underflow. If the resulting exponent E3 is larger than Emax or
smaller than Emin, overflow/underflow is said to have occurred.
Step 5: Round off to the appropriate number of bits.
Step 6: The result may need further normalization; if so, go to step 3.
Step 7: Set the sign of the product to positive if both operands have the same sign; otherwise
set the sign of the product to negative.

12. Define – Guard Bit and Rounding


It is the first of two extra bits kept on the right during intermediate calculations of floating
point numbers; used to improve the accuracy of rounding.

Rounding is a method to make the intermediate floating point result fit into the floating
point format; the goal is to find the nearest number that can be represented in the format.

13. What is meant by sub-word parallelism? (April/May 2015)


Sub-word parallelism partitions a wide datapath (for example, a 128-bit adder) into several
narrower units (such as sixteen 8-bit or eight 16-bit units) so that a single instruction operates on
many small operands simultaneously. It exploits the fact that much media data is narrow: many
graphics systems originally used 8 bits to represent each of the three primary colors (RGB) plus
8 bits for the location of a pixel, and in addition to graphics, speakers and microphones for
teleconferencing and video games require sound support, where audio samples need more than
8 bits of precision but 16 bits are sufficient. Operating on such narrow data in parallel within one
wide register improves performance for these applications.
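
A C sketch of sub-word parallelism done in software (often called SWAR): a single 32-bit
addition processes four packed 8-bit lanes, with masking so that carries cannot cross lane
boundaries. The function name and sample values are illustrative:

#include <stdio.h>
#include <stdint.h>

/* Add four packed 8-bit values lane by lane (each lane wraps modulo 256). */
static uint32_t add_u8x4(uint32_t a, uint32_t b) {
    uint32_t low = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu);  /* add the low 7 bits of each lane */
    return low ^ ((a ^ b) & 0x80808080u);                  /* patch up each lane's top bit */
}

int main(void) {
    uint32_t x = 0x10FF3040u;                 /* four 8-bit samples: 10, FF, 30, 40 */
    uint32_t y = 0x01020304u;
    printf("%08X\n", add_u8x4(x, y));         /* prints 11013344 */
    return 0;
}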

14. What is meant by Sticky Bits?

The goal of the extra rounding bits is to allow the computer to get the same results as if the
intermediate results were calculated to infinite precision and then rounded. To support this goal and
round to the nearest even, a bit is used in rounding in addition to guard and round that is set
whenever there are nonzero bits to the right of the round bit. This sticky bit allows the computer to
see the difference between 0.50….0010 and 0.50….0110 when rounding.

15. What is meant by Carry-Propagation Delay?


In an n-bit adder, the carry output of each full-adder stage is connected to the carry input of
the next higher-order stage. Therefore, the sum and carry outputs of any stage cannot be produced
until the input carry occurs, this leads to a time delay in the addition process. This delay is known
as carry propagation delay.

16. What is meant by Carry Generate and Carry Propagate?

Carry look-ahead adders use two functions: carry generate and carry propagate.

Gi = Ai · Bi (carry generate)

Pi = Ai ⊕ Bi (carry propagate)

Here Gi is called carry generate and it produces a carry when both Ai and Bi are one,
regardless of the input carry. Pi is called carry propagate because it is the term associated with the
propagation of the carry from Ci to Ci+1. Ci+1 can then be expressed as a sum-of-products function
of the P and G outputs of all the preceding stages.
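
A C sketch of 4-bit carry look-ahead using the Gi and Pi definitions above, where every carry
is produced directly from P, G and C0 instead of rippling; the operand values are illustrative:

#include <stdio.h>

int main(void) {
    unsigned A = 0xB, B = 0x6, C0 = 0;        /* 1011 + 0110, carry-in 0 */
    unsigned G[4], P[4], C[5];
    C[0] = C0;
    for (int i = 0; i < 4; i++) {
        unsigned a = (A >> i) & 1, b = (B >> i) & 1;
        G[i] = a & b;                         /* carry generate:  Gi = Ai AND Bi */
        P[i] = a ^ b;                         /* carry propagate: Pi = Ai XOR Bi */
    }
    C[1] = G[0] | (P[0] & C[0]);
    C[2] = G[1] | (P[1] & G[0]) | (P[1] & P[0] & C[0]);
    C[3] = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & C[0]);
    C[4] = G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1])
                | (P[3] & P[2] & P[1] & G[0]) | (P[3] & P[2] & P[1] & P[0] & C[0]);

    unsigned sum = 0;
    for (int i = 0; i < 4; i++) sum |= (P[i] ^ C[i]) << i;     /* Si = Pi XOR Ci */
    printf("sum = 0x%X, carry out = %u\n", sum, C[4]);         /* 0xB + 0x6 -> sum 0x1, carry 1 */
    return 0;
}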

17. State Booth’s Recoded Scheme?


“-1 times the shifted multiplicand is selected when moving from 0 to 1, +1 times the shifted
multiplicand is selected when moving from 1 to 0, and 0 times the shifted multiplicand is selected
for none of the above case, as the multiplier is scanned from right to left”.

18. Recode the multiplier 0 1 1 0 0 1 for Booth’s multiplication

Multiplier = 0 1 1 0 0 1 (0) ← implied zero appended to the right of the LSB

Recoded Multiplier = +1 0 -1 0 +1 -1

19. Multiply 010011 x 001100 using Booth’s multiplication.

Multiplicand = 010011
Multiplier = 001100

Recoded Multiplier = 0 +1 0 -1 0 0

Booth's multiplication:

                       0  1  0  0  1  1      (multiplicand = 010011)
                       0 +1  0 -1  0  0      (recoded multiplier)
   -------------------------------------------------
       0  0  0  0  0  0  0  0  0  0  0  0    (0  x multiplicand)
          0  0  0  0  0  0  0  0  0  0  0    (0  x multiplicand)
             1  1  1  1  1  0  1  1  0  1    (-1 x multiplicand: 2's complement, sign-extended)
                0  0  0  0  0  0  0  0  0    (0  x multiplicand)
                   0  0  0  1  0  0  1  1    (+1 x multiplicand, sign-extended)
                      0  0  0  0  0  0  0    (0  x multiplicand)
   -------------------------------------------------
       0  0  0  0  1  1  1  0  0  1  0  0    = 228 (the product)
   -------------------------------------------------
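
A C sketch of Booth's recoding rule from question 17, applied bit by bit and checked against
the worked example above (19 x 12 = 228); the function name is illustrative:

#include <stdio.h>
#include <stdint.h>

static int64_t booth_multiply(int32_t multiplicand, int32_t multiplier, int bits) {
    int64_t product = 0;
    int prev = 0;                                   /* implied zero to the right of the LSB */
    for (int i = 0; i < bits; i++) {
        int cur = (multiplier >> i) & 1;
        if (cur == 0 && prev == 1)                  /* moving from 1 to 0: +1 x shifted multiplicand */
            product += (int64_t)multiplicand << i;
        else if (cur == 1 && prev == 0)             /* moving from 0 to 1: -1 x shifted multiplicand */
            product -= (int64_t)multiplicand << i;
        prev = cur;                                 /* 0->0 and 1->1 select 0 x multiplicand */
    }
    return product;
}

int main(void) {
    printf("%lld\n", (long long)booth_multiply(19, 12, 6));   /* 010011 x 001100 = 228 */
    return 0;
}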
20. State the Bit Pair Recoding Table?

Multiplier bit-pair (i+1, i)   Bit on the right (i-1)   Multiplicand selected
        0 0                            0                        0 x M
        0 0                            1                       +1 x M
        0 1                            0                       +1 x M
        0 1                            1                       +2 x M
        1 0                            0                       -2 x M
        1 0                            1                       -1 x M
        1 1                            0                       -1 x M
        1 1                            1                        0 x M

21. Find the bit-pair code for the multiplier 11010.

Multiplier (sign-extended on the left, with an implied zero to the right of the LSB):

    (1) 1    1 0    1 0   (0)
    -----    ---    ---
      0      -1     -2

Recoded Multiplier: 0 -1 -2

Therefore, the bit-pair recoded multiplier of 11010 = 0 -1 -2

22. Multiply the given signed 2's complement numbers using the bit-pair recoding technique.

A = 110101 → Multiplicand (-11)

B = 011011 → Multiplier (+27)
23. What is meant by CSA Technique?
The designer can build a circuit that is much faster than five sequential add times by using
carry-save adders (CSA), because it is easy to pipeline such a design so that it can support many
multiplies simultaneously. Thus carry-save addition reduces the time needed to add the
summands.

24. Compare Restoring and Non-Restoring Division Technique?

Sl. No.  Restoring Division                              Non-Restoring Division

1        Needs restoring of register A if the result     Does not need restoring.
         of the subtraction is negative.

2        In each cycle, the content of register A is     In each cycle, the content of register A is
         first shifted left and then the divisor is      first shifted left and then the divisor is
         subtracted from it.                             added or subtracted, depending on the sign
                                                         of A.

3        Does not need restoring of the remainder.       Needs restoring of the remainder if the
                                                         remainder is negative.

4        Slower algorithm.                               Faster algorithm.

25. Write short note on SRT Division Technique?


There are some techniques which produces more than one bit of the quotient per step. The
SRT division technique tries to guess several quotient bits per step, using a table lookup based on
the upper bits of the dividend and remainder. It relies on subsequent steps to correct wrong guesses.
A typical value today is 4 bits. The key is guessing the value to subtract. With binary division, there
is only a single choice. These algorithms use 6 bits from the remainder and 4 bits from the divisor
to index a table that determines the guess for each step. The accuracy of this fast method depends
on having proper values in the lookup table.

26. State the Advantages of Floating Point Representations?


 It simplifies exchange of data that includes floating-point numbers.
 It simplifies the floating-point arithmetic algorithms to know that numbers will always be in
this form.
 It increases the accuracy of the numbers that can be stored in a word, since the unnecessary
leading 0s are replaced by real digits to the right of the binary point.

27. When do data hazards arise?

Data hazards arise when an instruction depends on the results of a previous instruction in a way that
is exposed by the overlapping of instructions in the pipeline.

28. What are the 2 ways to detect overflow in an n-bit adder?

Overflow can occur only when the signs of the two operands are the same; it is detected when the
sign of the result differs from that of the operands. Equivalently, overflow occurs when the carry
into the sign bit (Cn-1) and the carry out of it (Cn) are different.
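
A C sketch of both checks on a pair of operands whose sum overflows 32 bits; the addition is
done on unsigned copies to keep the example well defined:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t a = 2000000000, b = 2000000000;        /* sum does not fit in 32 bits */
    uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
    int32_t s = (int32_t)(ua + ub);                /* the 32-bit result register */

    /* Check 1: both operands have the same sign but the sum's sign differs */
    int ovf_sign = ((a ^ s) & (b ^ s)) < 0;

    /* Check 2: carry into the sign bit (Cn-1) differs from carry out of it (Cn) */
    unsigned c_in  = ((ua & 0x7FFFFFFFu) + (ub & 0x7FFFFFFFu)) >> 31;
    unsigned c_out = (unsigned)(((uint64_t)ua + ub) >> 32);
    int ovf_carry = (c_in != c_out);

    printf("overflow by sign rule: %d, by carry rule: %d\n", ovf_sign, ovf_carry);  /* 1, 1 */
    return 0;
}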
29. What is the delay encountered for Cn-1, Sn-1 and Cn in an n-bit ripple-carry adder built
from full adders (FA)?

Cn-1 – 2(n-1) gate delays

Sn-1 – 2(n-1)+1 gate delays

Cn – 2n gate delays

30. What is the delay encountered for all the sum bits in n-bit binary addition/subtraction
logic unit?

The gate delays with and without overflow logic are 2n+2 and 2n respectively

31. Write down the basic generate and propagate functions for stage i

Gi = XiYi, Pi = Xi ⊕ Yi

32. Write down the general expression for Ci+1 using first level generate and propagate
function

Ci+1 = Gi + PiGi-1 + PiPi-1Gi-2 + … + PiPi-1…P1G0 + PiPi-1…P0C0

50. What are the two approaches to reduce delay in adders

 Fastest electronic technology in implementing the ripple carry logic design


 Augmented logic gate network

33. What is the delay encountered in the path in an n x n array multiplier

The delay encountered in the path in an n x n array multiplier is 6(n-1)-1

34. What is skipping over of one’s in Booth decoding?

The transformation 011…110 = +100…0 − 0…010 is called skipping over 1s. In this case the
multiplier has its ones grouped into a few contiguous blocks.

35. What are the two attractive features of Booth algorithm

 It handles both positive and negative multipliers uniformly


 It achieves some efficiency in the number of additions required when the multiplier has a
few large blocks of ones

36. Give an example for the worst case of Booth algorithm

In the worst case each bit of the multiplier selects the summands. This results in more
number of summands.
37. What are the two techniques for speeding up the multiplication operation?

 Bit Pair recoding


 CSA

38. How bit pair recoding of multiplier speeds up the multiplication process?

It guarantees that the maximum number of summands that must be added is n/2 for n-bit
operands.

39. How CSA speeds up multiplication?

It reduces the time needed to add the summands. Instead of letting the carries ripple along
the rows, they can be saved and introduced into the next row at the correct weighted position.

40. Write down the levels of CSA steps needed to reduce k summands to two

vectors in CSA

The number of levels needed is given by 1.7 log2 k − 1.7.

41. Write down the steps for restoring division and non-restoring division

Non-restoring division:
Step 1: Do the following n times:
   1. If the sign of A is 0, shift A and Q left one bit position and subtract M from A;
      otherwise, shift A and Q left and add M to A.
   2. Now, if the sign of A is 0, set Q0 to 1; otherwise set Q0 to 0.
Step 2: If the sign of A is 1, add M to A.

Restoring division (repeat n times):
   1. Shift A and Q left one binary position.
   2. Subtract M from A.
   3. If the sign of A is 1, set Q0 to 0 and add M back to A; otherwise set Q0 to 1.
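
A C sketch of the restoring steps above, applied to the Part-B exercise 10101 / 00101
(21 / 5); the register widths and function name are illustrative:

#include <stdio.h>
#include <stdint.h>

static void restoring_divide(uint32_t dividend, uint32_t divisor, int n,
                             uint32_t *quotient, uint32_t *remainder) {
    int64_t  A = 0;                    /* remainder register A */
    uint32_t Q = dividend;             /* quotient register Q, initially the dividend */
    int64_t  M = divisor;              /* divisor register M */

    for (int i = 0; i < n; i++) {
        A = (A << 1) | ((Q >> (n - 1)) & 1);   /* shift A and Q left one position */
        Q <<= 1;
        A -= M;                                /* subtract M from A */
        if (A < 0)                             /* sign of A is 1: restore A, set Q0 = 0 */
            A += M;
        else                                   /* otherwise set Q0 = 1 */
            Q |= 1;
    }
    *quotient  = Q & ((1u << n) - 1);          /* keep only the low n quotient bits */
    *remainder = (uint32_t)A;
}

int main(void) {
    uint32_t q, r;
    restoring_divide(21, 5, 5, &q, &r);                  /* 10101 / 00101 */
    printf("quotient = %u, remainder = %u\n", q, r);     /* 4 and 1 */
    return 0;
}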

42. What is the advantage of non-restoring over restoring division?

Non-restoring division avoids the need for restoring the contents of register A after an
unsuccessful subtraction.

43. What is the need for adding a bias value to the true exponent in floating
point numbers? (May/June 2013)

This solves the problem of negative exponents. Due to this, the magnitudes of the numbers can
be compared directly. The excess-x (biased) representation for exponents enables efficient comparison
of the relative sizes of two floating point numbers.

44. Briefly explain the floating point representation with an example? (Nov/Dec 2012)

The floating point representation has 3 fields:
1. Sign bit
2. Significant (mantissa) bits
3. Exponent

For example, consider 1.11101100110 x 2^5:

Mantissa = 11101100110
Sign = 0
Exponent = 5

45.What are the 2 IEEE standards for floating point numbers?

1. Single-precision format
2. Double-precision format

46. What is overflow, underflow case in single precision(sp)?

Underflow – In SP, it means that the normalized representation requires an exponent
less than -126.

Overflow – In SP, it means that the normalized representation requires an exponent
greater than +127.

47.What are the exceptions encountered for FP operation?

The exceptions encountered for FP operations are overflow, underflow, divide by zero, inexact,
and invalid values.

48. What is a guard bit?

Guard bits are extra bits which are produced during the intermediate steps to yield
maximum accuracy in the final results.

49. What are the ways to truncate guard bits?

1. Chopping
2. Von Neumann rounding
3. Rounding procedure

50. What is register indirect addressing mode? When it is used? (Nov / Dec 2013)

(Effective PC address = contents of register 'reg')

The effective address for a Register indirect instruction is the address in the specified
register. For example, (A7) to access the content of address register A7.

The effect is to transfer control to the instruction whose address is in the specified register.
PART – B

1. Design a 4-bit Carry-Look ahead Adder and explain its operation


with an example. (APRIL/MAY 2008) & (NOV/DEC 2007).
2. What are the disadvantages in using a ripple carry adder? ( NOV/DEC 2006)
3. Design a binary multiplier using sequential adder. Explain its operation. (NOV/DEC
2007)
4. Draw the diagram of a carry look a head adder and explain the carry look ahead
principle. ( NOV/DEC 2006) (Nov/Dec 2014)
5. Design a 4-bit binary adder/ subtractor and explain its functions. (APRIL/MAY 2008)
6. Give the algorithm for multiplication of signed 2’s complement numbers and
illustrate with an example. (APRIL/MAY 2008) (May/June 2013) (Nov/Dec 2015)
7. Write the algorithm for division of floating point numbers and illustrate with an example.
(APRIL/MAY 2008) (Nov/Dec 2015)
8. Write about the CSA method of fast multiplication. Prove how it is faster with
an example. (NOV/DEC 2007) & (MAY/JUNE 2006)
9. Draw the circuit for integer division and explain. (NOV/DEC 2007)
10. Perform the division on the following 5-bit unsigned integer using non-restoring division:
10101 / 00101. (MAY/JUNE 2006)
11. Explain the working of a floating point adder/ subtractor. With a detailed flow chart
explain how floating point additional/ subtraction is performed. (MAY/JUNE 2006) &
( NOV/DEC 2006) & (APRIL/MAY 2009)(May/June 2012)
12. Multiply the following pair of signed 2’s complements numbers using bit-pair-recoding of
the multipliers: A= 010111, B=101100. (MAY/JUNE 2006)
13. Give the IEEE standard double precision floating point format. ( NOV/DEC 2006)
14. Explain the representation of floating point numbers in detail. (MAY/JUNE 2007)
15. Design a multiplier that multiplies two 4-bit numbers. (MAY/JUNE 2007)
16. Explain the algorithm for integer division with suitable example. (MAY/JUNE 2007)
17. Give the block diagram of the hardware implementation of addition and subtraction
of signed number and explain the operations with flowchart. (MAY/JUNE 2007)
(April/May 2015)
18. Explain the Design of ALU in detail? (May/June 2013) (Nov / Dec 2013)
19. Multiply the following pair of signed nos. using Booth’s bit-pair recoding of the
multiplier A= +13 (Multiplicand) and B= -6 (Multiplier) (Nov/Dec 2014)
20. Divide 12 by 3 using the Restoring and Non-Restoring division algorithm with step by
step intermediate results and explain? (Nov/Dec 2014)
21. Explain the sequential version of multiplication algorithm and its hardware? (April/May
2015)

UNIT III

PIPELINING
PART-A

1. List the characteristics of a MIPS Processor?


Load/Store architecture
General purpose register machine (32Registers)
ALU operations have 3 register operands (2 source + 1 destination)
16-bit constants for immediate mode
Simple instruction set
Uniform encoding
Designed for pipelining efficiency, including a fixed instruction set encoding

2. List the Registers used in MIPS?


 MIPS64 (for 64-bit implementations) has 32 64-bit General Purpose Registers (GPRs) named
R0, R1, R2, …, R31. GPRs are also sometimes known as integer registers.
 Additionally there is a set of 32 floating point registers (FPR) named F0, F1, F2, ….., F31,
which can hold 32-single precision (32-bit) values or 32–double precision (64-bit) values.
 Both single and double-precision floating-point operations (32-bit and 64-bit) are provided.
 MIPS also include instructions that operate on two single-precision operands in a single 64-
bit floating-point register.

3. List the Different Addressing Modes of MIPS?


 Immediate Addressing
 Register Addressing
 Base Addressing or Displacement Addressing
 PC – Relative Addressing
 Pseudo Direct Addressing

4. List the 5 stages of MIPS Processor?


Step 1 – Fetch the Instruction
Step 2 – Instruction Decode and Read Registers
Step 3 – ALU operation, Branch Address Computation
Step 4 – LW / STORE in data memory
Step 5 – Register write
5. What is meant by State Elements and Combinational Elements?

Combinational Elements:
The elements that operate on data values are all combinational, which means that their
outputs depend only on the current inputs. Given the same input, a combinational element
always produces the same output. ALU is an example for a combinational element, since, given
a set of inputs, it always produces the same output because it has no internal storage.

State Elements:
An element contains state if it has some internal storage. We call these elements state
elements because, if we pulled the power plug on the computer, we could restart it by loading
the state elements with the values they contained before we pulled the plug. Furthermore, if we
saved and restored the state elements, it would be as if the computer had never lost power. Thus,
these state elements completely characterize the computer. The instruction and data memories,
as well as the registers, are all examples for state elements.

6. What is meant by Edge-Triggered Clocking Methodology?


An edge-triggered clocking methodology means that any values stored in a sequential logic
element are updated only on a clock edge. Because only state elements can store a data value,
any collection of combinational logic must have its inputs come from a set of state elements and
its outputs written into a set of state elements. The inputs are values that were written in a
previous clock-cycle, while the outputs are values that can be used in a following clock-cycle.

7. What is meant by data path element?


It is a unit which is used to operate on or hold data within a processor. In the MIPS
implementation, the datapath elements include the instruction and data memories, the register
file, the ALU, and adders.

8. What is meant by Instruction Memory?


An instruction memory unit which is used to store the instructions of a program and supply
instructions given an address. It is a combinational element because it will perform only read
operation and the output at any time reflects the contents of the location specified by the input
address.

9. What is the use of PC register?


Program counter (PC) is a register containing the address of the instruction in the program
being executed. The PC is a 32-bit register that is written at the end of every clock cycle and
thus it does not need a write control signal.

10. What is meant by register file?


The processor's 32 general-purpose registers are stored in a structure called a register file.
A register file is a collection of registers in which any register can be read or written by
specifying the number of the register in the file. The register file contains the register state of
the computer. In addition, we will need an ALU to operate on the values read from the registers.
11. Draw the diagram of portion of data path used for fetching instruction.

12. Define – Sign Extension Unit?


We need a unit to sign-extend the 16-bit offset field in the instruction to a 32-bit
signed value. Sign extension increases the size of a data item by replicating the high-order sign bit
of the original data item in the high-order bits of the larger, destination data item.

13. What is meant by branch target address?


It is the address specified in a branch, which becomes the new program counter (PC) if the
branch is taken. In the MIPS architecture the branch target is given by the sum of the offset field
of the instruction and the address of the instruction following the branch.

14. Define Pipelining?


Pipelining is an implementation technique in which multiple instructions are overlapped in
execution. It is an important technique used to make fast CPUs which takes much less time to
complete the execution when compared to sequential execution technique.
A substantial improvement in performance can be achieved by overlapping the execution of
successive instructions using a technique called pipelining. Pipelining is an effective way of
organizing concurrent activity in a computer system.

15. What is meant by Pipeline Performance?


The potential increase in performance resulting from pipelining is proportional to the
number of pipeline stages. If all the stages take about the same amount of time and there is
enough work to do, then the speed-up due to pipelining is equal to the number of stages in the
pipeline.
16. Define Hazard? List its Types? (Nov/Dec 2015)
There are situations in pipelining when the next instruction cannot execute in the following
clock cycle. These events are called hazards.

For a variety of reasons, one of the pipeline stages may not be able to complete the
processing task for a given instruction in the time allotted. Any condition that causes the pipeline to
stall (delay) is called a hazard.
Types of Hazards
There are three types of hazards. They are
 Structural Hazards
 Data Hazards
 Control Hazards (or) Instruction Hazards

17. Define Structural Hazard?


When a processor is pipelined, the overlapped execution of instructions requires pipelining
of functional units and duplication of resources to allow all possible combinations of
instructions in the pipeline. If some combinations of instructions cannot be accommodated
because of resource conflicts, the processor is said to have a structural hazard.

18. What is data hazard?


When a planned instruction cannot execute in the proper clock cycle because data that is
needed to execute the instruction is not yet available.
A data hazard is any condition in which either the source or the destination operands of an
instruction are not available at the time expected in the pipeline. As a result, some operation has
to be delayed and the pipeline stalls.

19. What is control Hazard?


When the proper instruction cannot execute in the proper pipeline clock cycle because the
instruction that was fetched is not the one that is needed; that is, the flow of instruction
addresses is not what the pipeline expected.
The pipeline may also be stalled because of a delay in the availability of an instruction.
Control hazard arises because of the need to make a decision based on the results of one
instruction while others are executing.

20. What is meant by branch prediction? List its types? (Nov/Dec 2015)
Branch prediction is a technique for reducing the branch penalty associated with conditional
branches by attempting to predict whether or not a particular branch will be taken. There are
two forms of branch prediction techniques. They are
Static Branch Prediction
Dynamic Branch Prediction

21. What is meant by delayed branch?


In a five-stage pipeline we can make the control hazard a feature by redefining the branch. A
delayed branch always executes the following instruction, but the second instruction following
the branch will be affected by the branch.
The location following a branch instruction is called a branch delay slot. Compilers and
assemblers try to place an instruction that always executes after the branch in the branch delay
slot. The job of the software is to make the successor instructions valid and useful. The delay slot
can be filled with an instruction taken from before the branch, from the branch target, or from the
fall-through (not-taken) path.

22. Define static branch prediction?


The simplest form of branch prediction is to assume that the branch will not take place and
to continue to fetch instructions in sequential address order. Until the branch condition is
evaluated, instruction execution along the predicted path must be done on a speculative basis.
23. What is meant by dynamic branch prediction?
With more hardware it is possible to try to predict branch behavior during program
execution. One approach is to look up the address of the instruction to see if a branch was taken
the last time this instruction was executed, and if so, to begin fetching new instructions from the
same place as the last time. This technique is called dynamic branch prediction.

24. What is meant by forwarding?


To support forwarding, we must pass the operand register numbers from the ID stage via the ID/EX
pipeline register so that the forwarding unit can determine whether to forward values. We already
have the rt field; before forwarding, the ID/EX register had no need to include space to hold the rs
field, so rs is added to ID/EX.
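
A C sketch of the usual EX-hazard forwarding conditions built from these pipeline-register
fields; the struct and field names are illustrative, not the textbook's exact signal names:

#include <stdio.h>

struct pipe_regs {
    int ex_mem_reg_write, ex_mem_rd;   /* destination of the instruction now in MEM */
    int mem_wb_reg_write, mem_wb_rd;   /* destination of the instruction now in WB  */
    int id_ex_rs, id_ex_rt;            /* sources of the instruction now entering EX */
};

/* Mux select for ALU input A: 0 = register file, 2 = forward from EX/MEM,
   1 = forward from MEM/WB. EX/MEM is tested first because it holds the newer value. */
static int forward_a(const struct pipe_regs *p) {
    if (p->ex_mem_reg_write && p->ex_mem_rd != 0 && p->ex_mem_rd == p->id_ex_rs)
        return 2;
    if (p->mem_wb_reg_write && p->mem_wb_rd != 0 && p->mem_wb_rd == p->id_ex_rs)
        return 1;
    return 0;
}

int main(void) {
    /* add $2,$1,$3 followed immediately by sub $4,$2,$5: register $2 must be forwarded */
    struct pipe_regs p = { 1, 2, 0, 0, 2, 5 };
    printf("ForwardA = %d\n", forward_a(&p));   /* prints 2: take ALU input A from EX/MEM */
    return 0;
}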

25. What are the 5 pipeline stages?


IF: Instruction Fetch
ID: Instruction decode and register file read
EX: Execution or Address Calculation
MEM: Data Memory Access
WB: Write Back

26. What are exceptions and interrupts?


In MIPS convention, we use the term exception to refer to any unexpected change in control
flow without distinguishing whether the cause is internal or external; we use the term interrupt
only when the event is externally caused.

PART B

1. Explain the basic MIPS implementation of instruction set?


2. Explain the basic MIPS implementation with necessary multiplexers and control lines
(Nov/Dec 2015)
3. What is a control hazard? Explain the methods for dealing with the control hazards.
4. Discuss the influence of pipelining in detail
5. What is Hazard? Explain its types with suitable examples? (Nov/Dec 2014)
6. Explain how the instruction pipeline works. What are the various situations where
an instruction pipeline can stall? What can be its resolution? (Nov/Dec 2015)
7. What is data hazard? How do you overcome it? What are its side effects?
8. Discuss the data and control path methods in pipelining (Nov/Dec 2014)
9. Explain dynamic branch prediction
10. How exceptions are handled in MIPS
11. Explain in detail about building a data path
12. Explain in detail about control implementation scheme
13. Design a 4-stage instruction pipeline and show how its performance is improved
over sequential execution. (NOV/DEC 2007) & ( NOV/DEC 2006) (May/June 2013)
14. Explain the function of a six segment pipelines and draw a space diagram for a six
segment pipeline showing the time it takes to process eight tasks. (MAY/JUNE 2007)
15. Explain the performance of the instruction pipeline can be improved. (MAY/JUNE
2007) (Nov/Dec 2012)
16. Explain about Data Hazards with its representation.(APRIL/MAY 2008) (May/June 2012)
(Nov/Dec 2012)
17. Explain about instruction Hazards. (May/June 2012)
18. Highlight the solutions of instruction Hazards. (NOV/DEC 2007)
19. What are branch hazards? Describe the methods for dealing with the branch hazards.
(May/June 2013)
20. Explain how pipelining helps to speed-up the processor. Discuss the hazards that have
to be taken care of in a pipelined processor. (MAY/JUNE 2006)
21. Explain about Data path & Control Consideration. (May/June 2012) (Nov / Dec 2013)
22. Explain about Superscalar Operation.
23. What are the hazards of conditional branches in pipelines? How it can be
resolved? (Apr/May 2011)
24. Describe the role of cache memory in pipelined system. (Apr/May 2010)
25. Discuss the influence of pipelining on instruction set design. (Apr/May 2010)
26. Describe the procedure to fetch a word from the memory and store a word into
the memory (APRIL/MAY 2009)
27. Explain Dynamic Branch Prediction technique in detail? (May/June 2013)
28. Briefly explain the speedup performance models for pipelining? (Nov / Dec 2013)
29. What is instruction hazard? Explain in detail how to handle the instruction hazards
in pipelining with relevant examples? (Nov / Dec 2013)
30. Write short notes on exception handling? (Nov / Dec 2013)
31. Explain the different types of pipeline hazards with suitable examples? (April/May 2015)
32. Explain in detail how exceptions are handled in MIPS architecture? (April/May 2015)
UNIT - IV

PARALLELISM

PART-A

1. What is Flynn’s Classification? (Nov/Dec 2014)


In 1966, Michael J. Flynn made an informal and widely used classification of processor
parallelism based on the number of simultaneous instruction and data streams seen by the processor
during program execution.
The classification made by Michael J. Flynn divides computers into four major groups:

 Single Instruction Stream – Single Data Stream (SISD)


 Single Instruction Stream – Multiple Data Stream (SIMD)
 Multiple Instruction Stream – Single Data Stream (MISD)
 Multiple Instruction Stream – Multiple Data Stream (MIMD)

2. Brief about Multithreading? (Nov/Dec 2014)


Multithreading is a higher-level parallelism called thread-level parallelism (TLP) because it
is logically structured as separate threads of execution.
In multithreading, the instruction stream is divided into several smaller streams, called threads, such
that the threads can be executed in parallel. Here, a high degree of instruction-level parallelism can
be achieved without increasing the circuit complexity or power consumption.

3. What is meant by ILP? (Nov/Dec 2015)

ILP is a measure of how many operations in a computer program can be performed


simultaneously. The potential overlap among instructions is called instruction level parallelism. It is
a technique which is used to overlap the execution of instructions to improve performance.
Pipelining is a technique that runs programs faster by overlapping the execution of instructions.
Pipelining is an example of instruction level parallelism.

4. What is multiple issue? Write any two approaches.


Multiple issue is a technique which replicates the internal components of the computer so
that it can launch multiple instructions in every pipeline stage. Launching multiple instructions per
stage will allow the instruction execution rate to exceed the clock rate or the CPI to be less than 1.
Types of Multiple issues
There are two major ways to implement a multiple issue processor such as,

 Static multiple Issues – It is an approach to implement a multiple issue processor where


many decisions are made by the compiler before execution.

 Dynamic Multiple Issues – It is an approach to implement a multiple issue processor


where many decisions are made during execution by the processor.

5. What is meant by speculation?


Speculation is an approach whereby the compiler or processor guesses the outcome of an
instruction to remove it as dependence in executing other instructions.
6. Define – Static Multiple Issue
It is an approach to implement a multiple issue processor where many decisions are made by
the compiler before execution.

7. Define – Issue Slots and Issue Packet


In a static issue processor, the set of instructions issued together in a given clock cycle is called an
issue packet; the packet may be determined statically by the compiler or dynamically by the
processor. Issue slots are the positions from which instructions could issue in a given clock cycle.

8. Define – VLIW
VLIW is a style of instruction set architecture that launches many operations that are
defined to be independent in a single wide instruction, typically with many separate opcode fields.

9. Define – Superscalar Processor (Nov/Dec 2015)


Superscalar is an advanced pipelining technique that enables the processor to execute more
than one instruction per clock cycle by selecting them during execution.

10. What is meant by loop unrolling?


An important compiler technique to get more performance from loops is loop unrolling,
where multiple copies of the loop body are made. After unrolling, there is more ILP available by
overlapping the instructions from different iterations.
Loop unrolling is a technique to get more performance from loops that access arrays, in
which multiple copies of the loop body are made and instructions from different iterations are
scheduled together.

11. What is meant by anti-dependence? How is it removed?


It is an ordering forced by the reuse of a name, typically a register, rather than by a true
dependence that carries a value between two instructions. It is removed by register renaming,
performed either by the compiler or by the hardware.

12. Differentiate in-order execution from out-of-order execution.


In-order commit is a commit in which the results of pipelined execution are written to the
programmer-visible state in the same order that instructions are fetched.
Out-of-order execution is a situation in pipelined execution when an instruction blocked from
executing does not cause the following instructions to wait.

13. What is meant by hardware multithreading?


Hardware multithreading allows multiple threads to share the functional units of a single
processor in an overlapping fashion. To permit this sharing, the processor must duplicate the
independent state of each thread. For example, each thread would have a separate copy of the
register file and the PC. The memory itself can be shared through the virtual memory mechanisms,
which already support multiprogramming.

14. What are the two main approaches to hardware multithreading?


There are two main approaches to hardware multithreading. They are
 Fine-grained multithreading
 Coarse-grained multithreading

15. What is SMT?


Simultaneous multithreading is a version of multithreading that lowers the cost of
multithreading by utilizing the resources needed for multiple issue, dynamically scheduled micro-
architecture. The wide superscalar instruction is executed by executing multiple threads
simultaneously using multiple execution units of a superscalar processor.

16. Compare Vector Architecture and Multimedia Extensions?

Sl. No.  Vector Architecture                              Multimedia Extensions

1        It specifies dozens of operations.               It specifies a few operations.

2        The number of elements in a vector operation     The number of elements in a multimedia
         is not in the opcode.                            extension operation is in the opcode.

3        In vectors, data transfers need not be           In multimedia extensions, data transfers
         contiguous.                                      need to be contiguous.

4        It specifies multiple operations.                It also specifies multiple operations.

5        It easily captures the flexibility in            It also easily captures the flexibility
         data widths.                                     in data widths.

6        It is easier to evolve over time.                It is complex to evolve over time.

17. What are the three multithreading options?


 A superscalar with coarse-grained multithreading
 A superscalar with fine-grained multithreading
 A superscalar with simultaneous multithreading

18. Define – SMP


SMP is a parallel processor with a single address space, implying implicit communication
with loads and stores. It offers the programmer a single address space across all processors,
although a more accurate term would have been shared-address multiprocessor. Such systems can
still run independent jobs in their own virtual address space, even if they all share a physical
address space. Processors communicate through shared variables in memory, with all processors
capable of accessing any memory location via loads and stores.

19. Differentiate UMA from NUMA. (April/May 2015)


A uniform memory access (UMA) multiprocessor takes about the same time to access main
memory no matter which processor requests it and no matter which word is requested.
In a non-uniform memory access (NUMA) multiprocessor, some memory accesses are much faster
than others, depending on which processor asks for which word.
The programming challenges are harder for a NUMA multiprocessor than for a UMA
multiprocessor, but NUMA machines can scale to larger sizes and NUMAs can have lower latency
to nearby memory.

20. What is a multicore microprocessor?


A multicore design takes several processor cores and packages them as a single processor.
The goal is to enable the system to run more tasks simultaneously and thereby achieve greater
overall system performance.
21. Define a cluster
A cluster is a collection of independent computers (uniprocessors or SMPs) interconnected to
form a single system. Communication among the computers is either via fixed paths or via some
network facility.

22. Define Multiprocessors?


A computer system with at least two or more processors is called multiprocessor system.
The multiprocessor software must be designed to work with a variable number of processors.
Replacing large, inefficient processors with many smaller, efficient processors can deliver better
performance per watt, if software can efficiently use them.

23. What are the advantages of increasing the depth of pipeline?


By increasing the depth of the pipeline, more instructions can be executed in parallel
simultaneously. The amount of parallelism being exploited is higher, since there are more
operations being overlapped. Performance is potentially greater since the clock cycle can be shorter.

24. What is meant by Use-Latency?


It is defined as the number of clock cycles between a load instruction and an instruction that
can use the result of the load without stalling the pipeline.

25. What is meant by reordering the instructions? Give example?


In a static two-issue processor, the compiler attempts to reorder instructions to avoid stalling
the pipeline when branches or data dependencies between successive instructions occur. In doing
so, the compiler must ensure that reordering does not cause a change in the outcome of a
computation. The objective is to place useful instructions in these slots. If no useful instructions can
be placed in the slots, then these slots must be filled with 'nop' instructions. The dependency
introduced by the condition-code flags reduces the flexibility available for the compiler to reorder
instructions.

26. What is meant by Loop Unrolling? Give example?


Loop unrolling is a technique to get more performance from loops that access arrays, in
which multiple copies of the loop body are made and instructions from different iterations are
scheduled together.

Example

Loop: lw   $t0, 0($s1)        # load x[i] into $t0
      addu $t0, $t0, $s2      # add the scalar value held in $s2
      sw   $t0, 0($s1)        # store the result back into x[i]
      addi $s1, $s1, -4       # point to the previous word of the array
      bne  $s1, $zero, Loop   # repeat until the pointer reaches 0
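
A C-level sketch of the same idea: unrolling the body of the loop above (x[i] = x[i] + s) four
times exposes independent copies that can be scheduled together; the array length and values are
illustrative and assumed to be a multiple of four:

#include <stdio.h>

#define N 8

static void add_scalar_unrolled(int *x, int s) {
    for (int i = 0; i < N; i += 4) {
        x[i]     += s;      /* four independent copies of the loop body;      */
        x[i + 1] += s;      /* with register renaming each copy can use its   */
        x[i + 2] += s;      /* own temporary register, so the copies can be   */
        x[i + 3] += s;      /* scheduled or issued together                   */
    }
}

int main(void) {
    int x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    add_scalar_unrolled(x, 10);
    for (int i = 0; i < N; i++) printf("%d ", x[i]);   /* 11 12 13 14 15 16 17 18 */
    printf("\n");
    return 0;
}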
27. What is meant by Register Renaming?
It is the process of renaming the registers by the compiler or hardware to remove
antidependences.

Consider how the unrolled code would look using only $t0. There would be repeated
instances of lw $t0, 0($s1), addu $t0, $t0, $s2 followed by sw $t0, 4($s1), but these sequences,
despite using $t0, are actually completely independent – no data values flow between one pair of
these instructions and the next pair. This is what is called an antidependence or name dependence,
which is an ordering forced purely by the reuse of a name, rather than a real data dependence,
which is also called a true dependence.

28. Write the Advantages of Register Renaming?


Renaming the registers during the unrolling process allows the compiler to move these
independent instructions subsequently so as to better schedule the code. The renaming process
eliminates the name dependences, while preserving the true dependences.

29. What is meant by Reservation Stations?


Reservation station is a buffer within a functional unit that holds the operands and the
operation.

30. What is Reorder Buffer?


The buffer in the commit unit is often called the reorder buffer, is also used to supply
operands, in much the same way as forwarding logic does in a statically scheduled pipeline. Once a
result is committed to the register file, it can be fetched directly from there, just as in a normal
pipeline. It is a buffer which holds the results in a dynamically scheduled processor until it is safe to
store the results to memory or a register.

31. List the Advantages of Dynamic Scheduling?


 Dynamic scheduling is often extended by including hardware-based speculation, especially for
branch outcomes. By predicting the direction of a branch, a dynamically scheduled processor
can continue to fetch and execute instructions along the predicted path.
 Not all stalls are predictable; in particular, cache misses can cause unpredictable stalls.
Dynamic scheduling allows the processor to hide some of those stalls by continuing to execute
instructions while waiting for the stall to end.
 If the processor speculates on branch outcomes using dynamic branch prediction, it cannot
know the exact order of instructions at compile time, since it depends on the predicted and
actual behavior of branches.
 As the pipeline latency and issue width change from one implementation to another, the best
way to compile a code sequence also changes.
 Old code will get much of the benefit of a new implementation without the need for
recompilation.

32. List out the Parallel Processing Challenges?


 Must get better Performance & Efficiency
 Scheduling
 Load Balancing
 Time for Synchronization
 Communication Overhead
 Amdahl's law
33. What is meant by SISD?
A single processor executes a single instruction stream to operate on data stored in a single
memory. Uniprocessors fall into this category. Most conventional machines with one CPU
containing a single arithmetic logic unit (ALU) capable of doing only scalar arithmetic fall into this
category. SISD computers and sequential computers are thus synonymous. In SISD computers,
instructions are executed sequentially but may overlap in their execution stages. They may have
more than one functional unit, but all functional units are controlled by a single control unit.

34. What is meant by SIMD?


SIMD computers exploit data-level parallelism by applying the same operation to multiple
data items in parallel. Each processor has its own data memory, but there is a single
instruction memory and control processor, which fetches and dispatches instructions. For
applications that display significant data-level parallelism, the SIMD approach can be very efficient.
Vector architectures are the largest class of SIMD architectures.

35. State the Advantages & Disadvantages of SIMD?


Advantages of SIMD
 It amortizes the cost of the control unit over dozens of execution units.
 It has reduced instruction bandwidth and program memory.
 It needs only one copy of the code that is being executed simultaneously.
 SIMD works best when dealing with arrays in 'for' loops. Hence, for parallelism to work
in SIMD, there must be a great deal of identically structured data, which is called data-level
parallelism.

Disadvantages of SIMD
 SIMD is at its weakest in case or switch statements, where each execution unit must
perform a different operation on its data, depending on what data it has.
 Execution units with the wrong data are disabled, so that units with proper data may
continue. Such situations essentially run at 1/nth performance, where 'n' is the number of
cases.

36. Write short note on MISD?


Short for multiple instruction, single data, a type of parallel computing architecture classified
under Flynn's taxonomy. Each processor has its own control unit and its own local memory,
making the processors more powerful than those used in SIMD computers. Each processor operates
under the control of an instruction stream issued by its own control unit, so the processors execute
different instructions, but all of them operate on the same single stream of data. The processors
usually operate asynchronously.

37. What is meant by MIMD?


In MIMD, each processor fetches its own instructions and operates on its own data. MIMD
computers exploit thread-level parallelism, since multiple threads operate in parallel. MIMDs
offer flexibility: with the correct hardware and software support, they can function as single-user
machines focusing on high performance for one application, or as multi-programmed
multiprocessors running many tasks simultaneously.

38. Compare Shared Memory and Distributed Memory Architectures?


If the processors share a common memory, then each processor accesses programs and data
stored in the shared memory, and the processors communicate with each other via that memory.
In a distributed-memory architecture, each processor has its own private memory, and the
processors communicate by passing messages over the network that connects them.

39. Define Process?


A process is an instance of a program running on a computer. The process image is the
collection of program data, stack and attributes that define the process.

40. What is meant by Process Switch?


A process switch is an operation that switches control of the processor from one process to
another. It first saves all the process control data, registers, and other information and then replaces
them with the corresponding information for the second process.

41. Define Thread?


A thread is a separate process with its own instructions and data. A thread may represent a
process that is part of a parallel program consisting of multiple processes, or it may represent an
independent program on its own. A thread includes the program counter, stack pointer and its own
area for a stack. It executes sequentially and can be interrupted to transfer control to another thread.

42. What is meant by Thread Switch?


A thread switch is an operation that switches the control from one thread to another within
the same process. This is cheaper than a process switch.

43. Compare Explicit Threads and Implicit Threads?


Implicit Multithreading refers to the concurrent execution of multiple threads extracted from
a single sequential program.

Explicit Multithreading refers to the concurrent execution of instructions from different
explicit threads, either by interleaving instructions from different threads on shared pipelines
or by parallel execution on parallel pipelines.

44. What is meant by Thread Level Parallelism?


Thread-level parallelism is an important alternative to instruction-level parallelism primarily
because it could be more cost-effective to exploit than instruction-level parallelism. There are many
important applications where thread-level parallelism occurs naturally, as it does in many server
applications.

45. What is meant by Fine-grained Multithreading?


Fine-grained multithreading is a version of hardware multithreading that suggests switching
between threads after every instruction. It switches between threads on each instruction, resulting in
interleaved execution of multiple threads.

46. What is meant by Coarse-grained Multithreading?


Coarse-grained multithreading is a version of hardware multithreading that suggests
switching between threads only after significant events, such as a cache miss. It switches threads
only on costly stalls, such as second-level cache misses.

47. What is meant by Simultaneous Multithreading?


Simultaneous multithreading is a version of multithreading that lowers the cost of
multithreading by utilizing the resources of a multiple-issue, dynamically scheduled micro-
architecture. The multiple execution units of a wide superscalar processor are kept busy by issuing
instructions from multiple threads simultaneously.
It is a variation on hardware multithreading that uses the resources of a multiple-issue,
dynamically scheduled processor to exploit thread-level parallelism at the same time it exploits
instruction-level parallelism. The key insight that motivates SMT is that multiple-issue processors
often have more functional unit parallelism available than a single thread can effectively use.

48. What is meant by Synchronization?


It is the process of coordinating the behavior of two or more processes, which may be
running on different processors. As processors operating in parallel will normally share data, they
also need to coordinate when operating on shared data; otherwise, one processor could start
working on data before another is finished with it. This coordination is called synchronization.

49. Explain Synchronization using Locks?


It is a synchronization technique that allows access to data to only one processor at a time.
When sharing is supported with a single address space, there must be a separate mechanism for
synchronization. One approach uses a lock for a shared variable. Only one processor at a time can
acquire the lock, and the other processors interested in shared data must wait until the original
processor unlocks the variable.
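
As an illustration only (a software-level sketch using the standard POSIX threads API, not the hardware lock instructions themselves), a lock in C serializes access to a shared variable:

#include <pthread.h>
#include <stddef.h>

/* One lock protects the shared counter, so only one thread (processor)
   can update it at a time; the others wait until the lock is released. */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);            /* acquire the lock (wait if it is held) */
    shared_counter = shared_counter + 1;  /* critical section on shared data       */
    pthread_mutex_unlock(&lock);          /* release the lock for the others       */
    return NULL;
}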

PART-B
1. Explain the different techniques used for implementing Instruction level parallelism?
(Nov/Dec 2014)
2. Explain in detail about the concept of Speculation?
3. Explain the various techniques used for improving performance in static multiple
issue processors?
4. Explain briefly about Dynamic Multiple Issue Processor with neat diagram?
5. Explain the difficulties faced by parallel processing programs?
6. Explain shared memory multiprocessor?
7. Explain in detail Flynn’s classification of parallel hardware? (Nov/Dec 2015)
8. Explain cluster and other Message passing Multiprocessor?
9. Explain in detail the different forms of hardware Multithreading? (Nov/Dec 2014)
(Nov/Dec 2015)
10. Explain SISD and MIMD
11. Explain SIMD and MISD
12. Explain Multicore processors (Nov/Dec 2014)
13. Explain the different types of multithreading?
14. Discuss SISD, MIMD, SIMD, SPMD, and vector systems? (April/May 2015)
15. What is hardware multithreading? Compare and contrast fine grained
multithreading and coarse grained multithreading? (April/May 2015)

UNIT - V

MEMORY & I/O SYSTEMS

PART-A

1. What is the maximum size of the memory that can be used in a 16-bit computer and 32 bit
computer?

The maximum size of the memory that can be used in a 16-bit computer is 2^16 = 64K memory
locations.
The maximum size of the memory that can be used in a 32-bit computer is 2^32 = 4G memory
locations.

2. Define memory access time?

The time required to access one word is called the memory access time, i.e. the time that
elapses between the initiation of an operation and the completion of that operation.

3. Define memory cycle time?

It is the minimum time delay required between the initiations of two successive
memory operations. Eg. The time between two successive read operations.

4. When is a memory unit called as RAM?

A memory unit is called a RAM if any location can be accessed for a read or
write operation in some fixed amount of time that is independent of the location's
address.

5. What is MMU?

MMU is the Memory Management Unit. It is a special memory control circuit
used for implementing the mapping of the virtual address space onto the physical
memory.

6. Define memory cell?

A memory cell is capable of storing one bit of information. It is usually organized in the
form of an array.
7. What is a word line?

In a memory cell, all the cells of a row are connected to a common line called as word line.

8. Define static memories?

Memories that consist of circuits capable of retaining their state as long as power
is applied are called static memories.

9. What are the Characteristics of semiconductor RAM memories?

· They are available in a wide range of speeds.
· Their cycle times range from 100 ns to less than 10 ns.
· They replaced the expensive magnetic core memories.
· They are used for implementing memories.

10. Why SRAMs are said to be volatile?

SRAMs are said to be volatile because their contents are lost when power is interrupted.

11. What are the Characteristics of SRAMs?

· SRAMs are fast.
· They are volatile.
· They are of high cost.
· Lower density.

12. What are the Characteristics of DRAMs?

· Low cost.
· High density.
· Refresh circuitry is needed.

13. Define Refresh Circuit?

It is a circuit which ensures that the contents of a DRAM are maintained by accessing each
row of cells periodically.

14. Define Memory Latency?

It is used to refer to the amount of time it takes to transfer a word of data to or from the
memory.

15. What are asynchronous DRAMs?

In asynchronous DRAMs, the timing of the memory device is controlled asynchronously. A
specialised memory controller circuit provides the necessary control signals, RAS and CAS, that
govern the timing. The processor must take into account the delay in the response of the memory.
Such memories are called asynchronous DRAMs.
16. What are synchronous DRAMs?

Synchronous DRAMs are those whose operation is directly synchronized with a clock
signal.

17. Define Bandwidth?

When transferring blocks of data, it is of interest to know how much time is needed to
transfer an entire block. Since blocks can be variable in size, it is useful to define a performance
measure in terms of the number of bits or bytes that can be transferred in one second. This measure
is often referred to as the memory bandwidth.

18. What is double data rate SDRAMs?

Double data rate SDRAMs are those which can transfer data on both edges of the clock, so
their bandwidth is essentially doubled for long burst transfers.

19. What is mother board?

The mother board is the main system printed circuit board, which contains the processor.
Implementing the main memory with individual chips placed directly on it would occupy an
unacceptably large amount of space on the board.

20. What are SIMMs and DIMMs?

SIMMs are Single In-line Memory Modules. DIMMs are Dual In-line Memory Modules.
Such modules are an assembly of several memory chips on a separate small board that plugs
vertically into a single socket on the motherboard.

21. What is memory Controller?

A memory controller is a circuit which is interposed between the processor and the dynamic
memory. It is used for performing multiplexing of address bits. It provides the RAS-CAS timing and
also sends the R/W and CS signals to the memory. When used with DRAM chips, which do not have
self-refreshing capability, the memory controller has to provide all the information needed to control
the refreshing process.

22. Differentiate static RAM and dynamic RAM? (Nov / Dec 2013)

Static RAM
· They are fast.
· They are very expensive.
· They retain their state indefinitely.
· They require several transistors per cell.
· Low density.

Dynamic RAM
· They are slow.
· They are less expensive.
· They do not retain their state indefinitely.
· They require fewer transistors per cell.
· High density.
23. What is Ram Bus technology?

The key feature of Rambus technology is a fast signaling method used to transfer
information between chips. Instead of using signals that have voltage levels of either 0 or Vsupply
to represent the logic values, the signals consist of much smaller voltage swings around a reference
voltage, Vref. Small voltage swings make it possible to have short transition times, which allows for
a high speed of transmission.

24. What are RDRAMs?

RDRAMs are Rambus DRAMs. Rambus requires specially designed memory chips. These
chips use cell arrays based on the standard DRAM technology. Multiple banks of cell arrays are
used to access more than one word at a time. Circuitry needed to interface to the Rambus channel is
included on the chip. Such chips are known as RDRAMs.

25. What are the special features of Direct RDRAMs?

· It is a two-channel Rambus.
· It has 18 data lines intended to transfer two bytes of data at a time.
· There are no separate address lines.

26. What are RIMMs?

RDRAM chips can be assembled into larger modules called RIMMs. A RIMM can hold up to 16
RDRAMs.

27. Define ROM?

It is a non-volatile memory. It involves only reading of stored data.

28. What are the features of PROM?

· They are programmed directly by the user.
· Faster
· Less expensive
· More flexible.

29. Why EPROM chips are mounted in packages that have transparent window?

Because erasure requires dissipating the charges trapped in the transistors of the memory
cells, which can be done by exposing the chip to ultraviolet (UV) light through the window.

30. What are the disadvantages of EPROM?

The chip must be physically removed from the circuit for reprogramming and its entire
contents are erased by the ultraviolet light.
31. What are the advantages and disadvantages of using EEPROM? (Apr/May 2011)

The advantages are that EEPROMs do not have to be removed for erasure. Also, it is possible
to erase the cell contents selectively. The only disadvantage is that different voltages are needed for
erasing, writing and reading the stored data.

32. What is cache memory?

It is a small, fast memory that is inserted between the larger, slower main memory and the
processor. It reduces the memory access time.

33. Differentiate flash devices and EEPROM devices.

FLASH devices
· It is possible to read the contents of a single cell, but it is only possible to write an entire
block of cells.
· Greater density, which leads to higher capacity.
· Lower cost per bit.
· Consumes less power in operation, which makes it more attractive for use in portable,
battery-driven equipment.

EEPROM devices
· It is possible to read and write the contents of a single cell.
· Relatively lower density.
· Relatively higher cost per bit.
· Consumes more power.

34. Define flash memory?

It is an approach similar to EEPROM technology. A flash cell is based on a single transistor
controlled by trapped charge, just like an EEPROM cell.

35. What is locality of reference?

Analysis of programs shows that many instructions in localized areas of the program are
executed repeatedly during some time period, and the remainder of the program is accessed
relatively infrequently. This is referred to as locality of reference. This property leads to the
effectiveness of the cache mechanism.

36. What are the two aspects of locality of reference?. Define them.

Two aspects of locality of reference are the temporal aspect and the spatial aspect. The temporal
aspect is that a recently executed instruction is likely to be executed again very soon. The spatial
aspect is that instructions in close proximity to a recently executed instruction are also likely to be
executed soon.

37. Define cache line.


Cache block is used to refer to a set of contiguous address locations of some size. Cache
block is also referred to as cache line.
38. What are the two ways in which the system using cache can proceed for a write
operation?
 Write through protocol technique.
 Write-back or copy back protocol technique.

39. What is write through protocol?

For a write operation using the write-through protocol, during a write hit the cache location and
the main memory location are updated simultaneously. For a write miss, the information is written
directly to the main memory.

40. What is write-back or copy back protocol?

For a write operation using this protocol during write hit: the technique is to update only the
cache location and to mark it as updated with an associated flag bit, often called the dirty or
modified bit. The main memory location of the word is updated later, when the block containing
this marked word is to be removed from the cache to make room for a new block. For a write miss:
the block containing the addressed word is first brought into the cache, and then the desired word in
the cache is overwritten with the new information.
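
A minimal C-style sketch of this write-back behaviour for a single cache block (the structure fields, the helpers write_block_to_memory / read_block_from_memory, and the block size are all assumed for illustration; real hardware implements this in control logic, not software):

#define BLOCK_WORDS 8u   /* assumed block size in words */

struct cache_block {
    int valid, dirty;
    unsigned tag;
    unsigned data[BLOCK_WORDS];
};

/* Hypothetical helpers standing in for the memory interface. */
void write_block_to_memory(struct cache_block *blk);
void read_block_from_memory(struct cache_block *blk, unsigned tag);

void cache_write(struct cache_block *blk, unsigned tag, unsigned offset, unsigned value)
{
    if (blk->valid && blk->tag == tag) {     /* write hit                  */
        blk->data[offset] = value;           /* update only the cache copy */
        blk->dirty = 1;                      /* mark the block as modified */
    } else {                                 /* write miss                 */
        if (blk->valid && blk->dirty)
            write_block_to_memory(blk);      /* copy the dirty block back  */
        read_block_from_memory(blk, tag);    /* bring the new block in     */
        blk->tag = tag;
        blk->valid = 1;
        blk->data[offset] = value;           /* overwrite the desired word */
        blk->dirty = 1;
    }
}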

41. When does a read miss occur?


When the addressed word in a read operation is not in the cache, a read miss occur.

42. What is load-through or early restart?


When a read miss occurs in a system with a cache, the required word may be sent to the
processor as soon as it is read from the main memory, instead of waiting for the whole block to be
loaded into the cache. This approach is called load-through or early restart, and it reduces the
processor's waiting period.

43. What are the mapping technique?


· Direct mapping
· Associative mapping
· Set Associative mapping
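
As an illustration of the first technique, a minimal sketch in C of how a byte address is split under direct mapping (the 32-byte block size and 128 cache lines are assumed values used only for the example):

#define BLOCK_SIZE 32u    /* assumed block (line) size in bytes */
#define NUM_LINES  128u   /* assumed number of cache lines      */

/* Which cache line a byte address maps to under direct mapping. */
unsigned cache_index(unsigned addr) { return (addr / BLOCK_SIZE) % NUM_LINES; }

/* The tag stored with the line, compared on each access to confirm a hit. */
unsigned cache_tag(unsigned addr)   { return (addr / BLOCK_SIZE) / NUM_LINES; }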

44. What is a hit?


A successful access to data in cache memory is called hit.

45. Define hit rate? ( NOV/DEC 2006) (Nov / Dec 2013) (Nov/Dec 2015)
The number of hits stated as a fraction of all attempted access.

46. What are the two ways of constructing a larger module to mount flash chips on a
small card?
· Flash cards
· Flash drives

47. Define miss rate? (Nov / Dec 2013)


It is the number of misses stated as a fraction of attempted accesses.

48. Define miss penalty?


The extra time needed to bring the desired information into the cache.
49. Define access time for magnetic disks?

The sum of seek time and rotational delay is called the access time for disks. Seek time is the
time required to move the read/write head to the proper track. Rotational delay or latency is the
amount of time that elapses after the head is positioned over the correct track until the starting
position of the addressed sector passes under the read/write head.
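
For example (illustrative, assumed figures): with an average seek time of 9 ms and a 7200 RPM disk, one rotation takes 60/7200 s ≈ 8.33 ms, so the average rotational delay is about half of that, ≈ 4.17 ms, and the average access time is approximately 9 + 4.17 ≈ 13.2 ms.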

50. What is phase encoding or Manchester encoding? (Nov/Dec 2012)


It is an encoding technique for combining clocking information with data. It is a scheme in
which changes in magnetization occur for each data bit. Its disadvantage is poor bit-storage
density.

51. What is the formula for calculating the average access time experienced by the processor?
tave = hC + (1 - h)M
where
h = hit rate
M = miss penalty
C = time to access information in the cache
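
For example (assumed, illustrative values): with h = 0.95, C = 1 cycle and M = 20 cycles, tave = 0.95 × 1 + 0.05 × 20 = 1.95 cycles.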

52. What is the formula for calculating the average access time experienced by the processor
in a system with two levels of caches?

tave = h1C1 + (1 - h1)h2C2 + (1 - h1)(1 - h2)M
where
h1 = hit rate in the L1 cache
h2 = hit rate in the L2 cache
C1 = time to access information in the L1 cache
C2 = time to access information in the L2 cache
M = miss penalty (time to access the main memory)
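
For example (assumed, illustrative values): with h1 = 0.9, h2 = 0.8, C1 = 1 cycle, C2 = 10 cycles and M = 100 cycles, tave = 0.9 × 1 + 0.1 × 0.8 × 10 + 0.1 × 0.2 × 100 = 0.9 + 0.8 + 2.0 = 3.7 cycles.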

53. What are prefetch instructions?


Prefetch instructions are instructions that can be inserted into a program, either by
the programmer or by the compiler, to bring data into the cache before they are actually needed,
so that the miss latency is hidden.

54. Define system space?


Management routines are part of the operating system of the computer. It is convenient to
assemble the OS routines into a virtual address space called the system space.

55. Define user space?


The system space is separate from the virtual address space in which the user application
programs reside. The latter space is called the user space.

56. What are pages?


All programs and data are composed of fixed-length units called pages. Each page consists of
a block of words that occupy contiguous locations in the main memory.

57. What is replacement algorithm?


When the cache is full and a memory word that is not in the cache is referenced, the cache
control hardware must decide which block should be removed to create space for the new block that
contains the referenced word. The collection of rules for making this decision constitutes the
replacement algorithm.
58. What is dirty or modified bit?
When a cache location is updated (written), it is marked with an associated flag bit called the dirty or modified bit.

59. What is write miss?


If, during a write operation, the addressed word is not in the cache, a write miss is said to occur.

60. What is an associative search?


The cost of an associative cache is higher than the cost of a direct-mapped cache because of
the need to search all 128 tag patterns to determine whether a given block is in the cache. A search
of this kind is called an associative search.

61. What is virtual memory and what are the benefits of virtual memory? (APRIL/MAY
2010)
Techniques that automatically move program and data blocks into the physical main memory
when they are required for execution are called virtual memory techniques.
The virtual memory concept also frees the programmer: the programmer no longer needs to
worry about the size constraints of the physical memory on every computer on which his or her
program is going to be used.

62. What is virtual address?


The binary addresses that the processor issues for either instructions or data are called virtual
addresses.

63. What is virtual page number?


Each virtual address generated by the processor, whether it is for an instruction fetch or a data
access, is interpreted as a virtual page number followed by an offset within the page.

64. What is page frame?


An area in the main memory that can hold one page is called as page frame.

65. What is Winchester technology?


The disk and the read/write heads are placed in a sealed, air-filtered enclosure; this approach
is known as Winchester technology.

66. What is a disk drive?


The electromechanical mechanism that spins the disk and moves the read/write heads is called
the disk drive.

67. What is disk controller?


The electronic circuitry that controls the operation of the disk system is called the disk controller.

68. What is main memory address?


The address of the first main memory location of the block of words involved in the transfer
is called as main memory address.

69. What is word count?


The number of words in the block to be transferred.

70. What is Error checking?


It computes the error correcting code (ECC) value for the data read from a given sector and
compares it with the corresponding ECC value read from the disk.

71. What is booting?

When the power is turned on the OS has to be loaded into the main memory which takes
place as part of a process called booting. To initiate booting a tiny part of main memory is
implemented as a nonvolatile ROM.

72. What are the two states of processor?


· Supervisor state
· User state.

73. What is lockup-free?


A cache that can support multiple outstanding misses is called lockup-free.

74. List the different types of ROM? (Nov/Dec 2012)


 ROM
 PROM
 EPROM
 EEPROM
 Flash Memories

75. What is meant by interleaved memory? (May/June 2013)


Interleaved memory is a design made to compensate for the relatively slow speed of core
memory or DRAM. With interleaved memory, memory addresses are allocated to each memory
bank in turn. For example, in an interleaved system with 2 memory banks if logical address 32
belongs to bank 0, then logical address 33 would belong to bank 1, logical address 34 would belong
to bank 0 and so on.
This means that contiguous reads and contiguous writes would actually use each memory
bank in turn instead of using the same bank repeatedly. This results in significantly higher memory
throughput, as each bank has a minimum waiting time between reads and writes.
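
A minimal sketch in C of the two-bank mapping described above (low-order interleaving; the function names are illustrative):

#define NUM_BANKS 2u

/* Low-order interleaving: consecutive logical addresses alternate banks,
   so address 32 maps to bank 0, address 33 to bank 1, and so on. */
unsigned bank_of(unsigned addr)        { return addr % NUM_BANKS; }
unsigned offset_in_bank(unsigned addr) { return addr / NUM_BANKS; }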

76. What are called memory-mapped I/O devices? (May/June 2013)


When I/O devices and the memory share the same address space the arrangement is called
memory-mapped I/O devices.

77. What constitutes the device’s interface circuit?


The address decoder, the data and status registers, and the control circuitry required to
coordinate I/O transfers constitute the device's interface circuit.

78. What are the two important mechanisms for implementing I/O operations?
There are two commonly used mechanisms for implementing I/O operations .They are
interrupts and direct memory access.

79. What are known as interrupts? (MAY/JUNE 2006)


In the case of interrupts, the synchronization is achieved by having the I/O device send a
special signal over the bus whenever it is ready for a data transfer operation.
80. What do you mean by Direct Memory Access? (NOV/DEC 2007) (Nov / Dec 2013)
(April/May 2015)
Direct memory access is a technique used for high speed I/O devices. It involves having the
device interface transfer data directly to or from the memory.

81. What do you mean by an interrupt- request line?


The bus control line is also known as an interrupt-request line.

82. What do you mean by an interrupt acknowledge signal?


The processor must inform the device that its request has been recognized so that it may
remove its interrupt-request signal .This may be accomplished by an interrupt acknowledge signal.

83. What is a subroutine? (NOV/DEC 2007)


A subroutine performs a function required by the program from which it is called.

84. What is interrupt latency?


Saving registers also increases the delay between the time an interrupt request is received
and the start of execution of the interrupt-service routine .This delay is called interrupt latency.

85. What is known as real-time processing?


The concept of interrupts is used in operating systems and in many control applications
where processing of certain routines must be accurately timed relative to external events .The latter
type of application is referred to as real-time processing.

86. What is known as a edge triggered line?


The processor has a special interrupt-request line for which the interrupt-handling circuit
responds only to the leading edge of a signal. Such a line is called an edge-triggered line.

87. What is known as an interrupt vector / Vectored Interrupt? (Nov / Dec 2013)
The location pointed to by the interrupting device is used to store the starting address of the
interrupt-service routine .The processor reads this address, called the interrupt vector.

88. What is known as a debugger?


System software usually includes a program called a debugger, which helps the programmer
find errors in a program.

89. What is an exception? (Nov/Dec 2014)


The term exception is often used to refer to any event that causes an interruption.

90. What are known as privileged instructions?


To protect the operating system of a computer from being corrupted by user programs,
certain instructions can be executed only while the processor is in the supervisor mode. These are
called privileged instructions.

91. What is known as multitasking?


Multitasking is a mode of operation in which a processor executes several user programs at
the same time.

92. What is known as time slicing?


A common OS technique that makes multitasking possible is known as time slicing.
93. What is a process?
A program, together with any information that describes its current state of execution, is
regarded by the OS as an entity called a process.

94. What is a device driver?


A self-contained module that encapsulates all software pertaining to a particular device is
known as a device driver.

95. What is data abort?


Data abort arises from an error in reading or writing data.

96. What is known as prefetch abort?


Prefetch abort arises from an error when prefetching instructions from the memory.

97. What are banked registers?


The registers that replace user mode registers are called banked registers.

98. What is known as Direct Memory Access? (NOV/DEC 2006) (Nov/Dec 2012) (Nov/Dec
2014)
A special control unit may be provided to allow transfer of a block of data directly between an
external device and the main memory, without continuous intervention by the processor. This
approach is called direct memory access, or DMA.

99. What is known as a DMA controller?


DMA transfers are performed by a control circuit that is part of the I/O device
interface. This circuit is known as DMA controller.

100. What is known as cycle-stealing?


Since the processor originates most memory access cycles, the DMA controller can be said
to “steal” memory cycles from the processor. Hence, this interweaving technique is usually
called cycle stealing.

101. What is known as block/burst mode?


The DMA controller may be given exclusive access to the main memory to transfer a block
of data without interruption. This is known as block or burst mode.

102. What is called a bus master?


The device that is allowed to initiate data transfers on the bus at any given time is called the
bus master.

103. What is meant by bus arbitration?(Apr/May 2010)


Bus arbitration is a way of sharing the computer's data transfer channels (buses) in an
optimal way, so that faster devices do not have to wait to transfer and slower devices
(such as peripherals) also get a chance to transfer. Different methods exist, but the two main types
are serial and parallel arbitration.

104. Name and give the purpose of widely used bus standard. (Apr/May 2010)
The GPIB (General Purpose Interface Bus, or IEEE 488 bus) is still one of the more popular
and versatile interface standards available today. GPIB is widely used for enabling electronic test
equipment to be controlled remotely, although it is also used in many other applications, including
data acquisition. Today most bench electronic test equipment either has a GPIB option or is fitted
with it as standard.

105. What is known as distributed arbitration? (Apr/May 2011)


Distributed arbitration means that all devices waiting to use the bus have equal
responsibility in carrying out the arbitration process, without using a central arbiter.

106. What is a strobe?


A strobe captures the values of the data at a given instant and stores them in a buffer.

107. What is meant by handshake?


Handshake is used between the master and the slave for controlling data transfers on the
bus.

108. What is known as full handshake?


A change of state in one signal is followed by a change in the other signal. This is known as
a full handshake.

109. What is a bit rate?


The speed of transmission is known as a bit rate.

110. What is an initiator?


A master is called an initiator in PCI technology.

111. What is called a target?


The addressed device that responds to read and write commands is called a target.

112. What is a transaction?


A complete transfer operation on the bus, involving an address and a burst of data, is called a
transaction.

113. What are sectors?


Data are stored on a disk in blocks called sectors.

114. What are known as asynchronous events?


The event of pressing a key is not synchronized to any other event in a computer system; the
data generated by the keyboard are called asynchronous.

115. What are known as isochronous events?


The sampling process yields a continuous stream of digitized samples that arrive at regular
intervals, synchronized with the sampling clock. Such data transfers are called isochronous.

116. What is known as plug- and- play?


The plug-and-play feature means that a new device, such as an additional speaker, can be
connected at any time while the system is operating.

117. What is called a hub?


Each node of the tree has a device called a hub which acts as an intermediate control point
between the host and the I/O devices.
118. What is a root hub?
At the root of a tree, a root hub connects the entire tree to the host computer.

119. What are called functions in USB terminology?


The leaves of the tree are the I/O devices being served, which are called functions in
USB terminology.

120. What are called pipes?


The purpose of the USB software is to provide bi-directional communication links between
application software and I/O devices. These links are called pipes.

121. What are called endpoints?


Locations in the device to or from which data transfer can take place, such as status, control,
and data registers are called endpoints.

122. What is a frame?


Devices that generate or receive isochronous data require a time reference to control the
sampling process. To provide this reference, transmission over the USB is divided into frames of
equal length.

123. What is the length of a frame?


A frame is 1 ms long for low-and full-speed data.

124. Mention the advantages of USB? (May/June 2013)


1. Higher Speed
2. Multiple Devices
3. Self-Powered
4. Truly Plug & Play
5. Hot-Swappable

125. What is the purpose of Dirty/Modified bit in Cache Memory? (Nov/Dec 2014)
For a write operation using this protocol during write hit: the technique is to update only the
cache location and to mark it as updated with an associated flag bit, often called the dirty or
modified bit. The main memory location of the word is updated later, when the block containing
this marked word is to be removed from the cache to make room for a new block. For a write miss:
the block containing the addressed word is first brought into the cache, and then the desired word in
the cache is overwritten with the new information.

PART-B

1. Describe the organization of a typical RAM chip. (MAY/JUNE 2007)


2. Explain about Synchronous DRAMS. (May/June 2012) (May/June 2013)
3. Explain about Static & Dynamic memory systems. (NOV/DEC 2007)
4. Write short note on:
i ROM technologies.
ii Memory Interleaving
iii Set associative mapping of cache.
iv RAID Disk arrays. (NOV/DEC 2007)
5. Explain various mechanisms of mapping main memory address into cache memory
addresses.(APRIL/MAY 2008) (May/June 2012) (Nov/Dec 2012) (May/June 2013) (Nov /
Dec 2013) (Nov/Dec 2014)
6. Explain about Cache memory in detail. ( NOV/DEC 2006)
7. Explain how the virtual address is converted into real address in a paged virtual memory
system. (APRIL/MAY 2008) (May/June 2013) (Nov / Dec 2013)
8. Discuss the address translation mechanism and the different page replacement policies
used in a virtual memory system. (MAY/JUNE 2006) (Nov/Dec 2012)
9. Explain the concept of memory hierarchy. (NOV/DEC 2007)
10. Explain the performance factors in memory. (MAY/JUNE 2006)
11. Describe the functional characteristics that are common to the devices used to build main
and secondary computer memories. (APRIL/MAY 2008)
12. Describe the working principle of a typical magnetic disk. (APRIL/MAY 2008)
(May/June 2013)
13. Explain how the virtual address is converted into real address in a paged virtual memory
system. (Apr/May 2010)(May/ June 2012) (April/May 2015) (Nov/Dec 2015)
14. Explain the need for memory hierarchy technology, with a four-level memory? (Nov /
Dec 2013)
15. What for replacement algorithms are used? Explain the important ones? (Nov / Dec 2013)

16. Explain interrupt priority schemes? (May/June 2012) (May/June 2013)

17. Explain how I/O devices can be interfaced with a block diagram. (NOV/DEC 2007)
(May/June 2013)
18. Explain DMA and the different types of bus arbitration mechanisms. (May/June 2012)
(Nov/Dec 2012) (Nov/Dec 2014) (Nov/Dec 2015)
19. Explain synchronous and asynchronous bus.
20. Write notes on the following (May/June 2012)
i. PCI (Nov/Dec 2012) (May/June 2013)
ii. SCSI
iii. USB
21. Design parallel priority interrupt hardware for a system with eight interrupt source.
(MAY/JUNE 2007)
22. Explain how DMA transfer is accomplished with a neat diagram. ( NOV/DEC 2006)
23. Draw the typical block diagram of a DMA controller and explain how it is used for direct
data transfer between memory and peripherals. (APRIL/MAY 2008) & (MAY/JUNE
2007)
24. Explain how data may be transferred from a hard disk to memory using DMA including
arbitration for the bus,. Assume a synchronous bus, and draw a timing diagram showing
the data transfer. (MAY/JUNE 2006)
25. Explain the use of vectored interrupts in processors. Why is priority handling desired in
interrupt controllers? How do the different priority schemes work?
(MAY/JUNE 2006)
26. Write short notes on: (NOV/DEC 2007)
i. DMA.
ii. Bus Arbitration
iii. Printer-processor Communication.
iv. USB.
27. Explain the organization of a magnetic disk in detail. (MAY/JUNE 2007)
28. How do you connect multiple I/O devices to a processor using interrupts? Explain with
suitable diagrams. (NOV/DEC 2007)
29. Explain the use of DMA Controllers in a computer system with a neat diagram.
30. Explain Handshake protocol. Depict clearly how it controls data transfer during an input
operation. (APRIL/MAY 2008)
31. Describe the working principles of USB. (APRIL/MAY 2008) & (MAY/JUNE 2006)
32. Briefly compare the characteristics of SCSI with PCI. (APRIL/MAY 2008) & (NOV/DEC
2006) & (MAY/JUNE 2007)
33. Describe the functions of SCSI with a neat diagram. (MAY/JUNE 2007)
34. Explain the advantages of USB over older I/O bus architecture. (NOV/DEC 2006)
35. Describe the hardware mechanism for handling multiple interrupt requests.
(APRIL/MAY 2010)
36. What are handshaking signals? Explain the handshake control of data
transfer during input and output operation. (APRIL/MAY 2010)(Apr/May 2011)
37. What are the needs for input- output interface? Explain the functions of a typical 8- bit
parallel interface in detail. (APRIL/MAY 2010) (Apr/May 2011)
38. Describe the USB architecture and protocols with the help of a neat diagram.
(APRIL/MAY 2010)(Apr/May 2011).
39. What is interrupt? Explain the different types of interrupts and the different ways of
handling interrupts? (Nov/Dec 2012)
40. Explain in detail about Buses? (May/June 2013)
41. Design a parallel priority interrupt hardware for a system with eight interrupt sources
and explain? (Nov / Dec 2013)
42. Explain the USB interface? (Nov / Dec 2013)
43. Write short note on I/O Processor? (Nov / Dec 2013)
44. What is the need for an I/O Interface? Describe the functions of SCSI Interface with neat
diagram? (Nov / Dec 2013)
45. Elaborate on the various memory technologies and its relevance? (April/May 2015)

================================*******================================
