Logic Design and Computer Organization (LDCO)
(Code : 214442)

Semester III – Information Technology
(Savitribai Phule Pune University)

Strictly as per the New Choice Based Credit System Syllabus (2019 Course)
Savitribai Phule Pune University w.e.f. academic year 2020-2021
Copyright © by Authors. All rights reserved. No part of this publication may be reproduced, copied, or stored in
a retrieval system, distributed or transmitted in any form or by any means, including photocopy, recording, or
other electronic or mechanical methods, without the prior written permission of the publisher.
This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior written consent in any form of binding or cover other than that in which it is published, and without a similar condition including this condition being imposed on the subsequent purchaser, and without limiting the rights under copyright reserved above.
First Printed in India : January 2002
First Edition : August 2020 (TechKnowledge Publications)
This edition is for sale in India, Bangladesh, Bhutan, Maldives, Nepal, Pakistan, Sri Lanka and designated
countries in South-East Asia. Sale and purchase of this book outside of these countries is unauthorized by the
publisher.
ISBN : 978-93-89889-45-1
Published by :
TechKnowledge Publications
Head Office : B/5, First floor, Maniratna Complex, Taware Colony, Aranyeshwar Corner,
Pune - 411 009. Maharashtra State, India
Ph : 91-20-24221234, 91-20-24225678.
Email : [email protected],
Website : www.techknowledgebooks.com
Companion Course, if any :

Course Objectives :

1. To make undergraduates aware of the different levels of abstraction of computer systems from a hardware perspective.
2. To make undergraduates understand the functions and characteristics of the various components of a computer, in particular the processor and memory.

Course Outcomes : On completion of the course, students will be able to :

CO1 : Perform basic binary arithmetic and simplify logic expressions.
CO2 : Grasp the operations of logic ICs and implement combinational logic functions using ICs.
CO3 : Comprehend the operations of basic memory cell types and implement sequential logic functions using ICs.

Course Contents
Unit 1
Digital Logic Families : Digital IC Characteristics; TTL : Standard TTL characteristics, Operation of TTL NAND gate; CMOS : Standard CMOS characteristics, Operation of CMOS NAND gate; Comparison of TTL and CMOS. Signed Binary Number Representation and Arithmetic : Sign magnitude, 1's complement and 2's complement representation, unsigned binary arithmetic (addition, subtraction, multiplication, and division), subtraction using 2's complement; IEEE Standard 754 floating point number representations. Codes : Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions. Logic Minimization : Representation of logic functions : logic statement, truth table, SOP form, POS form; Simplification of logical functions using K-Maps up to 4 variables.
Case Study : 1) CMOS 4000 series ICs 2) Practical applications of various codes in computers 3) Four basic arithmetic operations using floating point numbers in a calculator. (Refer chapters 1, 2, 3 and 4)
Unit 2
Design using SSI chips : Code converters, Half Adder, Full Adder, Half Subtractor, Full Subtractor, n-bit Binary Adder.
Introduction to MSI chips : Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (74238), Encoder (IC 74147), Binary adder (IC 7483). Design using MSI chips : BCD adder and subtractor using IC 7483, Implementation of logic functions using IC 74153 and IC 74138.
Case Study : Use of combinational logic design in 7-segment display interface (Refer chapter 5)
Unit 3
Introduction to sequential circuits : Difference between combinational circuits and sequential circuits; Memory element - latch and Flip-Flop. Flip-Flops : Logic diagram, truth table and excitation table of SR, JK, D, T flip flops; Conversion from one FF to another, Study of flip flops with regard to asynchronous and synchronous Preset and Clear, Master-Slave configuration; Study of 7474, 7476 flip flop ICs. Applications of flip-flops : Counters - asynchronous, synchronous and modulo-n counters, study of 7490 modulus-n counter ICs and their applications to implement mod counters; Registers - shift register types (SISO, SIPO, PISO and PIPO) and applications.
Case Study : Use of sequential logic design in a simple traffic light controller (Refer chapters 6, 7 and 8)
Unit 4
Computer organization and computer architecture; organization, functions and types of computer units - CPU (typical organization, functions, types), Memory (types and their uses in a computer), IO (types and functions) and system bus (address, data and control, typical control lines, multiple-bus hierarchies); Von Neumann and Harvard architecture; Instruction cycle. Processor : Single bus organization of CPU; ALU (ALU signals, functions and types); Registers (types and functions of user visible, control and status registers such as general purpose registers, address registers, data registers, flags, PC, MAR, MBR, IR) and control unit (control signals and typical organization of hardwired and microprogrammed CU). Micro-operations (fetch, indirect, execute, interrupt) and control signals for these micro-operations.
Case Study : 8086 processor, PCI bus (Refer chapter 9)
Unit 5
Instruction : Elements of machine instruction; instruction representation (opcode and mnemonics, assembly language elements); Instruction format and 0-1-2-3 address formats, Types of operands, Addressing modes; Instruction types based on operations (functions and examples of each); Key characteristics of RISC and CISC; Interrupt : its purpose, types, classes and interrupt handling (ISR, multiple interrupts), Exceptions; Instruction pipelining (operation and speed up). Multiprocessor systems : Taxonomy of parallel processor architectures, two types of MIMD - clusters and SMP (organization and benefits) and multicore processor (various alternatives and advantages of multicore), Typical features of the multicore Intel Core i7.
Unit 6
Memory Systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory to processor, Memory read and write cycle, Characteristics of semiconductor memory : SRAM, DRAM and ROM, Cache Memory - Principle of locality, Organization, Mapping functions, Write policies, Replacement policies, Multilevel caches, Cache coherence. Input / Output Systems : I/O module, Programmed I/O, Interrupt driven I/O, Direct Memory Access (DMA).
Case Study : USB flash drive (Refer chapter 11)
Table of Contents

Unit 1

Chapter 1 : Digital Logic Families

Case Study : CMOS 4000 series ICs.

1.1 Logic Families ... 1-2
1.1.1 Classification Based on Circuit Complexity ... 1-2
1.2 Classification of Logic Families ... 1-2
1.2.1 Classification Based on Devices Used ... 1-2
1.3 Characteristics of Digital ICs
1.3.2 Fan-in and Fan-out ... 1-4
1.3.8 Invalid Voltage Levels ... 1-7
1.3.9 Current Sourcing and Current Sinking ... 1-7
1.3.10 Power Supply Requirements ... 1-7
1.4 Transistor-Transistor Logic (TTL) ... 1-7
1.4.1 The Multiple Emitter Transistor ... 1-7
1.4.2 Two Input TTL NAND Gate (Totem-pole Output) ... 1-8
1.4.3 Totem-pole (Active Pull up) Output Stage ... 1-10
1.4.4 Unconnected Inputs ... 1-11
1.4.5 Clamping Diodes ... 1-11
1.4.6 5400 Series ... 1-11
1.4.7 Three Input TTL NAND Gate ... 1-11
1.5 Open Collector Outputs (TTL) ... 1-12
1.6.1 Advantages of TTL ... 1-16
1.6.2 Disadvantages of TTL ... 1-16
1.7 MOS - Logic Family ... 1-16
1.8 CMOS Logic ... 1-16
1.8.1 CMOS NAND Gate ... 1-16
1.8.2 CMOS Series ... 1-18
1.9.3 Noise Margins ... 1-18
Review Questions ... 1-21

Chapter 2 : Number Systems and Codes 2-1 to 2-28

Syllabus : Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions.
Case Study : Practical applications of various codes in computers.

2.1 Introduction ... 2-2
2.2 System or Circuit ... 2-2
2.2.1 Digital Systems ... 2-2
2.3 Binary Logic and Logic Levels ... 2-2
2.3.1 Positive Logic ... 2-2
2.3.2 Negative Logic ... 2-2
2.4 Number Systems ... 2-3
2.4.1 Important Definitions Related to All Numbering Systems ... 2-3
2.4.2 Various Numbering Systems ... 2-3
2.5 The Decimal Number System ... 2-4
2.5.1 Characteristics of a Decimal System ... 2-4
2.6 The Binary Number System ... 2-4
2.6.1 Binary Number Formats ... 2-5
2.7 Octal Number System ... 2-5
2.8 Hexadecimal Number System ... 2-5
2.9 Conversion of Number Systems ... 2-6
2.10 Conversions Related to Decimal System ... 2-6
2.10.1 Conversion from any Radix r to Decimal ... 2-6
2.10.2.1 Successive Division for Integer Part Conversion ... 2-7
2.10.2.2 Successive Multiplication for Fractional Part Conversion
2.11 Conversion from Binary to Other Systems ... 2-11
2.11.1 Conversion from Binary to Decimal ... 2-11
2.11.2 Binary to Octal Conversion ... 2-11
2.11.3 Binary to Hex Conversion ... 2-12
2.13.2 Conversion from Other Systems to Octal ... 2-14
2.14 Conversions Related to Hexadecimal System ... 2-16
2.14.1 Other Systems to Hex ... 2-16
2.14.2 Hex to Other Systems ... 2-18
2.15 Concept of Coding ... 2-21
2.16 Classification of Codes ... 2-21
2.16.1 Weighted Binary Codes ... 2-21
2.16.2 Non Weighted Codes ... 2-21
2.16.3 Alphanumeric Codes ... 2-21
2.17 Binary Coded Decimal (BCD) Code ... 2-22
2.17.1 Comparison with Binary ... 2-22
2.17.2 Advantages of BCD Codes ... 2-23
2.17.3 Disadvantages ... 2-23
2.18 Non-weighted Codes ... 2-23
2.19.2 Advantages of Gray Code ... 2-25
2.19.3 Gray-to-Binary Conversion ... 2-25
2.19.4 Binary to Gray Conversion ... 2-26
2.20.2 BCD to Binary Conversion ... 2-26
2.20.3 BCD to Excess-3 ... 2-26
2.20.4 Excess-3 to BCD Conversion ... 2-27
Review Questions ... 2-27

Chapter 3

Case Study : Four basic arithmetic operations using floating point numbers in a calculator.

3.1 Introduction ... 3-2
3.2 Unsigned Binary Numbers ... 3-2
3.2.6 Subtraction and Borrow ... 3-3
3.2.7 Binary Multiplication ... 3-4
3.2.8 Binary Division ... 3-4
3.3 Sign-Magnitude Numbers ... 3-4
3.3.1 Range of Sign-Magnitude Numbers ... 3-5
3.4 Complements ... 3-5
3.4.3 2's Complement ... 3-6
3.4.4 Representation of Signed Numbers using 2's Complement ... 3-6
3.6 Floating Point Representation ... 3-13
3.6.1 Parts of Floating Point Numbers ... 3-13
3.6.2 Binary Floating Point Numbers ... 3-14
3.6.3 Single Precision Floating Point Binary Numbers ... 3-14
3.7 IEEE-754 Standard for Representing Floating Point Numbers ... 3-16
3.8 Introduction to Boolean Algebra ... 3-19
3.8.1 Basic Logical Operations (Logic Variables) ... 3-19
3.8.2 NOT Operator (Inversion) ... 3-19
3.9 Definition of Boolean Algebra ... 3-21
3.9.1 Boolean Postulates and Laws ... 3-21
3.10 Two Valued Boolean Algebra ... 3-22
3.11 Basic Theorems and Properties of Boolean Algebra ... 3-23
3.11.2 Duality ... 3-23
3.12 Boolean Expression and Boolean Function ... 3-26
3.12.1 Truth Table Formation ... 3-26
3.12.2 Examples on Reducing the Boolean …

Chapter 4

Syllabus : … Simplification of logical functions using K-Maps up to 4 variables.

4.1 System or Circuit ... 4-2
4.1.1 Digital Systems ... 4-2
4.1.2 Types of Digital Systems ... 4-2
4.1.3 Combinational Circuit Design ... 4-3
4.2 Standard Representations for Logical Functions ... 4-3
4.2.1 Sum-of-Products (SOP) Form ... 4-4
4.2.2 Product of the Sums Form (POS) ... 4-4
4.2.3 Standard or Canonical SOP and POS Forms ... 4-4
4.3 Concepts of Minterm and Maxterm ... 4-6
4.3.1 Representation of Logical Expressions using Minterms and Maxterms ... 4-7
4.3.2 Writing SOP and POS Forms for a Given Truth Table ... 4-7
4.3.3 To Write Standard SOP Expression for a Given Truth Table ... 4-7
4.3.4 To Write a Standard POS Expression for a Given Truth Table ... 4-8
4.3.5 Conversion from SOP to POS and Vice Versa ... 4-8
4.4 …
4.5.1 K-map Structure ... 4-10
4.5.2 K-map Boxes and Associated Product Terms
4.5.4 Truth Table to K-map ... 4-12
4.5.5 Representation of Standard SOP Form on K-map ... 4-13
4.7 Minimization of SOP Expressions (K-Map Simplification) ... 4-17
4.7.1 Elimination of a Redundant Group ... 4-20
4.7.3 Don't Care Conditions ... 4-22
4.8 Product of Sum (POS) Simplification ... 4-23
4.8.1 K-map Representation of POS Form ... 4-23
4.8.2 Representation of Standard POS Form on K-map ... 4-24
4.8.3 Simplification of Standard POS Form using K-map ... 4-25
Review Questions ... 4-28

Unit 2

Chapter 5

Syllabus : … Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (74238), Encoder (IC 74147), Binary adder (IC 7483). Design using MSI chips : BCD adder & subtractor using IC 7483, Implementation of logic functions using IC 74153 & 74138.
Case Study : Use of combinational logic design in 7 segment display interface.

5.1 Introduction to Combinational Circuits ... 5-2
5.2.5 Excess 3 to BCD Converter ... 5-10
5.3 Binary Adders and Subtractors ... 5-10
5.3.1 Types of Binary Adders ... 5-11
5.3.2 Half Adder ... 5-11
5.3.3 Full Adder ... 5-12
5.3.4 Full Adder using Half Adder ... 5-13
5.3.5 Applications of Full Adder ... 5-13
5.3.6 Binary Subtractors ... 5-13
5.3.8 Full Subtractor ... 5-14
… Subtractors ... 5-15
5.4 The n-Bit Parallel Adder ... 5-16
5.4.1 A Four Bit Parallel Adder Using Full Adders ... 5-16
5.4.2 …
5.4.6 Four Bit Binary Adder using IC 7483 ... 5-19
5.4.7 Cascading of Adders ... 5-19
5.8.1 A 2-Bit Comparator ... 5-27
5.9 Multiplexer (Data Selector) ... 5-29
5.9.1 Necessity of Multiplexers ... 5-29
5.9.2 Advantages of Multiplexers ... 5-29
5.10 Types of Multiplexers ... 5-30
5.10.2 A 4 : 1 Multiplexer ... 5-30
5.10.4 Applications of a Multiplexer ... 5-31
5.11 Study of Different Multiplexer ICs ... 5-32
5.13.3 Use of 8 : 1 MUX to Realize a 4 Variable Function ... 5-36
5.19 Decoder ... 5-49
5.19.1 2 to 4 Line Decoder ... 5-49
5.19.2 Demultiplexer as Decoder ... 5-49
5.19.3 3 to 8 Line Decoder ... 5-49
5.19.4 1:8 DEMUX as 3:8 Decoder ... 5-50
5.19.5 / 5.19.6 IC 74138 / IC 74238 as 3:8 Decoder ... 5-50
5.19.7 Advantage of Decoder Realization ... 5-54
5.20 Case Study : Combinational Logic Design of BCD to 7 Segment Display Controller ... 5-54
5.20.1 Seven Segment LED Display ... 5-54
5.20.4 Common Cathode Display ... 5-54
5.20.5 Use of a Decoder for Driving the Seven Segment Display ... 5-55

Chapter 6

Syllabus : Flip-Flops : Logic diagram, Truth table & excitation table of SR, JK, D, T flip flops; Conversion from one FF to another, Study of flip flops with regard to asynchronous and synchronous, Preset & clear, Master slave configuration; Study of 7474, 7476 flip flop ICs.
Case Study : Use of sequential logic design in a simple traffic light controller.

6.1.5 Symbol and Truth Table of S-R Latch ... 6-4
6.1.6 Characteristic Equation ... 6-4
6.1.7 NAND Latch [S-R Latch using NAND Gates] ... 6-5
6.2.3 Concept of Edge Triggering ... 6-7
6.2.4 Types of Edge Triggered Flip Flops ... 6-7
6.3 Gated Latches (Level Triggered SR Flip Flop) ... 6-7
6.3.1 Types of Level Triggered (Clocked) Flip Flops ... 6-7
6.4.1 Positive Level Triggered SR Flip-flop ... 6-7
6.4.2 Negative Level Triggered SR Flip Flop ... 6-8
6.7.1 Positive Edge Triggered S-R Flip Flop ... 6-11
6.7.2 Negative Edge Triggered S-R Flip Flop ... 6-13
6.8 Edge Triggered D Flip Flop ... 6-13
6.8.1 Positive Edge Triggered D Flip Flop ... 6-13
6.9.4 Negative Edge Triggered JK Flip-Flop ... 6-16
6.10 Toggle Flip Flop (T Flip Flop) ... 6-17
6.10.1 Positive Edge Triggered T-FF ... 6-17
6.10.2 Negative Edge Triggered T Flip Flop ... 6-18
6.10.3 …
6.12.1 S-R Flip-Flop with Preset and Clear Inputs ... 6-20
6.12.2 Synchronous Preset and Clear …
6.15.3 SR Flip Flop to T Flip Flop ... 6-25
6.15.4 SR Flip Flop to JK Flip Flop ... 6-25
6.15.9 D FF to SR FF Conversion ... 6-28
6.15.10 T FF to SR FF Conversion ... 6-28
6.15.11 Conversion from D FF to JK FF ... 6-28
6.16 Applications of Flip Flops ... 6-29
6.17.2 SN74LS76A Dual JK Flip-Flop with Set and Clear, Low Power Schottky ... 6-30
6.18 Analysis of Clocked Sequential Circuits ... 6-31

Chapter 7

7.1.1 Types of Counters ... 7-2
7.1.2 Classification of Counters ... 7-2
7.2.2 3 Bit Asynchronous Up Counter ... 7-4
7.2.3 4 Bit Asynchronous Up Counter ... 7-5
7.2.4 State Diagram of a Counter ... 7-6
7.3 Asynchronous Down Counters ... 7-6
7.3.1 3-Bit Asynchronous Down Counter ... 7-6
7.4.1 UP/DOWN Ripple Counters ... 7-7
7.5 Modulus of the Counter (MOD-N Counter) ... 7-8
7.5.1 Design of Asynchronous MOD Counters ... 7-8
7.6.2 Other Applications of IC 7490 ... 7-12
7.7 Problems Faced by Ripple Counters ... 7-19
7.8.1 2-Bit Synchronous Up Counter ... 7-19
7.11 Lock Out Condition ... 7-29
7.11.1 Bushless Circuit ... 7-29
7.12 Bush Diagram ... 7-31
7.13 Applications of Counters ... 7-33
Review Questions ... 7-33

Chapter 8 : Registers 8-1 to 8-16

Syllabus : Shift register types (SISO, SIPO, PISO & PIPO) & applications.

… (Shift Left Mode) ... 8-4
8.5.3 Applications of Serial Operation ... 8-6
8.7 Parallel In Serial Out Mode (PISO) ... 8-7
8.11.1 Pseudo Random Binary Sequence (PRBS) Generator ... 8-16
Review Questions ... 8-16

Unit 4

Chapter 9 : Computer Organization & Processor 9-1 to 9-18

Syllabus : Computer organization and computer architecture, Organization, Functions and types of computer units - CPU (Typical organization, Functions, Types), Memory (Types and their uses in computer), IO (Types and functions) and system bus (Address, data and control, Typical control lines, Multiple-Bus Hierarchies); Von Neumann and Harvard architecture; Instruction cycle. … Flags, PC, MAR, MBR, IR) & control unit (Control signals and typical organization of hard wired and microprogrammed CU), Micro Operations (Fetch, Indirect, Execute, Interrupt) and control signals for these micro operations.
Case Study : 8086 processor, PCI bus.

9.4 Basic Instruction Cycle ... 9-6
9.4.1 Interrupt Cycle ... 9-6
9.5 CPU Architecture and Register Organization ... 9-8
9.5.1 Interrupt Control ... 9-9
9.5.2 Timing and Control Unit ... 9-9
9.6 Instruction, Micro-instructions and Micro-operations Interpretation and Sequencing ... 9-9
9.6.1 Fetch Cycle ... 9-10
9.6.2 Execute Cycle ... 9-10
9.6.3 Interrupt Cycle ... 9-11
9.6.4 Examples of Microprograms ... 9-12
9.6.5 Applications of Microprogramming ... 9-14
9.7 Control Unit : Hardwired Control Unit Design Methods ... 9-15
… Control Unit Design Methods ... 9-16
9.8.1 Wilkes' Microprogrammed Control Unit ... 9-17
9.8.2 Comparison between Hardwired and Micro-programmed Control ... 9-18

Chapter 10 : … Enhancements 10-1 to 10-40

Syllabus : Instruction : Elements of machine instruction; Instruction representation (Opcode and mnemonics, Assembly language elements); Instruction format and 0-1-2-3 address formats, Types of operands, …
Case Study : 8086 Assembly language programming.

10.1 Instruction Encoding Format ... 10-2
10.2 Instruction Format and 0-1-2-3 Address Formats ... 10-5
10.2.1 Instruction Formats ... 10-5
10.2.2 Instruction Word Format - Number of Addresses ... 10-5
10.3 Addressing Modes ... 10-6
10.3.1 Examples on Addressing Modes ... 10-8
10.4 Instruction Set of 8085 ... 10-9
10.5 Reduced Instruction Set Computer Principles ... 10-11
10.5.1 RISC Versus CISC ... 10-11
10.5.2 RISC Properties ... 10-11
10.5.3 Register Window ... 10-12
10.5.4 Miscellaneous Features or Advantages of RISC Systems ... 10-12
10.5.5 RISC Shortcomings ... 10-14
10.5.6 On-Chip Register File Versus Cache Evaluation ... 10-14
10.6 Polling and Interrupts ... 10-14
10.7 …
10.7.3 Linear Pipeline Processors ... 10-17
10.9.1 Methods to Resolve the Data Hazards and Advances in Pipelining ... 10-29
10.9.1.1 Pipeline Stalls ... 10-29
10.9.1.2 Operand Forwarding (or) Bypassing ... 10-29
10.9.1.3 Dynamic Instruction Scheduling (or) Out-Of-Order (OOO) Execution ... 10-30
10.9.2 Handling of Branch Instructions to Resolve Control Hazards ... 10-30
10.9.2.1 Pre-Fetch Target Instruction ... 10-30
10.9.2.2 Branch Target Buffer (BTB) ... 10-30
10.9.2.3 Loop Buffer ... 10-30
10.9.2.4 Branch Prediction ... 10-31
10.9.2.5 Pipeline Stall (Delayed Branch) ... 10-31
10.9.2.6 Loop Unrolling Technique ... 10-31
10.9.2.7 Software Scheduling or Software Pipelining ... 10-31
10.9.2.8 Trace Scheduling ... 10-32
10.9.2.9 Predicated Execution ... 10-33
10.9.2.10 Speculative Loading ... 10-33
10.9.2.11 Register Tagging ... 10-33
10.9.3 Branch Prediction ... 10-34
10.9.3.1 Misprediction Penalty ... 10-34
10.10 Multiprocessor Systems and Multicore Processor (Intel Core i7 Processor) ... 10-35

Chapter 11 : Memory & Input / Output Systems 11-1 to 11-38

Syllabus : Memory Systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory to processor, Memory read and write cycle, Characteristics of semiconductor memory : SRAM, DRAM and ROM, Cache memory - Principle of locality, Organization, Mapping functions, Write policies, Replacement policies, Multilevel caches, Cache coherence. Input / Output systems : I/O module, Programmed I/O, Interrupt driven I/O, Direct Memory Access (DMA).
Case Study : USB flash drive.

11.1 Introduction to Memory and Memory Parameters ... 11-2
11.1.1 Bytes and Bits ... 11-3
11.2 Memory Hierarchy : Classifications of Primary and Secondary Memories ... 11-3
11.3 Types of RAM ... 11-4
11.4 ROM (Read Only Memory) ... 11-5
11.4.1 Types of ROM ... 11-5
11.4.2 Magnetic Memory ... 11-6
11.4.3 Optical Memory ... 11-7
11.5 Allocation Policies ... 11-9
11.6 Signals to Connect Memory to Processor and Internal Organization of Memory ... 11-10
11.9 Cache Mapping Techniques ... 11-20
11.9.1 Direct Mapping Technique ... 11-20
11.9.2 Fully Associative Mapping ... 11-21
11.9.3 Set Associative Mapping ... 11-22
11.9.4 Write Policy ... 11-24
11.9.6 Cost and Performance Measurement of Two Level Memory Hierarchy ... 11-28
11.9.7 Cache Consistency (Also Known as Cache Coherency) ... 11-28
11.9.8 Bus Master / Cache Interaction for Cache Coherency ... 11-28
11.10 Pentium Processor Cache Unit ... 11-29
11.10.1 Memory Reads Initiated by the Pentium Processor ... 11-29
11.10.2 Memory Writes Initiated by Pentium Processor ... 11-30
11.10.3 Memory Reads Initiated by Another Bus Master ... 11-31
11.13.1 Programmed I/O ... 11-34
11.13.2 Interrupt Driven I/O ... 11-35
11.13.3 DMA ... 11-36
11.13.4 DMA Transfer Modes ... 11-36
Review Questions ... 11-37
Chapter 1

Digital Logic Families
Syllabus
Digital IC characteristics; TTL : Standard TTL characteristics, Operation of TTL NAND gate; CMOS :
Standard CMOS characteristics, Operation of CMOS NAND gate; Comparison of TTL & CMOS.
Case study : CMOS 4000 series ICs.
Chapter Contents
1.1 Logic Families
1.2 Classification of Logic Families
1.3 Characteristics of Digital ICs
1.4 Transistor-Transistor Logic (TTL)
1.5 Open Collector Outputs (TTL)
1.6 Standard TTL Characteristics
1.7 MOS - Logic Family
1.8 CMOS Logic
1.9 Standard CMOS Characteristics
1.10 Comparison of CMOS and TTL
1.11 Case Study : CMOS 4000 series ICs
1.1 Logic Families :

Logic families are defined as the type of logic circuit used in an IC. Various digital ICs available in the market belong to various types. Each type is known as a logic family.

Various digital ICs available in the market belong to various types. These types are known as "families".

University Questions.
Q. 1 Give the classification of logic families.

1.1.1 Classification Based on Circuit Complexity :

1. SSI : Small Scale Integration.
2. MSI : Medium Scale Integration.
3. LSI : Large Scale Integration.
4. VLSI : Very Large Scale Integration.

The number of components (diodes, transistors, gates etc.) in SSI will be the lowest and that in VLSI will be the highest.

Table 1.1.1 : Comparison of important features of logic families
1.2.1 Classification Based on Devices Used :

DTL uses diodes and transistors, while TTL uses transistors as the main elements.

TTL has become the most popular family in SSI (Small Scale Integration) and MSI (Medium Scale Integration) chips, while ECL is the fastest logic family and is used for high speed applications.

In the MOS category there are three logic families, namely the PMOS (p-channel MOSFETs) family, the NMOS (n-channel MOSFETs) family and the CMOS (Complementary MOSFETs) family.

PMOS is the oldest and slowest type. NMOS is used in the LSI (Large Scale Integration) field, i.e. for microprocessors and memories.

CMOS, which uses a push-pull arrangement of n-channel and p-channel MOSFETs, is extensively used where low power consumption is needed, such as in pocket calculators.

1.3 Characteristics of Digital ICs :

The important characteristics of digital ICs are as follows :
1. Voltage and current parameters.
2. Fan-in and fan-out.
3. Noise margin.
4. Propagation delay (speed).

Voltage and current parameters :

1. VIL (max) - Worst case low level input voltage :
This is the maximum value of input voltage which is to be considered as a logic 0 level.
If the input voltage is higher than VIL (max), then it will not be treated as a low (0) input level.

2. VIH (min) - Worst case high level input voltage :
This is the minimum value of the input voltage which is to be considered as a logic 1 level.
If the input voltage is lower than VIH (min), then it will not be treated as a high (1) input.
3. VOH (min) - Worst case high level output voltage :
This is the minimum value of the output voltage which will be considered as a logic HIGH (1) level.
If the output voltage is lower than this level, then it won't be treated as a HIGH (1) output.

4. VOL (max) - Worst case low level output voltage :
This is the maximum value of the output voltage which will be considered as a logic LOW (0) level.
If the output voltage is higher than this value, then it won't be treated as a LOW (0) output. All the voltage parameters are shown in Fig. 1.3.1.

(C-7584) Fig. 1.3.1 : Voltage parameters

The voltage parameters can be shown on the digital circuit consisting of gates as shown in Fig. 1.3.2.

4. IOH - High level output current :
This is the current flowing from the output when the output voltage happens to be in the specified HIGH (1) voltage range and a specified load is applied.
If the output current flows into the output terminal then it is called as a sinking current, and if the output current flows out of the output terminal then it is called as a sourcing current.

The current parameters are displayed on the logic circuit shown in Fig. 1.3.3.

(C-208) Fig. 1.3.3 : Current parameters

Note that the actual current directions can be opposite …
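The input thresholds defined above partition the input voltage range into a valid LOW region, a valid HIGH region and an indeterminate region in between. The short Python sketch below only illustrates this classification; the numeric thresholds (VIL (max) = 0.8 V, VIH (min) = 2 V) are the standard TTL values quoted later in this chapter and are used purely as an assumed example.

# Illustrative sketch : classify an input voltage against the worst-case
# input thresholds of a logic family (assumed standard TTL values here).

VIL_MAX = 0.8   # worst case low level input voltage (V), standard TTL
VIH_MIN = 2.0   # worst case high level input voltage (V), standard TTL

def classify_input(vin: float) -> str:
    """Return the logic level seen by a gate input at voltage vin."""
    if vin <= VIL_MAX:
        return "valid logic 0"
    if vin >= VIH_MIN:
        return "valid logic 1"
    return "invalid (indeterminate) level"

if __name__ == "__main__":
    for v in (0.2, 0.8, 1.4, 2.0, 3.5):
        print(f"{v:4.1f} V -> {classify_input(v)}")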
… a high level input voltage in the specified range is applied.

This is the current that flows out of the output when the output voltage happens to be in the specified low (0) voltage range and a specified load is applied.

1.3.2 Fan-in and Fan-out :

Fan-in is defined as the number of inputs a gate has. For example, a two input gate will have a fan-in equal to 2.

Fan-out is defined as the maximum number of inputs of the same IC family that a gate can drive without falling outside the specified output voltage limits.

A higher fan-out indicates a higher current supplying capacity of the gate. For example, a fan-out of 5 indicates that the gate can drive (supply current to) at the most 5 inputs of the same IC family.

Noise margin :

University Questions.
Q. 1 Explain noise margin. (Dec. 08, 2 Marks)
Q. 2 Explain the following TTL characteristics : … 3. Figure of Merit (Dec. 15, 6 Marks)
Q. 3 Explain any three characteristics of digital ICs. (May 17, 6 Marks)
Q. 4 Define following terms related to logic families : …

In order to avoid the effects of noise voltage, the designers adjust the voltage levels VOH (min) and VIH (min) to different levels, with some difference between them as shown in Fig. 1.3.4.

The difference between VOH (min) and VIH (min) is known as the high level noise margin,
VNH = VOH (min) – VIH (min)

When a high logic output is driving a logic circuit input, any negative noise spike greater than VNH can force the voltage to reduce into the invalid range.

Similarly, when a low logic output is driving a logic …

A quantitative measure of noise immunity of a logic …

Propagation Delay (Speed of Operation) :

… the instant of occurrence of the corresponding output …

(a) Propagation delays for an inverter

Power dissipation :

University Questions.
Q. 1 Explain the power dissipation. (Dec. 08, 2 Marks)
Q. 2 Define following term related to logic families : Power dissipation. (May 18, 2 Marks)

Definition : … given by,
P = VCC ICC
where ICC is the current drawn from the power supply.
… output makes a transition from LOW (0) to HIGH (1) state.

The values of tPHL and tPLH are not always equal. If they are not equal, then the one which is higher is considered as the propagation delay time of the gate.

The propagation delays are measured between the points corresponding to 50% levels, as shown in Fig. 1.3.5.

Ideally the propagation delay should be zero, and practically it should be as short as possible.

The values of propagation delays are used to measure how fast a logic circuit is.

Figure of Merit :

University Questions. …

Definition :
The figure of merit of a logic family is the product of power dissipation and propagation delay.
It is called as the speed-power product. The speed is specified in seconds and power is specified in W.
Figure of merit = Propagation delay time × Power dissipation.
Ideally the value of the figure of merit is zero, and practically it should be as low as possible.
The figure of merit is always a compromise between speed and power dissipation : if the power dissipation is reduced, then the propagation delay will increase in order to keep their product constant.

Power Supply Requirements :

Usually there is only one power supply terminal on any IC. It is denoted by VCC for the TTL ICs and VDD for the CMOS ICs.

1.3.6 Operating Temperature :

The temperature range acceptable for consumer and industrial applications is 0 °C to 70 °C and that for military applications is – 55 °C to 125 °C.

The performance of gates will be within the specified limits only if the temperature is in these ranges.

Current Sourcing and Current Sinking :

(C-1000) Fig. 1.3.6

Current sinking :
The current sinking action has been demonstrated in Fig. 1.3.6(b).
Gate-2, which is the load gate, acts as a load.
As soon as the output of Gate-1 goes low, the current starts flowing into the output terminal of Gate-1, as shown in Fig. 1.3.6(b).
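As a small numerical check of the two definitions above, the sketch below takes the typical standard-TTL figures quoted later in Table 1.6.3 (propagation delay of about 10 ns and power dissipation of about 10 mW per gate) and computes the speed-power product. The tPLH/tPHL values passed to the first call are assumed example numbers, not datasheet values.

# Minimal sketch : propagation delay and figure of merit (speed-power product).

def propagation_delay(t_plh_ns: float, t_phl_ns: float) -> float:
    """The larger of the two transition delays is taken as the gate delay."""
    return max(t_plh_ns, t_phl_ns)

def figure_of_merit_pj(t_pd_ns: float, power_mw: float) -> float:
    """Speed-power product : delay (ns) x power (mW) gives energy in picojoules."""
    return t_pd_ns * power_mw

t_pd = propagation_delay(t_plh_ns=11.0, t_phl_ns=7.0)    # assumed example values
fom  = figure_of_merit_pj(t_pd_ns=10.0, power_mw=10.0)    # typical standard TTL figures

print(f"propagation delay = {t_pd} ns")
print(f"figure of merit   = {fom} pJ")    # 10 ns x 10 mW = 100 pJ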
1.4.1 The Multiple Emitter Transistor :

The normal transistor has only three terminals, namely collector, base and emitter.

But this special transistor has more than one emitter, as shown in Fig. 1.4.1(a), and its equivalent circuit is shown in Fig. 1.4.1(b). The number of emitters is equal to the number of inputs of the gate.

The multiple emitter input transistor can have up to eight emitters, for an eight input NAND gate.

In the equivalent circuit of Fig. 1.4.1(b), diodes D1 and D2 represent the two base to emitter junctions, whereas D3 …

… either (or both) the diodes D1 and D2.

This transistor will be in the OFF state if and only if both the base-emitter junctions (D1 and D2) are reverse biased.

Standard TTL :

1.4.2 Two Input TTL NAND Gate (Totem-pole Output) :
SPPU : Dec. 08, Dec. 09, May 10, May 12, Dec. 12, Dec. 13.

University Questions.
Q. 1 Draw three input standard TTL NAND gate circuit and explain its operation. (Dec. 08, 8 Marks)
Q. 2 Draw 2-input standard TTL NAND gate circuit and explain operation of transistor (ON/OFF) with suitable conditions and truth table. (Dec. 09, May 10, 10 Marks)
Q. 3 Explain the working of two input TTL NAND gate with active pull up. Consider various input, output states for explanation. (May 12, Dec. 12, 8 Marks)
Q. 4 Draw and explain 2 inputs TTL NAND gate with totem pole output. (Dec. 13, 6 Marks)

Circuit diagram :

A and B are the two inputs, while Y is the output terminal of this NAND gate.

1. A and B are the input terminals. The input voltages A and B can be either LOW (zero Volt ideally) or HIGH (+ VCC ideally).

… let us replace transistor Q1 by its equivalent circuit as shown in Fig. 1.4.3.

(C-1012) Fig. 1.4.3 : Transistor Q1 is replaced by its equivalent

2. A and B both LOW (A = B = 0) :
If A and B both are connected to ground, then both the B-E junctions of transistor Q1 are forward biased.
Hence diodes D1 and D2 in Fig. 1.4.3 will conduct to force the voltage at point C in Fig. 1.4.3 to 0.7 V.
The equivalent circuit for this input condition is shown in Fig. 1.4.4(a).

3. Either A or B LOW (A = 0 or B = 0) :
… remains OFF.
As Q3 acts as an emitter follower, output Y will be pulled to VCC.
(C-6373)
The equivalent circuit for this mode is shown in Fig. 1.4.4(b).

4. A and B both HIGH (A = B = 1) :
As Q2 conducts, the voltage at X will drop down and Q3 will be OFF, whereas the voltage at Z (across R3) will increase to a sufficient level to turn ON Q4.
As Q4 goes into saturation, the output voltage Y will be pulled down to a low voltage.
Y = 0 … For A = B = 1
The equivalent circuit for this mode is shown in Fig. 1.4.4(c).
(C-1014) Fig. 1.4.4(c) : Equivalent circuit for A = B = 1
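The case-by-case analysis above fully determines the gate's truth table. The short Python sketch below only reproduces that logical behaviour together with the ON/OFF states of Q2, Q3 and Q4 described in the text; it is an illustrative model, not an electrical simulation, and the simplification that Q2/Q4 conduct only when both inputs are high follows the description given above.

# Illustrative model of the 2-input TTL NAND gate with totem-pole output.

def ttl_nand(a: int, b: int) -> dict:
    both_high = (a == 1 and b == 1)
    return {
        "A": a, "B": b,
        "Q2": "ON" if both_high else "OFF",   # phase splitter conducts only for A = B = 1
        "Q3": "OFF" if both_high else "ON",   # pull-up emitter follower
        "Q4": "ON" if both_high else "OFF",   # pull-down transistor saturates for A = B = 1
        "Y": 0 if both_high else 1,
    }

for a in (0, 1):
    for b in (0, 1):
        print(ttl_nand(a, b))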
(C-1015) Fig. 1.4.5(a) : No power dissipation in R3

1.4.3 Totem-pole (Active Pull up) Output Stage :
SPPU : May 05, Dec. 07.

University Questions.
Q. 1 Give advantages and disadvantages of totem-pole arrangement.

The Totem-pole output is also known as active pull-up.

Advantages of totem-pole output stage :

The advantages of using the totem-pole output stage are as follows :
1. With Q3 in the circuit, the current flowing through R3 will be equal to zero when the output Y = 0, that means when Q4 is ON, as shown in Fig. 1.4.5(a). This is important because it reduces the power dissipation taking place in the circuit.
2. … the time constant will be very short for charging up any capacitive load on the output, as shown in Fig. 1.4.5(b).

Disadvantages of Totem-pole output :
1. Q4 in the totem-pole output turns OFF more slowly than Q3 turns ON.
2. So before Q4 is completely turned OFF, Q3 will come into conduction. So for a very short duration of a few nanoseconds, both the transistors will be simultaneously ON.
3. This is called as cross conduction and it will draw a relatively large current (30 to 40 mA) from the 5 V supply.

Function of diode D :

Diode D is added in the circuit in order to keep Q3 OFF when Q4 is already conducting.
It is important to avoid simultaneous conduction of Q3 and Q4, because it will lead to cross conduction and will increase the power dissipation.
Thus D is used for successfully avoiding the cross conduction.

Function of R3 :

During the cross conduction, if R3 is not used then there will be no current limiting element in series with Q3 and Q4, and a heavy current will be drawn from the source which can damage the IC.
This can be avoided by limiting the current by inserting resistor R3 in series with Q3.
1.4.4 Unconnected Inputs :

If any input of a TTL gate is left open, disconnected or floating, then the corresponding base emitter junction of the input transistor Q1 is not forward biased, as shown in Fig. 1.4.6(a).

Therefore the open or floating input is equivalent to a logical 1 applied to that input.

Hence in TTL ICs all the unconnected inputs are treated … suitable resistor.

1.4.5 Clamping Diodes :

… oscillations with positive and negative half cycles). Due to ringing, the inputs will be subjected to negative …

(C-1019) Fig. 1.4.6 : (a) Clamping diodes (b) Effect of clamping diodes

These are fast recovery diodes. They are forward biased during the negative half cycles of the ringing sinusoidal waveform. Hence the negative input voltage will be restricted (clamped) to – 0.7 Volt approximately, as …

… from 0 °C to 70 °C, over a supply voltage range of 4.75 to 5.25 V.

1.4.7 Three Input TTL NAND Gate :
SPPU : Dec. 10, May 11.

The circuit diagram of a three input NAND gate is as shown in Fig. 1.4.7.

Note that the multiple emitter transistor has three emitter terminals, which act as the inputs A, B and C to the NAND gate.

Table 1.4.2 shows the truth table and the status of various transistors in the circuit.

(C-6351) Table 1.4.2 : Truth table of the 3-input NAND gate
1.5 Open Collector Outputs (TTL) :
SPPU : May 08, Dec. 11, May 15.

University Questions.
Q. 1 What is the advantage of open collector output ? Justify your answer with suitable circuit diagram.

We have seen that the gates having totem-pole output cannot be wired ANDed. Such a connection becomes possible if another type …

Circuit diagram :

You will realize that this is the same TTL NAND gate which we have discussed earlier, but with R3 and Q3 …

The principle of operation of this circuit is exactly the same as that of the two input NAND gate discussed in … Y = 0 if all the inputs are 1.

The collector point of Q4 is brought out as the output, as shown in Fig. 1.5.1; therefore it is called as open collector output.

For proper operation it is necessary to connect an external resistance R3 between VCC and the open collector output, as shown in Fig. 1.5.1.

This resistance is called as pull up resistance.

(C-1027) Fig. 1.5.1 : Open collector 2 input NAND gate

Operation :

1. With A = B = 0 :
With A = B = 0, both the BE junctions of Q1 are forward biased. So Q2 remains OFF.
Hence no current flows through R4. So VZ ≈ 0 V.
Therefore Q4 is OFF and its collector voltage is equal to VCC. So Y = 1 when A = B = 0.

2. …

3. With A = B = 1 :
When both the inputs are high, transistor Q1 is turned OFF.
So Q2 will be turned ON. Sufficient voltage is developed across R4. Base current is applied to Q4 and Q4 goes into saturation.
So the output voltage is equal to VCE (sat) of Q4, which is very small. Thus Y = 0 when A = B = 1. The equivalent circuit for this mode is shown in Fig. 1.5.2.

(C-1028) Fig. 1.5.2 : Equivalent circuit of open collector 2-input NAND gate with A = B = 1

1.5.1 Disadvantages of Open Collector Output :

1. The value of pull up resistance is high (a few kΩ). Therefore if the load capacitance is large then the RC time constant (R3C) becomes large. This slows down the switching speed of Q4. Therefore the gates having an open collector output will be slow.
2. The second disadvantage is increased power dissipation. When Q4 is ON, a large current flows through the pull up resistor R3. Hence the power dissipation is increased. This problem is eliminated if we use the totem-pole output arrangement.

1.5.2 Advantage of Open Collector Output :

Wired ANDing means connecting the outputs of gates together to obtain the AND function.

It is possible to connect the outputs of two or more gates together, as shown in Fig. 1.5.3(a).

The wired ANDing is represented schematically as shown in Fig. 1.5.3(b).

(C-1029) Fig. 1.5.3(a) : Wired ANDing of two open collector TTL gates
(C-1029) Fig. 1.5.3(b) : Symbol of wired ANDing
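Because an open collector output can only pull the common node low (or release it), tying several outputs to one pull-up resistor yields the AND of the individual outputs. The sketch below models this behaviour in Python; the gate count and input values are arbitrary example data, and the function names are illustrative only.

# Minimal sketch of wired-ANDing of open collector outputs.
# Each output is 0 (its pull-down transistor Q4 is ON and sinks the node) or
# 1 (Q4 is OFF and the node is released). The common pull-up resistor pulls
# the shared node high only when every connected output transistor is OFF.

def open_collector_nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def wired_and(outputs: list[int]) -> int:
    """Shared node is 1 only if all connected outputs are 1 (all Q4s OFF)."""
    return 1 if all(outputs) else 0

# Example : two open collector NAND gates sharing one pull-up resistor.
y_a = open_collector_nand(1, 1)   # 0 -> this gate pulls the node low
y_b = open_collector_nand(1, 0)   # 1 -> this gate releases the node
print("wired-AND output :", wired_and([y_a, y_b]))   # 0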
Q4A and Q4B are the output transistors of Gate-A and Gate-B respectively. A common pull up resistance is used for all the output transistors.

The transistors Q4A and Q4B are operated as switches.

The output will be high (1) if and only if all the transistors are in their OFF state.

This is the reason why such a connection is called as …

(c) Wired ANDing of inverter outputs
(C-1030) Fig. 1.5.4(a) : Equivalent circuit of wired ANDing

1.5.4 Comparison of Totem-pole and Open Collector Outputs :

Sr. No. | Parameter | Totem-pole | Open collector
1. | Circuit components on the output side | Q3 (pull up transistor), D and Q4 (pull down transistor) are used. | Only the pull down transistor Q4 is used.
… | … | … pull up transistor Q3. | … flowing through R3.
5. | Speed | Operating speed is high. | Operating speed is low.

1.6 Standard TTL Characteristics :
SPPU : May 11, Dec. 11, May 12, Dec. 14, Dec. 15, Dec. 17

University Questions.
Q. … Mention typical values for standard TTL family : 1. Propagation delay 2. Fan-out 3. VIL, VIH 4. Noise margin
Q. … 1. Noise Immunity 2. High level input voltage (VNH) 3. Figure of Merit (Dec. 15, 6 Marks)
Q. 4 Explain standard TTL characteristics in brief.

Noise margin :

We have already defined the noise margins as,
High level noise margin, VNH = VOH (min) – VIH (min)
and Low level noise margin, VNL = VIL (max) – VOL (max)
VNL = 0.8 – 0.4 = 0.4 V.

Thus the noise margin for the TTL family is 0.4 V. This means that as long as the induced noise voltage is less than 0.4 …

Power dissipation :

The average power dissipation for the standard TTL 74 series is approximately 10 mW.
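The low-level figure above, and the corresponding high-level figure, follow directly from the standard TTL voltage levels listed in Table 1.6.3 below (VOH (min) = 2.4 V, VIH (min) = 2 V, VIL (max) = 0.8 V, VOL (max) = 0.4 V). A quick check in Python:

# Noise margins of the standard TTL family, computed from the worst-case
# voltage levels quoted in Table 1.6.3 (all values in volts).
VOH_MIN, VIH_MIN = 2.4, 2.0
VIL_MAX, VOL_MAX = 0.8, 0.4

VNH = VOH_MIN - VIH_MIN   # high level noise margin
VNL = VIL_MAX - VOL_MAX   # low level noise margin
print(f"VNH = {VNH:.1f} V, VNL = {VNL:.1f} V")   # 0.4 V each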
Table 1.6.2 shows the input and output logic voltage …

All the important characteristics of TTL logic family are …

Table 1.6.3 : Important parameters of TTL logic family

Sr. No. | Parameter | Values
1. | Supply voltage | 74 series : 4.75 to 5.25 V ; 54 series : 4.5 to 5.5 V
2. | Temperature range | 74 series : 0 to 70 °C ; 54 series : – 55 to 125 °C
3. | Voltage levels | VIL (max) = 0.8 V, VOL (max) = 0.4 V, VIH (min) = 2 V, VOH (min) = 2.4 V
4. | Noise margin | 0.4 V
5. | Power dissipation | 10 mW
6. | Propagation delay | 10 nanosec.
7. | Fan out | 10
8. | Figure of merit | 100

1.6.1 Advantages of TTL :

1. TTL circuits are fast.
2. Low propagation delay.
3. Power dissipation is not dependent on frequency.

1.8 CMOS Logic :

Definition :
CMOS stands for complementary MOSFET. It is obtained by using a p-channel MOSFET and an n-channel MOSFET simultaneously.

The p and n channel MOSFETs are connected in series, with their drains connected together and the output taken from the common drain point.

Input is applied at the common gate terminal formed by connecting the two gates together.

The 74C00 CMOS series is a group of CMOS circuits which are pin-for-pin and function-for-function compatible with the TTL 7400 devices.

For example, 74C32 is a quad 2-input OR gate in the CMOS family, whereas 7402 is a quad 2 input TTL gate in the 7400 TTL family.

1.8.1 CMOS NAND Gate :
SPPU : Dec. 11, Dec. 12.

University Questions.
Q. 1 Draw the structure of two input CMOS NAND gate. Explain its working. (Dec. 11, 4 Marks)
Q. 2 Draw and explain the working of a 2-input CMOS NAND gate. (Dec. 12, 8 Marks)
In logic circuits, the enhancement MOSFETs are used.

(a) 2-input CMOS NAND gate
(b) Equivalent circuit

Q1 and Q2 are p-channel MOSFETs. They are connected in parallel with each other.

Q3 and Q4 are n-channel MOSFETs. They are connected in series with each other.

Input A is connected to the gates of Q1 and Q3. So A controls the status of MOSFETs Q1 and Q3.

Input B is connected to the gates of Q2 and Q4. So B controls the status of MOSFETs Q2 and Q4.

1. A = B = 0 :
With A = 0 and B = 0, both the P-MOSFETs i.e. Q1 and Q2 will be ON. But both the N-MOSFETs i.e. Q3 and Q4 will be OFF.
So Y = 1 if A = B = 0.
(C-1059) Fig. 1.8.2(a) : Equivalent circuit for A = B = 0

2. With A = 0 and B = 1 :
… So Y = 1 if A = 0 and B = 1.
(C-1059) Fig. 1.8.2(b) : Equivalent circuit for A = 0, B = 1

3. With A = 1 and B = 0 :
… turned OFF.
As seen from the equivalent circuit of Fig. 1.8.2(c), the output Y = + VDD (logic 1).
So Y = 1 if A = 1 and B = 0.

4. With A = 1, B = 1 :
With A = B = 1 both the P-MOSFETs i.e. Q1 and Q2 will be OFF and both the N-MOSFETs i.e. Q3 and Q4 will be ON.
The equivalent circuit of this mode is shown in Fig. 1.8.2(d).
(C-1060) Fig. 1.8.2(d) : Equivalent circuit for A = 1, B = 1
It shows that output Y = 0 (LOW).
So Y = 0 if A = B = 1.

… gate.

Table 1.8.1 : Truth Table of a CMOS NAND gate

Inputs (A B) | Transistors (Q1 Q2 Q3 Q4) | Output Y
0 0 | ON ON OFF OFF | 1
0 1 | ON OFF OFF ON | 1
1 0 | OFF ON ON OFF | 1
1 1 | OFF OFF ON ON | 0

1.9 Standard CMOS Characteristics :

The 4000/14000 series and 74C series CMOS ICs are capable of operating over a wide range of power supply voltage (typically 3 V to 15 V).

That means the CMOS ICs from these series are extremely versatile. They can operate on 3 V batteries as well as on 5 V TTL compatible power supplies.

These ICs can operate on higher power supply voltages when a higher noise margin is required.

The other CMOS families such as 74 HC/HCT, 74 AC/ACT and 74 AHC/AHCT operate over a voltage range of 2 to 6 V.

… levels for different CMOS series.

Table 1.9.1 : Logic voltage levels (in volts) with VDD = VCC = + 5 V

Parameter | 4000B | 74HC | 74HCT | 74AC | 74ACT | 74AHC | 74AHCT | TTL (74 …)
VIH (min) | 3.5 | …
VOL (max) | 0.05 | 0.1 | 0.1 | 0.1 | 0.1 | 0.44 | 0.1 | 0.4, 0.5, 0.5, 0.4
VNH | 1.45 | 1.4 | 2.9 | 1.4 | 2.9 | 0.55 | 1.15 | 0.4, 0.7, 0.7, 0.7
VNL | 1.45 | 0.9 | 0.7 | 1.4 | 0.7 | 1.21 | 0.7 | 0.4, 0.3, 0.3, 0.4

1. All the voltage levels shown in Table 1.9.1 correspond to …

Table 1.9.1 contains the high level and low level noise margins VNH and VNL for each CMOS and TTL series.

It can be observed that the CMOS devices will have higher noise margins as compared to those of TTL. So CMOS ICs should be preferred to TTL for operation in a noisy environment.

The noise margins in CMOS will increase further if the supply voltages are increased further.
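The NAND behaviour described above follows from the structure of the two networks: the parallel PMOS pair connects the output to +VDD whenever at least one input is 0, while the series NMOS pair connects it to ground only when both inputs are 1. The following switch-level sketch, with ideal-switch assumptions and illustrative names, reproduces that reasoning:

# Minimal switch-level sketch of the 2-input CMOS NAND gate described above.
# PMOS devices conduct when their gate is 0; NMOS devices conduct when it is 1.
# Q1, Q2 form the parallel pull-up network; Q3, Q4 the series pull-down network.

def cmos_nand(a: int, b: int) -> int:
    q1_on = (a == 0)   # PMOS, gate driven by A
    q2_on = (b == 0)   # PMOS, gate driven by B
    q3_on = (a == 1)   # NMOS, gate driven by A
    q4_on = (b == 1)   # NMOS, gate driven by B

    pull_up   = q1_on or q2_on     # parallel PMOS : any one connects Y to +VDD
    pull_down = q3_on and q4_on    # series NMOS : both needed to connect Y to ground

    assert pull_up != pull_down    # in a static CMOS gate exactly one network conducts
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))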
1.9.4 Power Dissipation : SPPU : May 06.

University Questions.
Q. 1 Define power dissipation and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

Typically the dc power dissipation is 2.5 nW per gate when VDD = 5 V and 10 nW for VDD = 10 V. This is too small as compared to TTL gates (PD = 10 mW).

… operated systems.

The dependence of power dissipation on operating frequency is demonstrated in Table 1.9.2.

Table 1.9.2 : Relation between PD and frequency

Frequency | 0 (dc) | 100 kHz | 1 MHz
Power dissipation PD | 10 nW | 0.1 mW | 1 mW

1.9.5 Fan Out : SPPU : May 06.

University Questions.
Q. 1 Define fan out and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

The input resistance of CMOS devices is very high (10^12 Ω). So their input current is very very small, almost zero.

Therefore one CMOS gate can drive a large number of other CMOS gates. Hence the fan out of CMOS devices will be large as compared to the fan out of TTL.

These capacitors act as a load on the driving gate, as shown in Fig. 1.9.1.

The charging current for these capacitors has to be supplied by the driving gate. This current should not be too large. This will limit the fan-out to 50.

Speed of Operation :

University Questions.
Q. 1 Define speed of operation and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

The output resistance of CMOS is low in both the states (0 or 1) of the output.

So even though it has to drive large capacitive loads, the switching speed can still be faster than that of the NMOS or PMOS devices.

The NAND gate of the 4000 series has the following values of propagation delays :
Average tpd = 50 nS ….. at VDD = 5 V
Average tpd = 25 nS ….. at VDD = 10 V

The average time delays for various CMOS ICs are given in Table 1.9.3.

(C-6352) Table 1.9.3 : Switching speeds for various CMOS ICs at VDD = 5 V
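Table 1.9.2 reflects the fact that CMOS dissipation is dominated by charging and discharging load capacitance, so it grows with switching frequency. The sketch below uses the standard dynamic-power approximation P ≈ C · VDD² · f on top of the quiescent figure quoted above; the formula and the 40 pF capacitance are not taken from this chapter and serve only as an assumed illustration of the trend.

# Assumed illustration : CMOS power grows roughly linearly with frequency,
# using the standard approximation P ~ C * VDD^2 * f (not quoted in the text).

C_LOAD   = 40e-12   # assumed effective switched capacitance, 40 pF
VDD      = 5.0      # supply voltage, volts
P_STATIC = 10e-9    # quiescent (dc) dissipation per gate at VDD = 5 V, from the text

def total_power(freq_hz: float) -> float:
    return P_STATIC + C_LOAD * VDD ** 2 * freq_hz   # watts

for f in (0.0, 100e3, 1e6):
    print(f"f = {f/1e3:8.1f} kHz -> P = {total_power(f)*1e3:.4f} mW")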
All the CMOS inputs should be either connected to 0 V (ground) or VDD, or to other inputs.

This is necessary to avoid permanent damage of the CMOS ICs.

Sometimes there are some unused gates on a chip. The inputs of such gates should also be connected to ground or + VDD.

The CMOS ICs are damaged due to voltages induced at the floating inputs by noise or static charges.

Such voltages can bias the P-MOS or N-MOS into their conduction state, which may cause overheating and damage.

1.9.8 Advantages of CMOS : SPPU : Dec. 07.

University Questions.
Q. 1 What are the advantages of CMOS devices over TTL devices ? Explain in short. (Dec. 07, 4 Marks)

1.10 Comparison of CMOS and TTL :

Q. 3 Compare TTL and CMOS logic family. Draw CMOS NOR gate. (Dec. 14, 6 Marks)

Table 1.10.1 : Comparison of CMOS and TTL

Sr. No. | Parameter | CMOS | TTL
1. | Device used | N-channel MOSFET and P-channel MOSFET | Bipolar junction transistor
2. | VIH (min) | 3.5 V (VDD = 5 V) | 2 V
3. | VIL (max) | 1.5 V | 0.8 V
4. | VOH (min) | 4.95 V | 2.7 V
5. | VOL (max) | 0.05 V | 0.4 V
6. | High level noise margin | VNH = 1.45 V | 0.4 V
7. | Low level noise margin | VNL = 1.45 V | 0.4 V
space while more space.
fabricating an IC. Quad 2-Input NAND = 4093 (schmitt trigger inputs)
17. Operating MOSFETs are Transistors are Dual 2-Input NAND = 40107 open drain outputs
ns e
areas operated as operated in Other circuits :
io dg
switches. i.e. in saturation or
4008 4-bit binary full adder
the ohmic region cut off regions.
or cut off region. 4006 Shift Registers
at le supply
voltage
to 15 V. 5 V. 4024
4026
Binary Counter
7-Segment Decoder
ic w
1.11 Case Study : CMOS 4000 series ICs : 4027 Dual M-S JK Flip-Flop
10:1 Multiplexer
bl no
The popular CMOS series are 4000/14000 series, 74C 4046 PLL
Multivibrator
Logic gates :
Q. 1 Which are the different logic families ? Write their
1. One input logic gates : characteristics.
Te
Quad Buffer/Inverter = 4041 (4x CMOS drive) Q. 2 Explain the use of multi-emitter inputs.
Quad Buffer = 40109 (dual power-rails for voltage-level Q. 3 Define the following terms regarding a logic family :
translation)
1. Noise margin
Hex Buffer = 4504 (dual power-rails for voltage-level
2. Propagation delay
translation)
Q. 4 Compare the performance of TTL and CMOS logic.
Hex Buffer = 4050 (4x 74LS drive)
Q. 5 Explain the features of complementary symmetry
Hex Inverter = 4049 (4x 74LS drive)
logic (CMOS).
Hex Inverter = 4069
Q. 6 Mention the advantages and disadvantages of TTL,
Hex Inverter = 40106 (S inputs)
and CMOS IC families.
2. Two to eight input logic gates :
Q. 7 Define any four characteristics of logic gates.
Configuration AND NAND OR NOR XOR XNOR
Q. 8 Classify the IC according to their scale of integration.
Quad 2-Input 4081 4011 4071 4001 4070 4077
Q. 9 Explain what is meant by TTL ?
Q. 10 Draw the circuit diagram of two input TTL NAND Q. 12 Give the important characteristics of CMOS logic
gate and explain its function. family and explain their importance.
Q. 11 Explain briefly the operation of CMOS NAND gate. Q. 13 State specifications of standard TTL family.
ns e
io dg
at le
ic w
bl no
Pu K
ch
Chapter 2

Number Systems and Codes
Syllabus
Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions.
Chapter Contents
2.1 Introduction
2.2 System or Circuit
2.3 Binary Logic and Logic Levels
2.7 Octal Number System
2.11 Conversion from Binary to Other Systems
2.12 Conversion from Other Systems to Binary System
2.13 Conversion from Octal to Other Systems
2.17 Binary Coded Decimal (BCD) Code
ns e
But today, the digital electronics is used in many other
applications such as : TV, Radar, Military systems,
io dg
Medical equipments, Communication systems, Industrial (B-1706) Fig. 2.2.1 : Digital circuit
process control and Consumer electronics. Examples of digital systems :
Signals :
The examples of digital circuits are adders, subtractors,
at le
We can define “signal” as a physical quantity, which
contains some information and which is a function of
one or more independent variables.
registers, flip-flops, counters, microprocessors, digital
calculators, computers etc.
ic w
2.3 Binary Logic and Logic Levels :
The signals can be of two types :
otherwise it is OFF.
finite number of distinct values.
ns e
positive logic. Also we will assume the logic 0 level
Hence we can implement a common rule for all the
corresponds to 0 Volts and logic 1 level
io dg
numbering systems as follows.
corresponds to + 5 V.
For a general number, we have to multiply each of digit
2.4 Number Systems : by some power of base (B) or radix as shown in
Fig. 2.4.2.
Definition :
at le
A number system defines a set of values used to
represent a quantity.
ic w
We talk about the number of people attending class,
the number of modules taken per student and also use
numbers to represent grades obtained by students in
bl no
tests.
(C-6) Fig. 2.4.2
The study of number systems is not just limited to
computers. 4. Column numbers :
Pu K
We apply numbers every day and knowing how The column number is the number assigned to the
numbers work will give us an insight into how a digits placed in relation with the decimal point.
computer manipulates and stores numbers.
ch
2.5 The Decimal Number System : 2.6 The Binary Number System :
Definition : Definition :
The number system which has a base of 10, is called as A number system with a radix 2 is called as the binary
We are all familiar with counting and mathematics that Most modern computer systems use the binary logic for
Looking at its make up will help us to understand other A computer cannot operate on the decimal number
ns e
numbering systems. system.
io dg
2.5.1 Characteristics of a Decimal System :
and 1.
Some of the important characteristics of a decimal The binary number system works like the decimal
system are : number system except one change. It uses the
1.
2. at le
It uses the base of 10.
(C-8) Fig. 2.5.1 : Positions and corresponding weighted values (C-10) Fig. 2.6.1 : Weights for different
for a decimal system positions for a binary system
Te
Most Significant Digit (MSD) : The binary digits (0 and 1) are also called as bits. Thus
The leftmost digit having the highest weight is called as binary system is a two bit system.
the most significant digit of a number. The leftmost bit in a given binary number with the
Least Significant Digit (LSD) : highest weight is called as Most Significant Bit (MSB)
whereas the rightmost bit in a given number with the
The rightmost digit having the lowest weight is called as
lowest weight is called as Least Significant Bit (LSB).
the least significant digit of a number.
Ex. 2.6.1 : Express the binary number 1011.011 in terms
Ex. 2.5.1 : Represent the decimal number 532.86 in terms
of powers of 2.
of powers of 10.
Soln. :
Soln. :
Step 1 : Express the given number in powers of 2 :
The required representation is shown in Fig. P. 2.5.1.
2.6.1 Binary Number Formats : 4. Each digit has a different multiple of base. This is as
shown in Fig. 2.7.1.
We typically write binary numbers as a sequence of bits
(bits is short for binary digits).
ns e
octal system
io dg
Ex. 2.7.1 : Represent the octal number 645 in power of 8.
Soln. :
Representation in power of 8 :
2.7 at le
Octal Number System :
ic w
Definition :
(C-14) Fig. P. 2.7.1
A number system with a radix 8 is called as the octal
2.8 Hexadecimal Number System :
bl no
number system.
Features : Definition :
The important features of the octal number systems are A number system with a radix 16 is called as the
Pu K
2. The number of values assumed by each digit : The important features of a hexadecimal number system
are as follows :
Each digit in the octal system will assume 8 different
values from 0 to 7 (0, 1, 2,….., 6, 7). 1. Base : The base of hexadecimal system is 16.
Te
The largest value of a digit in the octal system will be 7. The number of values assumed by each digit is 16.
That means the octal number higher than 7 will not be
The values include digits 0 through 9 and letters A, B, C,
8, instead of that it will be 10.
D, E, F. Hence the sixteen possible values are :
Table 2.7.1 gives you a clear idea about this.
0123456789ABCDEF
(C-7770) Table 2.7.1 : Octal numbers
0 represents the smallest digit value whereas F represents the largest digit value (15).
(C-7772) Table 2.8.1 : Hexadecimal digits and their values Ex. 2.8.1 : Represent the hexadecimal number 6DE in the
powers of 16.
Soln. :
ns e
(C-17) Fig. P. 2.8.1
io dg
2.9 Conversion of Number Systems :
at le
done by expanding the given number in a power series
and adding all the terms.
The hexadecimal (base 16) numbering system solves If the given number includes the radix point, then it is
this problem. necessary to separate the number into an integer part
1. Hex numbers are very compact. Then each part should be converted by considering
to hex.
2.10 Conversions Related to Decimal
Largest value of a digit : System :
The largest value of a digit in the hexadecimal number
Te
system is 15 and it is represented by F. In this section we will perform the following conversions
The hexadecimal number higher than F will be 10. related to the decimal system :
Table 2.8.2 gives you a clear idea about hexadecimal 1. Decimal to other systems.
numbers.
2. Other systems to decimal.
The largest two digit hexadecimal number is FF which
corresponds to 255 decimal. The next higher number 2.10.1 Conversion from any Radix r to
Decimal :
after FF is 100.
Positional weights : The general procedure for conversion from binary to
The positional weights for a hexadecimal number to the decimal is as given below :
left and right of decimal point are as shown in Fig. 2.8.1. Steps to be followed :
Step 4 : Add all the product terms to get the decimal equivalent.
The following examples demonstrate the conversion of a binary, an octal and a hex number into its decimal equivalent.
Binary to decimal :
Ex. 2.10.1 : Convert the binary number (1011.01)2 into its decimal equivalent.
Soln. :
Steps 1, 2 and 3 : (C-19)
Step 4 : Addition :
(1 0 1 1 . 0 1)2 = (11.25)10 …Ans.
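The same positional-weight procedure works for any base. The following short Python sketch (illustrative only; the helper name to_decimal is ours) multiplies each digit by the appropriate power of the radix and adds the products, as in Steps 1 to 4 above.

# Illustrative sketch: convert a string in base r (with optional radix point)
# to its decimal value by summing digit * r**position.
def to_decimal(number, base):
    digits = "0123456789ABCDEF"
    if "." in number:
        int_part, frac_part = number.upper().split(".")
    else:
        int_part, frac_part = number.upper(), ""
    value = 0.0
    for position, d in enumerate(reversed(int_part)):
        value += digits.index(d) * base ** position        # weights r^0, r^1, ...
    for position, d in enumerate(frac_part, start=1):
        value += digits.index(d) * base ** (-position)      # weights r^-1, r^-2, ...
    return value

print(to_decimal("1011.01", 2))    # 11.25, as in the worked example above
print(to_decimal("4C8.2", 16))     # 1224.125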
Octal to decimal :
Ex. 2.10.2 : Convert the octal number (314)8 into its decimal equivalent.
Soln. :
Ex. 2.10.3 : Convert the octal number (365.24)8 into its equivalent decimal number.
Soln. :
(C-20)
Hex to decimal :
Ex. 2.10.4 : Convert the hex number (4C8.2)16 into its equivalent decimal number.
Soln. :
Ex. 2.10.6 : Perform the following operation :
(1001.10)2 = (______)10 (May 12, 2 Marks)
Soln. : Solve it yourself.
(1001.10)2 = (9.5)10

2.10.2 Conversion from Decimal to Other Systems :
The procedures for converting the integer part and the fractional part are completely different from each other.
2.10.2.1 Successive Division for Integer Part Conversion :
Step 1 : Divide the integer part of the given decimal number by the base (radix) of the target system and note down the remainder.

Decimal to Binary :
Ex. 2.10.7 : Convert (105)10 to the equivalent binary number.
Soln. :
Refer to Fig. P. 2.10.7, which shows a simpler method. We divide the given number repeatedly by the radix or base of the binary system, which is 2.

(C-24) Fig. P. 2.10.8 : Decimal to octal conversion
(204)10 = (314)8 …Ans.
Ex. 2.10.9 : Do the required conversion for the following number :
(1000)10 = (_______)8 (Dec. 11, 6 Marks)
Soln. :
(1000)10 = (_______)8
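A minimal Python sketch of the successive-division method is given below (the helper name to_base is ours; the decimal integer and the target base are assumed to be supplied). It records the remainders and reads them from last to first, exactly as in Fig. P. 2.10.7.

# Illustrative sketch of successive division for the integer part.
def to_base(n, base):
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, r = divmod(n, base)   # quotient carried forward, remainder recorded
        out = digits[r] + out    # remainders read in reverse order
    return out

print(to_base(105, 2))    # 1101001  (Ex. 2.10.7)
print(to_base(1000, 8))   # 1750     (Ex. 2.10.9)
print(to_base(259, 16))   # 103      (Ex. 2.10.11)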
bl no
(C-6106)
Decimal to Hex :
Ex. 2.10.11 : Convert the decimal number 259 into its hex equivalent.
Soln. :
The conversion takes place as follows :
(C-3205)
Decimal to Octal :
Soln. :
(1024)10 = (______)16 :

Fractional Part Conversion :
Now let us see how to convert the fractional part of a decimal number into any other radix.
Steps to be followed :
Decimal to Binary :
Soln. :
So (0.42)10 = (0.01101)2 …Ans.
Note : We could have continued further, but the conversion is generally carried out only up to 5 digits.
Decimal to Octal :
Soln. :
Decimal to Hex :
Soln. :
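The fractional part is handled by repeated multiplication : multiply the fraction by the base, record the integer carry as the next digit, and continue with the remaining fraction. A short Python sketch follows (the helper name frac_to_base is ours; it truncates to a fixed number of digits, as suggested by the note above), covering binary, octal and hex targets.

# Illustrative sketch of successive multiplication for the fractional part.
def frac_to_base(frac, base, ndigits=5):
    digits = "0123456789ABCDEF"
    out = ""
    for _ in range(ndigits):
        frac *= base
        d = int(frac)        # the integer carry becomes the next digit
        out += digits[d]
        frac -= d            # keep only the remaining fraction
    return "0." + out

print(frac_to_base(0.42, 2))      # 0.01101  (matches the example above)
print(frac_to_base(0.31, 16))     # 0.4F5C2  (used again later in this chapter)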
ns e
io dg
(C-30)
Now let us see the conversion of mixed decimal number Decimal to Octal :
Pu K
number.
Follow the procedure given below for such a conversion.
Soln. :
Steps to be followed :
Step 1 : Separate the integer and fractional parts :
Te
Soln. :
ns e
Step 2 : Convert the integer part :
(C-35)
Step 3 : Convert the fractional part into hex :

To convert a binary number into its equivalent octal number, follow the procedure given below :
Step 1 : Divide the binary bits into groups of 3, starting from the LSB.
Step 2 : Convert each group into its equivalent decimal. As the number of bits in each group is restricted to 3, the decimal number will be the same as the octal digit.
(0.31)10 = (0.4F5C2)16
Ex. 2.10.20 : Convert the following numbers into equivalent decimal numbers :
1. (327.4051)8
2. (5A.FF)16
3. (101110111)2
4. (3FFF)16 (Dec. 12, 8 Marks)
Soln. : Solve it yourself.
Ans. :

Ex. 2.11.1 : Convert the binary number (1 1 0 1 0 0 1 0)2 into its equivalent octal number.
(C-36)
Note : In the third group, there are only 2 bits. Hence we have assumed the group to be 011 instead of 11. Always add the extra zeros on the MSB side, not on the LSB side.

Ex. 2.11.2 : Convert the following binary numbers to octal and then to decimal. Show the steps of the conversions.
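The 3-bit grouping used in Ex. 2.11.1 and Ex. 2.11.2 can be sketched in a few lines of Python (bin_to_octal is our own illustrative helper); it pads the extra zeros on the MSB side exactly as the note above requires.

# Illustrative sketch: binary -> octal by grouping the bits in threes from the LSB.
def bin_to_octal(bits):
    bits = bits.zfill((len(bits) + 2) // 3 * 3)            # pad extra zeros on the MSB side
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(bin_to_octal("11010010"))   # 322  (Ex. 2.11.1)
print(bin_to_octal("10110011"))   # 263  (used in a later worked example)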
ns e
0010 into the equivalent hex number.
io dg
Soln. :
1. (1010.11)Decimal
2. (428.10)Decimal Dec. 09, 6 Marks.
N = 64 + 16 + 3 + 0.25 + 0.078125
Soln. : Solve it yourself.
= 83.328125
Te
Ans. :
(123.25)8 = (83.328125)10 …Ans.
1. (1010.11)10 = (1111110010.0001)2
3. (10110011)2 = (?)8 = (?)10 2. (428.10)10 = (110101100.0001)2
Step 1 : Convert binary to octal : (C-6327) Ex. 2.12.2 : Express the following numbers in binary,
show the step-by-step equations and
calculations :
1. (110.110)Decimal 2. (234.234)Decimal
May 10, 6 Marks.
Soln. : Solve it yourself.
Ans. :
(10110011)2 = (263)8 …Ans.
1. (110.110)10 = (1101110.0001)2
(C-6366)
ns e
Ex. 2.12.4 : Convert (364.25)8 into its equivalent binary
io dg
number.
Soln. :
Follow the same procedure explained in the previous
at le
example. (C-7434) Fig. P. 2.12.6(a)
(C-7437)
(C-38) Fig. P. 2.12.5 : Hex to binary conversion (5 D B)16 = (0101 1101 1011 . 1111 1010)2 ...Ans.
Hence (A F B 2) = (1010 1111 1011 0010)2 …Ans.
16
Ex. 2.12.7 : Express the following numbers in binary
Ex. 2.12.6 : Convert the following numbers in Binary format. Write step by step solution.
form : 1. (7762)octal
1. (125.12)10 = (?)2
2. (432A)hex
2. (337.025)8 = (?)2
3. (2946)decimal
3. (5DB.FA)16 = (?)2 Dec. 18, 6 Marks
4. (1101.11)decimal Dec. 10, 12 Marks.
ns e
We have already discussed the following two 3. Octal to decimal :
io dg
conversions : (777)8 = (1FF)16 ...Ans.
1. Octal to decimal 2. Octal to binary. (C-1949)
starting from the LSB side. Note : The bits are grouped starting from the fractional
Step 4 : Then convert each of this 3-bit group into an octal point and moving towards right.
digit.
Ex. 2.13.5 : Convert the hex number 4CA into its equivalent octal form.
Soln. :
Step 1 : Convert (4CA)16 into binary :
4    C    A
0100 1100 1010
Step 2 : Combine the 4-bit binary sections by removing the spaces :
(4 C A)16 = (0100 1100 1010)2
Step 3 : Group these binary bits into groups of 3 bits :
(C-42)
Step 4 :
(4 C A)16 = (2 3 1 2)8 …Ans.

To convert a fractional hex number into its equivalent octal number :
Step 1 : Convert the given fractional hex number into its equivalent binary number.
Step 2 : Group the binary bits into groups of 3 bits.

Conversion of mixed hex number to octal :
The procedure to be followed for the conversion of a mixed hex number is the same as the one discussed for fractional hex conversion.
Ex. 2.13.7 : Convert the hex number (68.4B)16 into its equivalent octal number.
Soln. :
(C-6368)
4. Form groups of 3 bits :
5. Convert into octal :
(68.4B)16 = (150.226)8 …Ans.
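Both conversions above go through binary as an intermediate step : expand each hex digit into 4 bits, then regroup the bits in threes. A small Python sketch of this route for the integer part is given below (hex_to_octal is our own name, not from the text).

# Illustrative sketch: hex -> binary (4-bit groups) -> octal (3-bit groups).
def hex_to_octal(hex_string):
    bits = "".join(format(int(d, 16), "04b") for d in hex_string)  # each hex digit -> 4 bits
    bits = bits.zfill((len(bits) + 2) // 3 * 3)                    # pad on the MSB side
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

print(hex_to_octal("4CA"))   # 2312  (Ex. 2.13.5)
print(hex_to_octal("68"))    # 150   (integer part of Ex. 2.13.7)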
Ex. 2.13.8 : Convert the following octal numbers into its
Te
(C-41)
Soln. : Solve it yourself.
ns e
2.14.1 Other Systems to Hex :
io dg
(C-6340)
We are supposed to discuss the following conversions :
(453.54)8 = (100101011.101100)2 ...Ans.
1. Decimal to Hex
Step 2 : Convert octal to hex :
2. Binary to Hex 3. Octal to Hex
Binary = (100101011.101100)2
at le
We have already discussed them.
conversion) :
1. (357.2)8 2. (453.54)8
(C-6341)
May 14, 6 Marks)
(453.54)8 = (12B.B)16 ...Ans.
Soln. :
Pu K
(C-6338) (C-806)
Te
(C-6339) Soln. :
Step 3 : Convert octal to decimal : Step 1 : Separate integer and fractional part :
(C-4807)
(C-5902)
ns e
(C-5903)
io dg
(357.3)8 = (EF.6)16 ...Ans.
at le (C-5904)
ic w
Step 4 : Combine results of steps 2 and 3 : (C-4807(a))
1. (675.625)10 2. (451)8
(C-5907) 3. (95.5)10 4. (11001011101)2.
(101011.111011)2 = (53.73)8 …Ans. (Dec. 12, 8 Marks)
Soln. : Solve it yourself.
Te
2. (451)8 = (129)16
3. (95.5)10 = (5F.80)H
(C-5706)
= (19.8CC)16
Ex. 2.14.6 : Convert the following octal numbers into its Step 2 : Group these binary bits into groups of 3 bits :
equivalent Hexadecimal, Binary and Decimal
numbers :
ns e
2. (0.7634)8 = (0.111110011100)2 = (0.F9C)16 =
io dg
(0.9755)10
at le
2.14.2 Hex to Other Systems :
(DEF)Hex :
…Ans.
ic w
1. Hex to Decimal. Conversion of hex to octal :
2. Hex to Binary Step 1 : Convert each hex digit into 4-bit binary word :
bl no
3. Hex to Octal.
1. (ABC)Hex. (C-1934)
1. (ABC)Hex :
(C-1935)
Conversion of hex to octal :
(DEF)16 = (3567)10 …Ans.
Step 1 : Convert each hex digit into 4-bit binary word :
Ex. 2.14.8 : Convert the following numbers, show all the
steps :
1. (101101.10101)2 = ( )10
2. (247)10 = ( )8
(C-1930)
3. (0.BF85)16 = ( )8 Dec. 14, 6 Marks)
1. (101101.10101)2 = (?)10
(C-5113)
(2598)10 = (A26)16
(C-4936)
ns e
N = 45.65625
io dg
(101101.10101)2 = (45.65625)10 ...Ans.
2. (247)10 = (?)8 :
at le (C-5114)
(0.675)10 = (0.ACCC)16
ic w
Step 4 : Combine results of steps 2 and 3 :
(C-4937) (2598.675)10 = (A26.ACCC)16 ...Ans.
bl no
(C-6342) (C-5115)
3. (A72E)16 = ( ? )8 :
(C-5117)
(C-5112)
(A72E)16 = (123456)8 ...Ans.
Soln. : (C-5706)
ns e
Step 1 : Separate integer and fractional part :
Ex. 2.14.11 : Do the required conversions for the following
io dg
numbers :
(BF8)16 = (______)10 Dec. 11, 2 Marks.
(C-5703)
Soln. : Solve it yourself.
at le
Step 2 : Convert the integer : Ans. :
(BF8)16 = (3064)10
ic w
Ex. 2.14.12 : Do the required conversions for the following
numbers :
(1FFF)16 = (_______)10 (May 12, 2 Marks)
bl no
2. (77BA)H = (0111 0111 1011 1010)2 = (73672)8 Each position of a number represents a specific weight.
2.15 Concept of Coding : Several systems of codes are used to express the
decimal digits 0 through 9. These codes have been
Definition : listed in Fig. 2.16.2.
When numbers, letters or text characters are The codes 8421, 2421, 3321 …. all are the weighted
ns e
represented by a specific group of symbols, it is said codes.
that the number, letter or word is being encoded.
io dg
In these codes each decimal digit is represented by a
And the group of symbols is called as the code. group of four bits as shown in Fig. 2.16.2.
Binary codes :
The digital data is represented, stored and transmitted
at le
as group of binary bits. Such a group of binary bits is
also called as binary code.
ic w
The binary codes can be used for representing the
numbers as well as alphanumeric letters.
bl no
3. Reflective codes. The codes in which the positional weights are not
4. Sequential codes. assigned, are known as non weighted codes.
Definition :
ns e
The positional weights associated to the binary bits in which is expressed as (0001 0000) in BCD.
io dg
BCD code are 8-4-2-1 with 1 corresponding to LSB and
2.17.1 Comparison with Binary :
8 corresponding to MSB.
3 2 1 0
1. BCD is less efficient than binary :
These weights are actually 2 , 2 , 2 and 2 which are
Conversion of a decimal number 78 into BCD and binary
same as those used in the normal binary system.
at le
Conversion from decimal to BCD :
is same as binary.
(C-6142)Table 2.17.1
2. BCD arithmetic is more complicated than Binary
arithmetic.
3. The advantage of a BCD code is that the conversion
from Decimal to BCD or vice versa is simpler.
Te
But in BCD code only the first ten of these are used
(0000 to 1001).
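Since each decimal digit maps independently to its own 4-bit group, decimal-to-BCD conversion is just a digit-by-digit lookup. A small Python sketch (to_bcd is an illustrative name of our own) reproduces the conversions tabulated just below.

# Illustrative sketch: encode each decimal digit as its own 4-bit (8421) group.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(25))    # 0010 0101
print(to_bcd(169))   # 0001 0110 1001
print(to_bcd(523))   # 0101 0010 0011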
25  = 0010 0101
169 = 0001 0110 1001
523 = 0101 0010 0011
The Excess-3 code words are derived from the 8421 BCD code words by adding (0011)2, i.e. (3)10, to each 8421 code word. The Excess-3 codes are obtained as shown later in this chapter.
2.17.2 Advantages of BCD Codes :
at le
numbers 0 to 9 only.
2.17.3 Disadvantages :
Excess – 3 codes for the single digit decimal numbers
1. (5)10
2. (37)10
ns e
Soln. : (25)10 = (11001)2 …Ans.
io dg
Decimal to excess - 3 conversion Step 2 : Convert decimal to BCD :
at le (5)10 = 0101
ic w
Step 2 : Convert to excess-3 :
2. N = (37)10 :
(C-7147) Fig. P. 2.18.3(b)
University Questions.
Ex. 2.18.3 : Do the following : Convert the decimal It has a very special feature that only one bit in the gray
number 25 into Binary format, Excess-3 code will change, each time the decimal number is
format and BCD format. May 18, 3 Marks incremented as shown in Table 2.19.1.
(25)10 = (?)2 = (?)EXCESS-3 = (?)BCD called as a unit distance code. The Gray code is a
Cyclic code.
(C-91) Table 2.19.1 Hence a 180 error in the disc position would result and
the user would not even notice it, because in binary
codes, any number of digits can change their values at a
given instant of time.
ns e
reduced to 33% whereas in a 4-bit code it reduces to
25%. This is the advantage of using gray code.
io dg
2.19.3 Gray-to-Binary Conversion :
Steps :
at le below :
University Questions.
2 addition (MOD - 2). It is equivalent to an Ex-OR
Q. 1 How gray codes are useful in digital system ?
operations hence denoted by sign instead of simple
ch
encoders. 0 0 = 0
A shaft position encoder produces a code word which 0 1 = 1
represents the angular position of the shaft.
1 0 = 1
2.19.2 Advantages of Gray Code :
1 1 = 0
SPPU : May 06.
The gray to binary conversion is illustrated in the
University Questions.
following example.
Q. 1 What are the advantages of gray code over pure
Ex. 2.19.1 : Convert 1110 gray to binary.
binary code ? (May 06, 2 Marks)
Soln. :
Consider the disc which produces the binary codes.
Imagine a situation in which the existing position is 111
and the position is about to change to 000.
ns e
2.19.4 Binary to Gray Conversion :
Steps :
A straight binary number can be converted into Gray code by following the steps given below :
Step 1 : Record the MSB as it is, because the MSB of the Gray code is the same as that of the binary number.
Step 2 : Add this bit to the bit in the next position, note down the sum and neglect any carry.
For a 4-bit number this gives :
G3 (MSB) = B3 (MSB),
G2 = B3 ⊕ B2, and so on for the remaining bits.
(C-2257)
(A short coded sketch of this rule is given after Ex. 2.20.1 below.)

Ex. 2.20.1 : Convert the binary number (110101)2 into BCD.
Soln. :
Step 1 : Conversion from binary to decimal :
(C-2255)
Step 2 : Decimal to BCD :
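As referenced in Section 2.19.4, both directions of the Gray conversion can be written compactly with the exclusive-OR operation. The following Python sketch (binary_to_gray and gray_to_binary are our own names, operating on bit strings) illustrates the rule; it gives 1011 for the Gray word 1110 of Ex. 2.19.1 and matches the conversions listed at the end of this chapter.

# Illustrative sketch of Gray <-> binary conversion on bit strings.
def binary_to_gray(b):
    g = b[0]                                  # MSB is copied as it is
    for i in range(1, len(b)):
        g += str(int(b[i - 1]) ^ int(b[i]))   # Gray bit = XOR of adjacent binary bits
    return g

def gray_to_binary(g):
    b = g[0]                                  # MSB is copied as it is
    for i in range(1, len(g)):
        b += str(int(b[i - 1]) ^ int(g[i]))   # next binary bit = previous binary bit XOR Gray bit
    return b

print(gray_to_binary("1110"))       # 1011      (Ex. 2.19.1)
print(binary_to_gray("11001100"))   # 10101010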
Step 1 : Convert the BCD code to decimal.
Step 2 : Add (3)10 to this decimal number.
Step 3 : Convert the decimal number of step 2 into binary, to get the Excess-3 code.

Ex. 2.20.3 : Convert (1001)BCD to excess – 3.
Soln. : (C-2258)

Ex. 2.20.7 : Represent the decimal numbers (a) 396 and (b) 4096 in : 1. BCD 2. Excess-3 code. (May 08, 4 Marks)
Soln. : Solve it yourself.
Ans. :
(396)10 = (0011 1001 0110)BCD = (0110 1100 1001)xs – 3
(4096)10 = (0100 0000 1001 0110)BCD
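For multi-digit numbers, the addition of 3 is applied to every decimal digit separately before writing it in four bits, as done in Ex. 2.20.7 and Ex. 2.20.8. A short Python sketch of this (to_excess3 is our own helper name) is given below; it reproduces the answers quoted above.

# Illustrative sketch: Excess-3 code = (decimal digit + 3) written in 4 bits.
def to_excess3(n):
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

print(to_excess3(396))   # 0110 1100 1001
print(to_excess3(327))   # 0110 0101 1010  (Ex. 2.20.8)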
io dg
Ex. 2.20.8 : Represent decimal number 327 in :
1. BCD code 2. Excess-3 code.
Dec. 08, 2 Marks.
Ex. 2.20.4 : Convert (0101 0011)BCD into excess – 3.
Soln. :
at le
(C-2259)
Soln. : Solve it yourself.
Ans. :
(327)10 = (0011 0010 0111)BCD
ic w
= (011001011010)xs – 3
Ans. :
Add (0011)2 to each 4-bit BCD number to obtain the 1. (11001100)2 = (10101010)gray
corresponding Excess – 3 number.
ch
2. (01011110)2 = (01110001)gray
Q. 11 What is gray code ? What are its applications ? Q. 16 Give four comparison between BCD code and Gray
code.
Q. 12 What is BCD code ?
Q. 17 Differentiate between Binary and Gray code.
Q. 13 What is excess-3 code ?
Q. 18 Distinguish between Excess-3 code and Gray code.
Q. 14 Write short note on ASCII code. Q. 19 Explain the rules of BCD addition.
Q. 15 What are the different types of codes used in digital
systems ? Explain them.
ns e
io dg
at le
ic w
bl no
Pu K
ch
Te
Chapter 3
Binary Arithmetic

Syllabus
Signed binary number representation and arithmetic : Sign magnitude, 1’s complement and 2’s complement representation, Unsigned binary arithmetic (Addition, subtraction, multiplication and division), Subtraction using 2’s complement; IEEE standard 754 floating point number representations.
Case study : Four basic arithmetic operations using floating point numbers in a calculator.

Chapter Contents
3.1 Introduction
3.5 2’s Complement Arithmetic
3.6 Floating Point Representation
3.7 IEEE-754 Standard for Representing Floating Point Numbers
3.11 Basic Theorems and Properties of Boolean Algebra
3.12 Boolean Expression and Boolean Function
ns e
In some applications, all the data is either positive or If the addition or multiplication of two 8-bit numbers
io dg
results in generation of a number, greater than (255)10,
negative.
then it is said that overflow has taken place.
Then we can just forget about the (+) or (–) signs, and
concentrate only on the magnitude (absolute value) of 3.2.2 Unsigned Binary Arithmetic :
the data.
at le
For example, the smallest 8 bit binary number is 0000
0000 i.e. all zeros, and the largest 8 bit binary number is
In the following subsections we will discuss the four
basic arithmetic operations on the binary numbers:
Addition, subtraction, multiplication and division
ic w
1111 1111.
Hence the complete range of unsigned 8 bit binary 3.2.3 Binary Addition :
bl no
in Table 3.2.1.
Similarly for 16-bit numbers the complete range is given
(C-6107) Table 3.2.1 : Four cases of binary addition
by,
ch
The four basic rules of binary addition in terms of sum 3.2.6 Subtraction and Borrow :
and carry are as follows.
These two words will be used very frequently for the
(C-6902(a)) Table 3.2.2 : Rules of binary addition binary subtraction.
For binary subtraction we have to remember the
following four cases given in Table 3.2.3.
(C-6343) Table 3.2.3 : Four basic rules for binary subtraction
ns e
io dg
Ex. 3.2.1 : Add the following binary numbers : 011 and
101.
Soln. :
(C-57)
Consider case 4 in Table 3.2.3. It is [0 – 1]. Hence a logic 1 is borrowed. This changes the subtraction from [0 – 1] to [10 – 1], which gives [10 – 1] = 1.
Ex. 3.2.2 : Subtract the decimal numbers (38)10 and (29)10 by converting them into binary.
(C-64) (C-4898)
Column 4 : 10 – 1 = 1
Column 5 : 10 – 1 – 1 = 0
Column 6 : 1 – 1 = 0
We have to do the same thing for subtracting any two binary numbers.
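The column-by-column borrow rules of Table 3.2.3 can be traced with a short Python sketch (subtract_binary is our own helper working on bit strings, and it assumes the first operand is the larger one); it applies the difference/borrow rules from the LSB upwards, as in Ex. 3.2.2.

# Illustrative sketch: subtract two binary strings column by column with borrows.
def subtract_binary(a, b):                        # assumes a >= b
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, result = 0, ""
    for x, y in zip(reversed(a), reversed(b)):    # start from the LSB column
        diff = int(x) - int(y) - borrow
        if diff < 0:                              # 0 - 1 case: borrow a logic 1
            diff += 2
            borrow = 1
        else:
            borrow = 0
        result = str(diff) + result
    return result.lstrip("0") or "0"

print(subtract_binary("100110", "11101"))   # 1001  (38 - 29 = 9, Ex. 3.2.2)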
ns e
00 = 0 Soln. :
01 = 0
io dg
(10101011)2 (101)2 : (C-2172)
10 = 0
11 = 1
at le
been illustrated in the following example.
(C-2174)
(11010)2 (101)2 :
Pu K
ch
(C-6108)
Ans. : 1 0 1 0 0 1 . 1 0 1 1 Ex. 3.2.7 : Perform (11001)2 (101)2
A B = (41.6875)10
Ex. 3.2.4 : Perform : (11001)2 (101)2 Q. 1 What do you mean by signed magnitude
Soln. : representation of a number ? (Dec. 07, 2 Marks)
For a sign-magnitude representation, the + or – signs The largest magnitude is 127, which is approximately
are also represented in the binary form i.e. by using 0 or half of the largest magnitude obtained for unsigned
1. So a 0 is used to represent the (+) sign and 1 is used binary numbers.
to represent the (–) sign. With the sign-magnitude numbers, we can use the 8-bit
arithmetic as long as the input data range falls in
The MSB of a binary number is used to represent the
decimal – 127 to + 127.
sign and the remaining bits are used for representing
It is still necessary to check all the sums for overflow.
the magnitude. 8-bit signed binary numbers are shown
If the magnitude of data is greater than 127 then 16 bit
in Fig. 3.3.1.
ns e
arithmetic should be used.
With 16-bit numbers the range of sign-magnitude
io dg
numbers extends from decimal (– 32,767) to (+ 32,767).
3.4 Complements :
Numbers represented in this form are called as
sign-magnitude numbers or only sign-numbers.
ch
0 to 255.
But in the sign magnitude numbers, the MSB is utilized The 1’s and 2’s complement of a binary number are
for representing the sign. important because we can use them for representation
Therefore the range gets modified. (as there are only 7 of negative numbers.
bits left to represent the magnitude).
Definition :
For an 8-bit sign-magnitude number, the largest
negative number is (– 127) given by, The 1’s complement of a number is found by inverting
(C-6369)
original number. The 1’s complement system is very
easy to implement merely using inverters.
In this way the range of sign-magnitude, 8-bit binary
numbers is modified to decimal (– 127) to (+127) from 0 to Ex. 3.4.1 : Obtain the 1’s complement of the following
255. numbers. (1010)2, (11010101)2.
Soln. : Table 3.4.1 shows the full range of 4 bit numbers in the
1’s complement form.
(C-6111) Table 3.4.1
(C-6279(a))
ns e
(a) Given number : 1 1 0 1
1’s complement : 0 0 1 0 ...Ans.
io dg
(b) Given number : 1 0 1 1
1’s complement : 0 1 0 0 …Ans.
at le
using 1’s Complement :
Definition :
Thus (– 5)10 is represented as 1010 in the 1’s used in computers to handle negative numbers.
complement form.
Ex. 3.4.3 : Obtain the 2’s complement of (1011 0010)2.
Range of 1’s complement number system :
Soln. :
number is 0111.
n–1
For n = 4 the maximum positive number is 7. i.e. [2 – 1] (C-6345)
2. The maximum negative number is represented as 1 1 1 Hence the 2’s complement of (1011 0010)2 is
(0100 1110)2.
1 . Hence for n = 4 the maximum negative number is – 7
n–1
i.e. [2 – 1]. 3.4.4 Representation of Signed Numbers
using 2’s Complement :
Range of 1’s complement number system is given by,
n–1 Positive numbers in 2’s complement form are
Maximum positive number (2 – 1)
represented the same way as in the sign-magnitude and
n–1
Maximum negative number – (2 – 1) 1’s complement form.
(C-6112)
2’s complement is another method of storing negative values. In a microcomputer the positive and negative numbers are represented with the help of the 2’s complement; the negative numbers are stored in the 2’s complement form.
These signed representations have some important characteristics. They are as follows :
1. There is one unique 0.
2. The two’s complement of 0 is 0.
3. The MSB of a sign-magnitude number cannot be used to express quantity. It can only be used as a sign bit.
In the signed-magnitude system a number is negated by changing its sign, but the signed complement system negates the number by taking its complement.
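As a summary of the three signed representations, the following Python sketch (signed_forms is our own helper, for a fixed word length n, assuming the magnitude fits in n – 1 bits) prints the sign-magnitude, 1’s complement and 2’s complement patterns of a given integer. It is only an illustration of the definitions above.

# Illustrative sketch: the three signed representations of an integer in n bits.
def signed_forms(value, n=8):
    mag = format(abs(value), "0{}b".format(n - 1))        # (n-1)-bit magnitude
    sign = "0" if value >= 0 else "1"
    sign_mag = sign + mag
    ones = sign_mag if value >= 0 else sign + "".join("1" if b == "0" else "0" for b in mag)
    twos = sign_mag if value >= 0 else format((1 << n) + value, "0{}b".format(n))
    return sign_mag, ones, twos

print(signed_forms(+25))   # ('00011001', '00011001', '00011001')
print(signed_forms(-25))   # ('10011001', '11100110', '11100111')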
However note that the MSB (sign bit) is not changed
We know that a positive number always starts with a
while obtaining the 2’s complement. For example 2’s
complement of (– 6)10 is obtained as follows : zero (plus) in its MSB position, therefore the
ns e
negative zero.
io dg
position. This is how we can distinguish the negative
numbers from the positive ones.
at le (C-71)
in the signed magnitude and 1’s complement systems
there are eight positive and eight negative numbers
ic w
In the signed-magnitude representation, the left most 1 including two zeros.
represents (–) sign and the remaining seven bits In 2’s complement system there are eight positive
bl no
represent the magnitude 7. numbers including one zero and eight negative
In the signed 1’s complement representation, – 7 is numbers.
obtained by complementing all the bits of + 7 including
The signed magnitude system is awkward when used in
Pu K
Representation of + 25 :
ns e
For example (+ 16) + (– 25) = – (25 – 16) = – 9 and this
subtraction is performed by subtracting the smaller
io dg
magnitude of 16 from the larger magnitude of 25 and
giving the sign of larger number to the subtraction.
This subtraction requires the comparison of signs and
Representation of + 40 : follows :
widely used form in computers and microprocessor – The answer is positive (sign bit is 0) and equal to
based systems. 00010011 i.e. 19.
Express (– 7) in the signed 2’s complement form as follows :
+ 7 = 0 0 0 0 0 1 1 1   (signed magnitude form)
– 7 = 1 1 1 1 1 0 0 1   (2’s complement of + 7)
Perform the addition :
(C-6117)
The final carry is discarded to obtain the correct answer.

3.5.1 Subtraction of Unsigned Binary using 2’s Complement :
If the subtraction of two binary numbers A and B is to be performed using the 2’s complement, then the following steps are to be followed.
Steps to be followed :
Step 1 : Add (A)2 to the 2’s complement of (B)2.
Step 2 : If the carry is generated then the result is positive and in its true form.
Step 3 : If the carry is not produced, then the result is negative and in its 2’s complement form.
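These three steps are easy to mechanise. The Python sketch below (subtract_2s is an illustrative name; the word length n is assumed large enough for both operands) adds A to the 2’s complement of B, uses the carry out of the n-th bit to decide the sign, and re-complements the result when no carry is produced.

# Illustrative sketch of A - B using the 2's complement, with n-bit words.
def subtract_2s(a, b, n=8):
    comp_b = (1 << n) - b                      # 2's complement of B
    total = a + comp_b
    carry = total >> n                          # carry out of the MSB column
    if carry:                                   # carry -> result is positive, in true form
        return total & ((1 << n) - 1)
    return -((1 << n) - total)                  # no carry -> negative, re-complement

print(subtract_2s(0b1001, 0b0101, 4))    # 4    (9 - 5)
print(subtract_2s(48, 71, 8))            # -23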
at le (C-6118)
Soln. :
bl no
1011
Therefore bring the answer in it’s true form by
Step 2 : Add (9)10 to 2’s complement of (5)10 :
subtracting 1 from the answer and then inverting all the
ch
Note : The final carry bit acts as a sign bit for the answer.
If it is 1 then the answer is positive, and if it is 0
(C-6124)
ns e
Step 2 : Add 2’s complements of (– 48)10 and (23)10 :
Step 3 : Convert the answer into its true form :
io dg
(C-4183)
at le
Thus the answer is – (0101)2 i.e. (– 5)10.
(C-6125)
…Ans.
As final carry is not generated the answer is negative
and in 2’s complement form.
ic w
Ex. 3.5.3 : Perform subtraction using 2’s complement for Step 3 : Convert the answer into its true form :
given numbers (– 48) – (+23) use 8 bit
(C-8293)
representation of number.
bl no
complement method.
Decimal Binary 2’s complement
(48)10 (00110000)2 Dec. 14, 4 Marks, May 18, 6 Marks)
(11010000)
ch
Soln. :
(23)10 (00010111)2 (11101001)
Step 1 : Convert decimal number to binary :
Step 2 : Add 2’s complement of (48)10 and (23)10 :
A = (27.50)10 = (00011011.1000)2
Te
B = (68.75)10 = (01000100.1100)2
(C-6128)
(C-6349)
– (01000111)2 = (– 71)10
(– 48) – (+ 23) = – (01000111)2 = (– 71)10
(C-4934)
Ex. 3.5.4 : Perform the following operations using 2’s
complement method. Final carry indicates that the answer is positive and in its
– (48)10 – (– 23)10 true form.
Dec. 13, 6 Marks, May 19, 3 Marks. (68.75)10 – (27.50)10 = (00101001.0100)2
Convert answer into decimal, Step 2 : Add 2’s complement of (7)10 and (11)10 :
(C-4935)
N = 32 + 8 + 1 + 0.25 = 41.25
(68.75)10 – (27.50)10 = (41.25)10 ...Ans. (C-6131)
ns e
Ex. 3.5.6 : Perform 2’s complement arithmetics of :
1. (7)10 – (11)10 2. (–7)10 – (11)10
io dg
3. (–7)10 + (11)10 May 15, 6 Marks
Soln. :
1. (7)10 – (11)10 :
at le
Step 1 : Convert both numbers to their binary form :
(7)10 = (0111)2 and (11)10 = (1011)2
(C-6132)
(C-6133)
Step 4 : Convert the answer into its true form :
Answer = (0100)2 = (4)10
(– 7)10 + (11)10 = + (4)10 ...Ans.
ns e
The user must interprete the results of such addition or
3.5.2 Subtraction of Signed Binary
io dg
subtraction differently depending on whether it is
Numbers :
assumed that the numbers are signed or unsigned.
Subtraction of two signed binary numbers when the
negative numbers are in the 2’s complement form can
3.6 Floating Point Representation :
at le
be performed as follows :
equivalent to an addition if the sign of the subtrahend is The floating point number system is the remedy to
changed. this problem. It is based on scientific notation and is
capable of representing very large and very small
This is demonstrated as follows :
Pu K
(C-6134)
The given number can be represented in the floating
point number as follows :
Discarding the final carry we get the answer of + 5.
(C-1789) Soln. :
Thus mantissa is the magnitude and exponent (9) Step 1 : Represent the given number in fractional form :
represents the number of places that the decimal point The given binary number contains 13 bits. Let us express
is moved. it as 1 plus a fractional binary number by moving the
binary point 12 places to the left and then multiply by
3.6.2 Binary Floating Point Numbers :
ns e
appropriate power of two as follows :
For binary floating point numbers, the format is defined
io dg
by ANSI / IEEE standard 754 – 1985 in the following
three forms :
3.
at le
Double precision floating point binary numbers.
precision floating point binary number. The exponent is actually 12 because we have shifted the
binary point by 12 places. In order to get the biased
The MSB is assigned to indicate the sign (S). The next
Pu K
E = 12 + 127 = (139)10
part (F).
Convert the biased exponent to binary
In the mantissa (F) part, the binary point is assumed to (C-6257)
(C-4922(a))
– The number 0.0 is represented by all 0s and infinity is
represented by all 1s in the exponent and all zeros in
Step 3 : Write the complete floating point number :
the mantissa.
The complete single precision floating point number is
as follows :
Ex. 3.6.4 : Represent the following decimal numbers in
(C-8299)
single precision floating point format :
ns e
1. 255.5 2. 110.65
io dg
May 15, 6 Marks
Conversion from floating point to binary : Soln. :
1. (255.5)10 :
Now let us see the conversion from floating point
at le
number to binary number.
(C-8300)
ch
Soln. :
Te
S = 1, E = 10001011, F = 001010011101
S E – 127
Binary number = (– 1) (1 + F) (2 ) (C-5125)
1 139 – 127
Binary number = (– 1) (1 + 0.001010011101) (2 ) (255.5)10 = (11111111.10)2 ...(1)
12
= – (1.001010011101) 2 Step 2 : Decide the values of S, E and F :
= – 1001010011101
(C-1791) (C-5126)
extremely large and small numbers using the floating Exponent E = Actual exponent + 127 = 7 + 127
point number systems. = (134)10
Convert the biased exponent to binary 3.7 IEEE-754 Standard for Representing
Floating Point Numbers :
E = (134)10 = (10000110)2
The mantissa is the fractional part of binary number. Representation of floating point number discussed in
section 3.6 has many subtle problems.
To form 23 bit mantissa, fourteen zeros are appended.
IEEE floating point standards addresses a number of
F = 11111111000000000000000
such problems.
Step 3 : Write the complete floating point number :
Zero has definite representation in IEEE format.
(C-8319)
ns e
has been represented in IEEE format. A +
indicated that the result of an arithmetic expression is
io dg
too large to be stored.
2. (110.65)10 :
If an underflow occurs, implying that a result is too
Step 1 : Convert decimal to binary : small to be represented as a normalized number, it is
at le
encoded in a denormalized scale.
Fig. 3.7.1 gives the representation of floating point
numbers.
ic w
bl no
Pu K
(C-5127)
ch
(110.65)10 = (1101110.111001)2
(C-5128)
Assuming the number to be a positive one, the sign bit will have a zero value : S = 0.
Exponent E = Actual exponent + 127 = 6 + 127 = (133)10
E = (133)10 = (10000101)2
The mantissa is the fractional part of the binary number. To form the 23-bit mantissa, 11 zeros are appended.
F = 10111011100100000000000
Step 3 : Write the floating point number :
(C-8320)

(co 2.51) Fig. 3.7.1 : IEEE standard format
(a) Single precision (32 bits) :
Exponent (E) | Significant (N) | Value / comments
255 | Not equal to 0 | Does not represent a number (NaN)
255 | 0 | – ∞ or + ∞, depending on the sign bit
0 < E < 255 (normalized scale) | Any | (1.N) × 2^(E – 127)
0 (denormalized scale) | Not equal to 0 | (0.N) × 2^(– 126)
0 | 0 | 0, with the sign given by the sign bit
ns e
Step 1 :
scale
(0.125)10 = 2 = (0.010)2
io dg
Denormalized 0 Not equal to 0 (0.N) 2– 1022
scale Step 2 : Normalization :
–3
0 0 0 depending on sign 10 2
bit Step 3 : Single precision :
at le
Fig. 3.7.2 : Values of floating point numbers as
per IEEE format
Biased exp = –3 + 127 = (124)10
= (01110)2
ic w
Ex. 3.7.1 : Represent (20.625)10 in both single precision Double precision
as well as double precision = –3 + 1023 = 1020
bl no
Soln. : = (01111110)2
Step 1 : Convert to binary : Single precision :
(C-8303)
16 20
Pu K
8
16 12 C (20)10 = (C8)16
= (110100)2
ch
0
Double precision :
0.625 16 = 10 = A
(C-8304)
(0.625)10 = (0.A)16 = (01010)2
Te
(20.625)10 = (110100.1010)2
Step 2 : Normalization :
= 1.10100101 × 2^7
Step 3 : Calculate the biased exponent :
For single precision :
Biased exp = exp + 127 = (134)10 = (10000110)2
For double precision :
Biased exp = exp + 1023 = (1030)10 = (10000000110)2
Note : The biased exponent can be in the range 1 – 254 for single precision (exponent range – 126 to + 127) and 1 – 2046 for double precision (exponent range – 1022 to + 1023). The biased exponent values 0 and 255 for single precision (0 and 2047 for double precision) are used to represent zero, the denormalized form and NaN (Not a Number).
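Hand conversions like these can be cross-checked directly in Python : the standard struct module packs a float in IEEE-754 single precision, and the sign, biased exponent and mantissa fields can then be read off the 32-bit pattern. This is only a verification aid, not part of the hand method above.

# Illustrative check of the IEEE-754 single precision fields using the struct module.
import struct

def ieee754_single(x):
    bits = int.from_bytes(struct.pack(">f", x), "big")   # 32-bit pattern, big-endian
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF                        # biased exponent (bias 127)
    mantissa = bits & 0x7FFFFF                            # 23-bit fraction field
    return sign, format(exponent, "08b"), format(mantissa, "023b")

print(ieee754_single(255.5))    # (0, '10000110', '11111111000000000000000')
print(ieee754_single(0.125))    # (0, '01111100', '00000000000000000000000')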
Ex. 3.7.3 : Represent (178.1875)10 in single and double
Step 4 : Find representation : precision floating point format.
Single precision : (C-8301) Soln. :
Convert given decimal number into its equivalent binary
178 = 101110
ns e
E – 127
exponent (E) and significant (N) is given by (1.N) 2 E – 127
into the form (1.N) 2 .
in order to represent (101110.011)2, we must convert it
io dg
8
E – 127 10110101.011 = 1.0110101011 2
into the form (1.N) 2 .
5 8 = E – 127
10111.011 = 1.0111011 2
E = 127 + 8 = 135
5 = E – 127
at le E = 127 + 5 = 132
(132)10 = (10010)2
(135)10 = (10011)2
ic w
bl no
(b) Double precision format : In IEEE double precision format, the value of a number
for given exponent (E) and significant (N) is given by,
In IEEE double precision format, the value of a number
Pu K
E – 1023
for given exponent (E) and significant (N) is given by (1.N) 2
E – 1023
(1.N) 2 . In order to represent 10110101.011, we must convert it
ch
E – 1023
In order to represent (101110.011)2, we must convert it into the form (1.N) 2 .
E – 1023 8
into the form (1.N) 2 . 10110101.011 = 1.0110101011 2
5
10111.011 = 1.0111011 2 5 = E – 1023
Te
Ex. 3.7.6 : Convert (127.125)10 in IEEE-754 single and This is called Boolean algebra and it is extremely useful
double precision floating point representation.
in the design and analysis of digital systems.
Dec. 16, 10 Marks.
ns e
Soln. : Boolean algebra is used to write or simplify the logical
expressions.
io dg
Step 1 :
at le (C-8306)
2. OR operator.
(0.125)10 = (2)16 = (0.00)
3. NOT operator (Inverter).
Step 2 : (127 – 125)10 = (011 111.010)2
6
3.8.2 NOT Operator (Inversion) :
= (1.111101)2 2
Pu K
The “OR” operator represents logical addition. It is (C-196) Table 3.8.1 : Various logic gates
A + B …Logical addition
ns e
3.8.5 Logic Gates :
io dg
Logic gates are the logic circuits which act as the basic
at le
inputs and only one output.
ic w
The relationship between the input and the output is
Truth table :
Pu K
The truth table consists of all the possible combinations University Questions.
of the inputs and the corresponding state of output Q. 1 With suitable examples, explain the following in
Te
Boolean algebra :
produced by that logic gate or logic circuit.
1. Commutative law
Boolean expression :
2. Associative law
The relation between the inputs and the outputs of a 3. Distributive law (Dec. 04, Dec. 08, 3 Marks)
gate can be expressed mathematically by means of the 2. Associative law :
Boolean Expression.
For a given set “S” a binary operator * can be said to be
Let us now discuss the operation of various logic gates. associative if the following equation is satisfied.
ns e
(C-128) variable.
io dg
Take the example of set of integers (I) for which 0 is the Rules in boolean algebra :
identity element with respect to + operation that means There are some rules to be followed while using a
x = 0 and * = +. Hence the inverse of an element A Boolean algebra, these are :
would be – A because only then the condition A * B = x
6.
at le
is satisfied as follows :
Distributive law :
1.
2.
Variables used can have only two values. Binary 1 for
HIGH and Binary 0 for LOW.
Complement of a variable is represented by a overbar
ic w
–
If * and · are two binary operators working on a set S, (–). Thus complement of variable B is represented as B.
then * is said to be distributive over · if the following – –
Thus if B = 0 then B = 1 and if B = 1 then B = 0.
bl no
condition is satisfied.
3. ORing of the variables is represented by a plus (+) sign
A * (B · C) = (A * B) · (A * C) …(3.8.3) between them. For example ORing of A, B, C is
The operators and postulates of such a field have the represented as A + B + C.
4. Logical ANDing of the two or more variables is
Pu K
following meanings :
1. We define the binary operator + as addition. represented by writing a dot between them such as
A B C D E. Sometimes the dot may be omitted like
ch
2(b) An identity element with respect to (·), designated Table 3.10.1(a) : AND operator Table 3.10.1(b) : OR
by 1 : operator
This is because for any A the following expression is Inputs Output Inputs Output
always true. A B A·B A B A+B
0 0 0 0 0 0
A · 1 = 1 · A = A.
0 1 0 0 1 1
3(a) Commutative law with respect to (+) : 1 0 0 1 0 1
A+B = B+A 1 1 1 1 1 1
Table 3.10.1(c) : NOT operator
3(b) Commutative law with respect to (·) :
ns e
Input Output
A·B = B·A
–
io dg
A A
4(a) (·) is distributed over (+) :
0 1
A · (B + C) = (A · B) + (A · C)
1 0
4(b) (+) is distributed over (·) :
We are now going to demonstrate that Huntington
5.
at le A + (B · C) = (A + B) · (A + C)
(C-6136)Table 3.10.3 : Verification of the distributive law 3.11 Basic Theorems and Properties of
Boolean Algebra :
3.11.1 Duality :
ns e
3. Complement any 1 or 0 appearing in the expression.
Duality theorem is sometimes useful in creating new
io dg
(b) The distributive law of + over · can also be proved using expressions from the given Boolean expressions.
the truth tables in the similar way. For example if the given expression is A + 1 = 1 then
replace the OR (+) operation by AND ( · ) operation and
5. Inverse :
at le
From Table 3.10.1(b) it is easily seen that,
–
A+A=1 i.e. 0 + 1 = 1 and 1 + 0 = 1
take complement of 1 to write the dual of the given
relation as,
A·0 = 0
ic w
And from Table 3.10.1(a) it can be shown that The dual of A (B + C) = AB + AC is given by,
– A + (B · C) = (A + B) · (A + C)
AA =0 i.e. 0 1 = 0 and 10=0
bl no
i.e. (A · B) · C = A · (B · C) they are called as “AND” laws. The AND laws are as follows :
and (A + B) + C = A + (B + C) A0=0:
The associative law can be verified by referring to the That means if one input of an AND operation is
following truth table.
permanently kept at the LOW (0) level, then the output
(C-6137)Table 3.10.4 : Verification of the associative law is zero irrespective of the other variable.
A·1=A:
A·A=A:
This law states that the result of an “AND” operation on 1. A+0=A 3. A+A=A
– –
a variable (A) and its complement (A ) is always LOW (0). 2. A+1=1 4. A+A =1
–
If A = 0 then A = 1 and Y = 0 · 1 = 0 whereas if A = 1 3. INVERSION Law :
–
then A = 0 and Y = 1 · 0 = 0. This law uses the “NOT” operation. The inversion law
states that if a variable is subjected to a double
Summary of AND Laws :
inversion then it will result in the original variable itself
ns e
1. A·0=0 3. A·A=A
i.e.
–
2. A·1=A 4. A·A =0 ––
–
io dg
A = A
2. OR Laws : Inversion law is being illustrated in Fig. 3.11.1 which
These laws use the OR operation. Therefore they are – ––
–
shows that if A = 0 then A = 1 and ( A ) = 0 Y = A,
called as OR laws. The OR laws are as follows : – ––
–
A+0=A:
at le
That means if one variable of an “OR” operation is LOW
whereas if A = 1 then A = 0 and ( A ) = 1 Y = A.
ic w
=
(0) permanently, then the output is always equal to the (B-481) Fig. 3.11.1 : Illustration of inversion law : A = A
B = 0 permanently therefore for A = 0, Y = 0 + 0 = 0 i.e. Sr. No. Name Statement of the law
Y = A and for A = 1, Y = 1 + 0 = 1 i.e. Y = A. 1. Commutative Law A·B=B·A
A+1=1: A+B=B+A
(A · B) C = A · (B · C)
Pu K
2. Associative Law
That means if one variable of an “OR” operation is HIGH
(A + B) + C = A + (B + C)
(1) permanently, then the output is HIGH (1)
3. Distributive Law A · (B + C ) = AB + AC
ch
A·A=0
Thus output remains HIGH (1) always, irrespective of the
5. OR Laws A+0=A
value of A. A+1=1
A+A=A: A+A=A
–
This law states that if both the variables of an OR A+A =1
6. Inversion Law =
operation have the same value either “0” or “1” then the A =A
output also will be equal to the input i.e. 0 or 1 7. Other Important Laws A + BC = (A + B) (A + C)
respectively. – –
A + AB = A + B
For A = 0, Y = 0 + 0 = 0 i.e. Y = A and for A = 1, – – – –
A + AB = A + B
Y = 1 + 1 = 1 i.e. Y = A.
A + AB = A
– –
A+A=1: A+AB=A+B
This law states that the result of an “OR” operation on a Ex. 3.11.1 : Obtain the dual of following Boolean
variable and its complement is always 1 (HIGH). equations :
– –
If A = 0 then A = 1 and Y = 0 + 1 = 1 whereas if A = 1 1. A + AB = A 2. A + A B = A + B
– –
then A = 0 and Y = 1 + 0 = 1. 3. A + A = 1 4. (A + B) (A + C) = A + BC.
Soln. :
1. A + AB = A :
Replace (+) by (·) and (·) by (+) to get the dual of the
given equation as :
A · (A + B) = A …Ans.
Similarly the duals of the other expressions are shown in
–– – –
Table P. 3.11.1. (C-6138) Fig. 3.11.3 : Verification of the theorem AB = A + B
ns e
Given expression Dual This theorem is illustrated in Fig. 3.11.4. The LHS of this
io dg
– –
A+AB=A+B A · (A + B) = A · B theorem represents a NOR gate with inputs A and B
– – whereas the RHS represents an AND gate with inverted
A+A=1 A·A=0
(A + B) (A + C) = A + BC AB + AC = A · (B + C) inputs.
Such an AND gate is called as “Bubbled AND”. Thus we
3.11.3
at le De-Morgan’s Theorems :
SPPU : Dec. 05, May 08, Dec. 09.
can state De-Morgan's second theorem as a NOR
function is equivalent to a bubbled AND function.
ic w
University Questions NOR Bubbled AND
1. Parenthesis 2. NOT
This theorem can be verified by writing a truth table as That means the expression inside the parenthesis
should be evaluated before all the operations.
shown in Fig. 3.11.3.
ns e
Function :
io dg
Boolean algebra deals with binary variables and logic
operations.
2
A Boolean function is described by an algebraic If there are two inputs (n = 2), then there are 2 = 4
expression called Boolean expression which consists of combinations of inputs. For four inputs (n = 4), there are
at le
binary variables, the constants 0 and 1 and the logic
operation symbols. 3.12.2
4
2 = 16 combinations.
(C-6140) …(3.12.1) –
1. A + AB = A 2. A + A B = A + B
3. (A + B) (A + C) = A + BC
Here the left side of the equation represents the output Soln. :
– LHS = A + AB = A (1 + B)
Y = A + BC + ADC …(3.12.2)
But (1 + B) = 1 LHS = A · 1But A · 1 = A
ch
It shows that there are three inputs A, B, and C and one LHS = RHS
–
output f (A, B, C) or simply F. A+AB = A+B … Proved.
The truth table for this equation is shown by ……. According to the distributive law.
n
Table 3.12.1. The number of rows in the truth table is 2 But AA = A,
But (1 + C) = 1 –––––––––––––
– – –
Y = (A + B + A + AB)
LHS = A + AB + BC = A (1 + B) + BC
– – –
But (1 + B) = 1 But A + A = A
––––––––––
– –
LHS = A + BC
Y = (A + B + AB)
Thus (A + B) (A + C) = A + BC … Proved.
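Algebraic proofs such as the one above can always be cross-checked by brute force, because each variable takes only the values 0 and 1. The short Python sketch below (the helper name equivalent is ours) verifies the absorption and distributive identities by enumerating all input combinations.

# Illustrative brute-force check of Boolean identities over all 0/1 combinations.
from itertools import product

def equivalent(f, g, nvars):
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

# A + AB = A
print(equivalent(lambda a, b: a | (a & b), lambda a, b: a, 2))            # True
# (A + B)(A + C) = A + BC
print(equivalent(lambda a, b, c: (a | b) & (a | c),
                 lambda a, b, c: a | (b & c), 3))                          # True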
Now use De-Morgan’s second theorem which states that,
– – – ––––––––– – – –
Ex. 3.12.2 : Prove that (A + B + AB) (A + B ) (A B) = 0 A+B+C = A·B·C
Soln. : ––
–– ––
–– –––
Y = A · B · AB
ns e
– – –
LHS = (A + B + AB) (A + B) (A B) ––
–– ––
––
But A = A and B = B
io dg
But A + AB = A … Refer to Ex. 3.12.1
–––
– – – Y = A · B · AB
LHS = (A + B) (A + B) (A B)
––– – –
– – –– – But AB = (A + B ) …. De-Morgan’s second theorem.
= (AA + AB + AB + BB) (A B) – – – –
at le But A · A = A and
– – – –
A B + AB = B (A + A) = AB
– – –
B · B = B and
Y = A · B (A + B) = AAB + ABB
– –
But AA = 0 and BB = 0
ic w
Y = 0·B+A·0=0+0
– – –
LHS = (A + AB + B) (AB) …since 0 · B = 0 and A · 0 = 0
– – –
bl no
= [A (1 + B) + B] (AB) Y = 0
– – –
But (1 + B) = 1 LHS = [(A · 1) + B ] (AB) 3.12.3 Complement of a Function :
– – –
LHS = (A + B) (AB) … since A · 1 = A The complement of a function F is denoted by F . We
Pu K
– – – –
= AAB+ABB can obtain F by replacing 0’s by 1’s and 1’s by 0’s while
– – calculating the value of F.
ch
But AA = 0 and BB = 0
It is possible to derive the complement of a function
LHS = 0 + 0 = 0 … Proved.
algebraically using De Morgan’s theorems. We can
– extend De Morgan’s theorems to three or more
Ex. 3.12.3 : Prove that A + AB + AB = A + B.
Te
variables as well.
Soln. :
– – The three variable form of De Morgan’s first theorem is
LHS = A + AB + AB = A + B (A + A)
derived as follows :
–
But A + A = 1 ________
LHS = A + B = RHS …Proved. LHS = ( A + B + C )
Let B+C = D
–
Ex. 3.12.4 : Simplify : ABCD + ABCD. _____ __ __
– – LHS = ( A + D ) = A D
Soln. : Y = ABCD + ABCD = ACD (B + B)
….As per De Morgan’s theorem
–
But (B + B) = 1 __ ____ __ __ __
= A ( B + C ) = A (B C )
Y = ACD …Ans.
________ __ __ __
Ex. 3.12.5 : Simplify the following expression : (A+B+C) = A B C …… Proved.
––––––––––
––– – On the same lines we can derive the De Morgan’s
Y = ( AB + A + AB) theorems for any number of variables. These theorems
––––––––––––
––– – in the generalized form can be stated as follows :
Soln. : Y = ( AB + A + AB)
––– ____________ __________ __ __ __ __ __
– –
But AB = A + B…De-Morgan’s first theorem 1. ( A + B + C + D ……….. + G ) = A B C D …….. G
ns e
__ ______________
_ _ _ _______
_ _ _______
_ _ Q. 8 Subtract using 2’s complement (11011011)2 from
F1 = ( A B C + A B C) = ( A B C ) ( A B C ) (0101010)2.
io dg
__ _ _ Q. 9 Subtract using 2’s complement 11001 from 01101.
F1 =(A+B+C)(A+B+C) …Ans.
Q. 10 Subtract 1101101 – 11010 using 2’s complement.
_ _
2. F2 = A (B C + BC) Q. 11 State AND laws.
at le
__
F2 =
_
____________
_ _
[ A (B C + BC ) ]
_________
_
Q. 12
Q. 13
State OR laws.
Q. 1 Explain with example the binary addition and Q. 19 State and explain the associative law.
ch
Chapter 4
Logic Minimization

Syllabus
Representation of logic function : Logic statement, Truth-table, SOP form, POS form; Simplification of logical functions using K-Maps upto 4 variables.

Chapter Contents
4.1 System or Circuit
4.2 Standard Representations for Logical Functions
4.4 Methods to Simplify the Boolean Functions
4.5 Karnaugh-Map Simplification (The Map Method)
4.6 Simplification of Boolean Expressions using K-map
4.8 Product of Sum (POS) Simplification
ns e
2. Switching equations :
types :
1. Analog systems The relation between inputs and output(s) can be
io dg
presented in the form of equation(s) called as switching
2. Digital systems.
equations.
4.1.1 Digital Systems : The input variables are called as switching variables. The
output (Y) is written on the left hand side of the
at le
We may define the digital system as the system which
processes or works upon the digital signals to produce
another digital signal.
equation whereas the terms containing the input
variables are written on the right hand side of the
switching equation as shown below :
ic w
– –––
Y = ABC + ABC + ABC …(4.1.1)
– – ––
bl no
The relation between input and output can be The digital systems in general are classified into two
represented in various different ways as given below : categories namely :
The block diagram of a combinational circuit is shown in 2. Use of Medium Scale Integration (MSI).
Fig. 4.1.2. In this chapter we will discuss the traditional method.
Steps followed in traditional circuit design method :
ns e
and a number of outputs.
Methods to simplify the boolean equations :
The circuit of Fig. 4.1.2 has “n” inputs and “m” outputs.
io dg
The methods used for simplifying the Boolean functions
Between the inputs and outputs, logic gates are
are as follows :
connected so combinational circuit basically consists of
logic gates. 1. Algebraic method.
2. Karnaugh-map (K-map) simplification.
at le
A combinational circuit operates in three steps :
1.
2.
It accepts n-different inputs.
The combination of gates operates on the inputs.
3.
4.
Quine-Mc Cluskey method and
Variable Entered Mapping (VEM) technique.
ic w
3. “m” different outputs are produced as per The Boolean theorems and De-Morgan’s theorems are
requirement. useful in simplifying the given Boolean expressions. We
bl no
Examples of combinational circuits : can then realize the logical expressions using either the
conventional gates or universal gates.
Following are the examples of some combinational
circuits : We should use the minimum number of logic gates for
the realization of a logical expression.
1. Adders, subtractors
Pu K
From the specified requirements we have to design a When we realize the Boolean equation by using gates,
combinational circuit using a combination of gates each literal acts as an input as shown in Fig. 4.2.1.
which will fulfill all the requirements. Any logic expression can be expressed in the following
We can adopt one of the following two approaches to two standard forms :
the combinational circuit design : 1. Sum-of-Products form (SOP) and
1. Traditional methods. 2. Product-of-Sums form (POS)
ns e
Forms :
Canonical or standard forms :
io dg
The word standard is used in order to describe a
(C-1217(a)) Fig. 4.2.2 : Sum-of-Products (SOP) form condition of switching equation. The meaning of the
word standard is conforming to a general rule.
Therefore such expressions are known as expression in
at le
SOP form.
The sums and products in the SOP form are not the
This rule states that each term used in a switching
equation must contain all the available input variables.
The two formats of a switching equation in the standard
ic w
actual additions or multiplications. In fact they are the
form are :
OR and AND functions.
1. Sum of Product (SOP) format.
A, B and C are the literals or the inputs of the
bl no
Non-standard forms : –
Ex. 4.2.1 : Convert the expression Y = AB + AC + BC into
The two standard forms discussed earlier are the basic the standard SOP form.
forms that are obtained from the truth table. Soln. :
However these forms are not used often because these The given expression has 3-input variables A, B and C.
equations are not in the minimized form because each Step 1 : Find the missing literal for each term : (C-2382)
ns e
In this form each term may contain one, two or any Step 2 : AND each term with (Missing literal + Its
io dg
number of literals. It is not necessary that each term complement) : (C-2383)
should contain all the literals.
There can be two types of non-standard forms :
1. Non-standard SOP form.
at le
2. Non-standard POS form.
The examples of standard and non-standard SOP and
ic w
POS expressions are given in Table 4.2.1.
Step 3 : Simplify the expression to get the standard
(C-8065) Table 4.2.1 : Non-standard and standard SOP and SOP :
POS forms
bl no
– – – –
Y = AB ( C + C ) + AC ( B + B ) + BC ( A + A )
– – –– –
= ABC + ABC + ABC + ABC + ABC + ABC
– – –– –
= (ABC + ABC) + ( ABC + AB C ) + ABC + AB C
Pu K
But A + A = A
– – –
(ABC + ABC) = ABC and ( ABC + AB C ) = ABC
ch
(C-6152)
Let us see how to convert the given non-standard SOP
and POS expressions into the corresponding non- Conversion from non-standard POS to standard POS
standard SOP and POS forms. form :
Conversion from non-standard SOP to standard SOP The conversion of non-standard POS expression into
form : standard POS form can be obtained by following the
steps given below :
The procedure to be followed for converting a non-
standard SOP expression into a standard SOP Steps to be followed :
expression is as follows : Step 1 : For each term in the given non-standard POS
Steps to be followed : expression, find the missing literal.
Step 1 : For each term in the given non-standard SOP Step 2 : Then OR each such term with the term formed by
expression find the missing literal. ANDing the missing literal in that term with its
complement.
Step 2 : Then AND this term with the term formed by
ORing the missing literal and its complement. Step 3 : Simplify the expression to get the standard POS.
Step 3 : Simplify the expression to get the standard SOP Ex. 4.2.2 : Convert the expression Y = (A + B) (A + C)
expression. –
(B + C) into standard POS form.
Soln. : Maxterm :
Step 1 : Find the missing literal for each term : Each individual term in the standard POS form is called
as maxterm. This is shown in Fig. 4.3.1.
(C-2384)
ns e
(C-1220(a)) Fig. 4.3.1 : Concept of maxterm and minterm
io dg
Table 4.3.1 gives the minterms and maxterms for a three
variable/literal logic function. Let Y be the output and A,
B, C be the inputs.
at le –
(C-6153)
Y = (A + B + C) (A + B + C ) (A + C + B)
– – – –
(A + C + B ) (B + C + A) (B + C + A )
But AA = A
(A + B + C) (A + C + B) = (A + B + C)
Pu K
– – –
and (A + B + C ) (B + C + A) = (A + B + C )
(C-6154)
ch
3
The number of minterms and maxterms is 2 = 8. In
general for “n” number of variables the number of
Te
n
minterms or maxterms will be 2^n. Each minterm is represented by mi, where i = 0, 1, …, 2^n – 1, and each maxterm is represented by Mi.

Ex. 4.2.3 : Convert the following expressions into their standard SOP or POS forms :
1. Y = AB + AC + BC
2. Y = (A + B) (B̄ + C)

Minterm :
Each individual term in the standard SOP form is called a minterm. This is shown in Fig. 4.3.1.

Writing the maxterm for a particular combination of ABC :
Let ABC = 011. Assume that A, B and C are inputs to an OR gate. We want the output of the OR gate to be 0, so all the inputs to the OR gate should be 0s. Therefore invert the inputs which are 1s (B and C in this case) and write the maxterm.
Maxterm = (A + B̄ + C̄), corresponding to ABC = 011.
Similarly the other maxterms in Table 4.3.1 can be obtained.

Ex. 4.3.1 : For the truth table of two variables, write the minterms and maxterms.
Soln. : Refer Table P. 4.3.1 for the solution.
(C-8068) Table P. 4.3.1 : Solution to Ex. 4.3.1

4.3.2 Writing SOP and POS Forms for a Given Truth Table :
Hence it is possible to obtain the logic expression in the standard SOP or POS form if a truth table is given to us.

4.3.3 To Write Standard SOP Expression for a Given Truth Table :
The procedure to be followed for writing the standard SOP expression from a given truth table is as follows :
Step 1 : From the given truth table, consider only those combinations of inputs which produce an output Y = 1.
Step 2 : Write down a product term in terms of the input variables for each such combination.
Step 3 : OR all the product terms produced in Step 2 to get the standard SOP.
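A minimal Python sketch (not from the book) of these three steps; the truth-table representation as (inputs, output) pairs is my own assumption, and the XOR table used at the end is only an illustration.

def standard_sop(truth_table, var_names):
    terms = []
    for inputs, y in truth_table:
        if y == 1:                          # Step 1 : rows with Y = 1
            lits = [name if bit else name + "'"
                    for name, bit in zip(var_names, inputs)]
            terms.append("".join(lits))     # Step 2 : one product term per row
    return " + ".join(terms)                # Step 3 : OR all product terms

table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(standard_sop(table, ["A", "B"]))      # A'B + AB'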
Ex. 4.3.2 : From the truth table P. 4.3.2, obtain the logical expression in the standard SOP form.
(C-8114) Table P. 4.3.2 : Given truth table
Soln. :
Step 1 : Consider only those rows of the truth table which correspond to Y = 1. (C-6156)
Steps 2 and 3 :
Now OR (add) all the product terms to write the final expression in the standard SOP form as follows :
Y = Y1 + Y2
(C-6412) Table P. 4.3.3 : Given truth table
Soln. :
Step 1 : Write the product terms corresponding to the combinations of inputs for which Y = 1.
Step 2 : OR (add) all the product terms :
ORing all the product terms we get,
Y = ĀB̄C + AB̄C̄ + ABC   …Ans.
Y = m1 + m4 + m7 = Σm (1, 4, 7)

To write the standard POS expression for a given truth table :
Step 1 : From the given truth table, consider only those combinations of inputs which produce a 0 output (Y = 0).
Step 2 : Write the maxterms only for such combinations.
Step 3 : AND these maxterms to obtain the expression in the standard POS form.
ANDing (taking the product of) all the maxterms written in Step 1 we get,
Y = (A + B + C) (A + B̄ + C̄) (Ā + B + C̄) (Ā + B̄ + C)
Y = M0 M3 M5 M6 = ΠM (0, 3, 5, 6)

It is important to note that the SOP and POS forms written for the same truth table are always logically equivalent. This point can be proved by solving the following example.

Ex. 4.3.5 : For the given truth table write the logical expressions in the standard SOP and POS forms and prove their equivalence.
(C-6414) Table P. 4.3.5 : Given truth table
Soln. :
Writing the standard SOP and POS expressions from the given truth table and expanding both, then simplifying using the Boolean identities (AA = A, A + Ā = A is not valid but A + A = A, A·Ā = 0, C·C̄ = 0 and 1 + B = 1), both forms reduce to the same expression. This proves that the SOP and POS expressions written for the same truth table are logically equivalent.

Due to the equivalence between the SOP and POS forms, we can obtain the POS expression if the SOP expression in terms of minterms is known, and vice versa.
For example, if an SOP expression for 4 variables is given by
Y = Σm (0, 1, 3, 5, 6, 7, 11, 12, 15)
then we can get the equivalent POS expression using the complementary relationship as follows :
Y = ΠM (2, 4, 8, 9, 10, 13, 14)

4.4 Methods to Simplify the Boolean Functions :
The methods used for simplifying the Boolean expressions are as follows :
1. Algebraic method.
2. Karnaugh-map simplification.
3. Quine-McCluskey method and
4. Variable Entered Mapping (VEM) technique.
The Boolean theorems and De Morgan's theorems are useful in simplifying the given Boolean expressions. In the algebraic method, the most important thing is to convert the given expression into the SOP form first.
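A two-line Python check (my own, not from the book) of the complementary relationship described above : the maxterm list is simply every index from 0 to 2^n – 1 that is not a minterm.

def pos_maxterms(n_vars, sop_minterms):
    # complement of the minterm set over 0 .. 2**n - 1
    return sorted(set(range(2 ** n_vars)) - set(sop_minterms))

print(pos_maxterms(4, [0, 1, 3, 5, 6, 7, 11, 12, 15]))   # [2, 4, 8, 9, 10, 13, 14]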
Step 1 : Bring the given expression into the SOP form by using the Boolean laws and De Morgan's theorems.
Step 2 : Simplify this SOP expression by checking the product terms for common factors.

Ex. 4.4.1 : Simplify the expression given below :
Y = AB + (A + B) (Ā + B).
Soln. :
Step 1 : Bring the given expression into the SOP form :
Given expression : Y = AB + (A + B) (Ā + B)
= AB + (AĀ + AB + ĀB + BB)
Step 2 : Search for common factors and simplify :
Y = AB + AĀ + AB + ĀB + BB
= AB + AB + BB + AĀ + ĀB
But AB + AB = AB, BB = B and AĀ = 0
∴ Y = AB + B + ĀB = B (A + 1) + ĀB
But (A + 1) = 1
∴ Y = B + ĀB = B (1 + Ā) = B   …since (1 + Ā) = 1
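A quick cross-check of Ex. 4.4.1 using SymPy's Boolean simplifier (a sketch of mine, not part of the original text) :

from sympy import symbols
from sympy.logic.boolalg import simplify_logic

A, B = symbols('A B')
Y = (A & B) | ((A | B) & (~A | B))
print(simplify_logic(Y))        # prints B, matching the algebraic result above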
Ex. 4.4.2 : For the logic circuit shown in Fig. P. 4.4.2, write the Boolean expression and simplify it.
Soln. :
From the circuit, Y = (A + B) (AB + ABC)
= AAB + AABC + BAB + BABC
= AB + ABC + AB + ABC
= AB + AB + ABC + ABC
But AB + AB = AB and ABC + ABC = ABC
∴ Y = AB + ABC = AB (1 + C)
∴ Y = AB   …since 1 + C = 1.
This is the simplified expression.

4.5 Karnaugh-Map Simplification (The Map Method) :
This is another simplification technique used to reduce a Boolean equation. It overcomes the disadvantages of the algebraic simplification technique.
K-map (short form of Karnaugh map) is a graphical method of simplifying a Boolean equation. A K-map is a graphical chart made up of rectangular boxes.
The information contained in a truth table, or available in the SOP or POS form, can be represented on a K-map. The K-map can be used for systematic simplification of Boolean expressions for which the algebraic method becomes cumbersome.
The K-map is ideally suited for designing combinational logic circuits using either the SOP method or the POS method.
Terms :
The rectangular boxes in a K-map are to be filled with the values of output Y corresponding to different combinations of inputs, as shown in Fig. 4.5.2.
Each row and column of a K-map is labelled with a product of input variables. For example, if a row is labelled ĀB̄ and the first column with C̄D̄, then the product term written in this box is ĀB̄C̄D̄.

4.5.3 Alternative Way to Label the K-map :
We can label the rows and columns of a K-map in a different way, as shown in Fig. 4.5.3. Instead of labelling the rows and columns with the inputs and their complements (A, Ā, AB etc.), their values in terms of 0s and 1s are used for labelling. And inside the boxes, instead of writing the actual product term, the corresponding shorthand minterm notations m0, m1, ….. are entered.

Referring to Fig. 4.5.4(a) we conclude that inside the boxes of the K-map we have to enter the values of output Y for the different combinations of inputs A and B (a 2-variable K-map). Fig. 4.5.4(b) shows the representation of a truth table using a 3-variable K-map and Fig. 4.5.4(c) shows the representation of a truth table using a 4-variable K-map.
(C-222) Fig. 4.5.4(c) : Relation between the truth table and 4-variable K-map
(C-224) Fig. P. 4.5.2 : Representation of canonical SOP on Karnaugh map
4.5.5 Representation of Standard SOP Form on K-map :
The logical expression in standard SOP form can be represented with the help of a K-map by simply entering 1's in the cells (boxes) of the K-map corresponding to each minterm present in the equation.

Ex. 4.5.1 : Represent the following expression on the Karnaugh map.
Y = ĀBC + ĀB̄C̄ + ABC̄ + AB̄C + ABC.
Soln. :
The given expression is in the standard SOP form. Enter a 1 in the cell corresponding to each minterm, as shown in Fig. P. 4.5.1.

4.6 Simplification of Boolean Expressions using K-map :
Simplification of Boolean expressions using a K-map is based on combining or grouping the terms in the adjacent cells (or boxes) of the K-map.
Two cells of a K-map are said to be adjacent if they differ in only one variable, as shown in Fig. 4.6.1.
Note that the cells to the left or right, or at the top and bottom, of a cell are adjacent cells, but cells connected diagonally are not adjacent.
The leftmost cells are adjacent to their corresponding rightmost cells, and the topmost cells are adjacent to their corresponding bottommost cells (wrap-around adjacency).
The grouping of adjacent 1's or adjacent 0's results in the simplification of the Boolean expression in the SOP or POS form respectively.
If we group the adjacent 1's, the result of the simplification is in SOP (Sum of Products) form.
And if the adjacent 0's are grouped, the result of the simplification is in POS (Product of Sums) form.

4.6.2 Way of Grouping (Pairs, Quads and Octets) :
1. Pair : A group of two adjacent 1's or 0's is called a pair.
2. Quad : A group of four adjacent 1's or 0's is called a quad.
3. Octet : A group of eight adjacent 1's or 0's is called an octet.

Pair :
Given K-map :
Y = ĀB̄C + AB̄C
= B̄C (Ā + A)
∴ Y = B̄C   ….since (Ā + A) = 1
A pair thus eliminates one variable. Henceforth, just by looking at the pair we should be able to identify the variable that will be eliminated. The other types of pairs and the corresponding simplifications are shown in Fig. 4.6.4(a) and Fig. 4.6.4(b).
(C-228) Fig. 4.6.3(d)
Simplification :
Y = ĀBC̄D̄ + ABC̄D̄ = BC̄D̄ (Ā + A)
∴ Y = BC̄D̄
Thus A is eliminated.

Quad :
In a quad, two of the variables among the grouped minterms are the same and the other two are not the same. After forming a quad, the simplification takes place in such a way that the two variables which are not the same are eliminated. Thus a quad eliminates two variables.
Fig. 4.6.5(a) to (f) shows various types of quads and the corresponding mathematical simplification. Note that overlapping is possible in quads.
Example of overlap :
Given K-map :
Final expression :
Y = Ā + B̄
Note that in order to cover all the 1's, we have to overlap the two pairs as shown.

Given K-map :
Y = AB̄C̄D̄ + AB̄C̄D + AB̄CD + AB̄CD̄
= AB̄ [C̄D̄ + C̄D + CD + CD̄]
= AB̄ [C̄ (D̄ + D) + C (D + D̄)]
∴ Y = AB̄ [C̄ + C] = AB̄
Thus C and D are eliminated.

Leftmost and rightmost 1's forming a Quad :
Given K-map :
Simplification :
Y = ĀB̄C̄D̄ + ĀBC̄D̄ + AB̄C̄D̄ + ABC̄D̄
= C̄D̄ [ĀB̄ + ĀB + AB̄ + AB]
= C̄D̄ [Ā (B̄ + B) + A (B̄ + B)]
= C̄D̄ [Ā + A] = C̄D̄
Four adjacent 1's forming a square :
Y = ĀBC̄D + ĀBCD + ABC̄D + ABCD
= BC̄D (Ā + A) + BCD (Ā + A)
= BC̄D + BCD
∴ Y = BD (C̄ + C) = BD
Thus A and C are eliminated.

= BC̄D + BC̄D̄ = BC̄ (D + D̄)
∴ Y = BC̄
Thus A and D are eliminated.

1's corresponding to the corners forming a Quad :
(C-231(c)) Fig. 4.6.5(e)
= B̄C̄D̄ (Ā + A) + B̄CD̄ (Ā + A)
= B̄C̄D̄ + B̄CD̄ = B̄D̄ (C̄ + C)
∴ Y = B̄D̄

Y = ĀB̄C̄D + ĀB̄CD + AB̄CD + AB̄C̄D
= ĀB̄D (C̄ + C) + AB̄D (C + C̄)
= ĀB̄D + AB̄D
∴ Y = B̄D (Ā + A) = B̄D
Fig. 4.6.6(a) to (d) shows various types of octets and the corresponding output.
Given K-map :
(C-235) Fig. 4.6.6(d)
Simplification :
Y = ĀB̄C̄D̄ + ĀB̄C̄D + ĀB̄CD + ĀB̄CD̄ + ĀBC̄D̄ + ĀBC̄D + ĀBCD + ĀBCD̄
= ĀB̄C̄ (D̄ + D) + ĀB̄C (D + D̄) + ĀBC̄ (D̄ + D) + ĀBC (D + D̄)
∴ Y = ĀB̄ (C̄ + C) + ĀB (C̄ + C)
∴ Y = Ā (B̄ + B) = Ā
The only variable that remains the same is Ā, so it appears as the output.

Summary (rules for grouping) :
1. No zeros allowed in a group.
2. No diagonal groups.
3. Only 1, 2, 4 or 8 cells (a power of two) in a group.
4. Groups should be as large as possible.
5. Every 1 must be in at least one group.
6. Overlapping is allowed.
7. Wrap-around is allowed.
8. Use the fewest number of groups possible.

4.7 Minimization of SOP Expressions (K-Map Simplification) :
Minimization procedure :
Step 4 : Identify the 1's which can form a quad in only one way and encircle them.
Step 5 : Identify the 1's which can form an octet in only one way and encircle them.
Step 6 : After identifying the pairs, quads and octets, check if any 1 is yet to be encircled. If yes, then encircle them with each other or with the already encircled 1's (by means of overlapping).
Note that the number of groups should be minimum. Also note that any 1 can be included any number of times without affecting the expression.
Let us solve some examples on this to make the concept clear.

Ex. 4.7.1 : A logical expression in the standard SOP form is as follows :
Y = ĀB̄C̄ + ĀBC̄ + ĀBC + AB̄C
Minimize it using the K-map.
Soln. :
Y = Σm (0, 2, 3, 5)
The required K-map is as shown in Fig. P. 4.7.1.

Ex. 4.7.2 : The logical expression representing a logic circuit is Y = Σm (0, 1, 2, 5, 13, 15). Draw the K-map and find the minimized logical expression.
Soln. :
From the given expression, it is clear that the number of variables is 4.
Y = m0 + m1 + m2 + m5 + m13 + m15
The required K-map is as shown in Fig. P. 4.7.2.
(C-251) Fig. P. 4.7.2
Minimized expression is, (C-6158)

Ex. 4.7.3 : For the logical expression given below, draw the K-map and obtain the simplified logical expression. Y = Σm (1, 5, 7, 9, 11, 13, 15). Realize the minimized expression using the basic gates.
Soln. :
The given expression is,
Y = m1 + m5 + m7 + m9 + m11 + m13 + m15
It can be expressed on the K-map as shown in Fig. P. 4.7.3(a).
(C-252) Fig. P. 4.7.3(a)
Realization :
Equation (1) can be realized as shown in Fig. P. 4.7.3(b).
(C-253) Fig. P. 4.7.3(b) : Realization with minimum number of gates
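Where a tool is handy, SymPy's SOPform can cross-check such K-map reductions. The sketch below (mine, not from the book) feeds it the minterm list of Ex. 4.7.3, with minterms written as bit-lists for portability across SymPy versions.

from sympy import symbols
from sympy.logic.boolalg import SOPform

A, B, C, D = symbols('A B C D')
minterms = [[int(b) for b in format(m, '04b')] for m in (1, 5, 7, 9, 11, 13, 15)]
print(SOPform([A, B, C, D], minterms))
# expected to be equivalent to AD + BD + C'D, i.e. D (A + B + C')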
Ex. 4.7.4 : Minimize the following Boolean expression using the K-map and realize it using the basic gates : Y = Σm (1, 3, 5, 9, 11, 13)
Soln. :
The given expression can be expressed in terms of the minterms as,
Y = m1 + m3 + m5 + m9 + m11 + m13
The corresponding K-map is shown in Fig. P. 4.7.4(a).
Step 2 : Realization using gates :

Ex. 4.7.5 : Minimize the following expression using the K-map and realize it using the basic gates :
Y = Σm (1, 2, 9, 10, 11, 14, 15)   May 19, 6 Marks
Soln. :
Step 1 : K-map simplification :
Step 2 : Realization using gates :
(C-256) Fig. P. 4.7.5(b) : Realization with minimum number of gates

Ex. 4.7.6 : Solve the following using the minimization technique :
z = f (A, B, C, D) = Σm (0, 2, 4, 7, 11, 13, 15)   Dec. 09, 10 Marks
Soln. :
z = ĀB̄D̄ + ĀC̄D̄ + ABD + BCD + ACD
Realization :

Ex. 4.7.7 : Solve the following equation using the corresponding minimization technique; also draw the MSI design for the minimized output equation :
Z = f (A, B, C, D) = Σm (0, 3, 4, 9, 10, 12, 14).   May 10, 12 Marks
Soln. : Solve it yourself.
Ans. :
Z = f(A, B, C, D) = ĀC̄D̄ + BC̄D̄ + ACD̄ + AB̄C̄D + ĀB̄CD
The effect of a redundant group and its elimination is illustrated in Ex. 4.7.8.

Ex. 4.7.10 : Minimize the following expression using the K-map.
Y = Σm (1, 5, 6, 7, 11, 12, 13, 15)
Soln. :
The given expression can be expressed in the standard SOP form as follows :
Y = m1 + m5 + m6 + m7 + m11 + m12 + m13 + m15
The corresponding K-map is shown in Fig. P. 4.7.10(a).
(C-259) Fig. P. 4.7.10(b)
Y = D (A ⊙ C) + B (A ⊕ C), where ⊙ denotes EX-NOR
(C-6357)
Soln. :
Step 1 : Enter 1 in the cell with A = 0, B = 0, C = 1, D = 1 for the first term ĀB̄CD, and in the cell with A = 0, B = 1, C = 1, D = 0 for the second term ĀBCD̄, as shown in Fig. P. 4.7.11(a).
Step 2 : Consider the third term ĀB̄C̄. Enter 1 in the two cells corresponding to it (for D = 0 and D = 1).
Step 4 :
(C-262) Fig. P. 4.7.11(c)
But it is not always true that the cells not containing 1's (in SOP) will contain 0's, because some combinations of the input variables do not occur.
Take the example of a 4-bit BCD counter. It will have valid outputs from 0000 to 1001 only.
Also, for some functions the outputs corresponding to certain combinations of input variables do not matter. That means for such input combinations it does not matter whether the value of the output is 0 or 1. In such a situation we are free to assume a 0 or 1 as the output for each such input combination.
These conditions are known as the "don't care" conditions.

Ex. : Z = f(A, B, C, D) = Σm (1, 3, 6, 7, 12, 13) + d (0, 2, 8, 9)   (May 12, 6 Marks)
Soln. :
Step 1 : Reduction using K-map :
(C-6161)
(C-3225) Fig. P. 4.7.13(b)
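Don't cares can also be handed straight to SymPy's SOPform; the sketch below (mine, not from the book) does so for the Σm(1, 3, 6, 7, 12, 13) + d(0, 2, 8, 9) example above.

from sympy import symbols
from sympy.logic.boolalg import SOPform

A, B, C, D = symbols('A B C D')
bits = lambda m: [int(b) for b in format(m, '04b')]
minterms  = [bits(m) for m in (1, 3, 6, 7, 12, 13)]
dontcares = [bits(m) for m in (0, 2, 8, 9)]
print(SOPform([A, B, C, D], minterms, dontcares))
# expected to be equivalent to A'B' + A'C + AC'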
f (A, B, C, D) = ĀB̄C + C̄D + AD + AB̄C̄
Step 2 : Implementation using basic gates :
(C-6330) Fig. P. 4.7.14(a) : Implementation using basic gates

Ex. 4.7.15 : Solve the following :
1. Z = f (A, B, C, D) = (1, 2, 7, 8, 10, 12, 15) + d (0, 5, 6)
2. Z = f (A, B, C, D) = (1, 3, 4, 6, 8, 11, 15) + d (0, 5, 7)
Soln. : Solve it yourself.

(C-286) Fig. 4.8.1 : (a) Two-variable K-map for POS form (b) Three-variable K-map for POS form (c) Four-variable K-map for POS form
Ex. 4.7.16 : Solve the following equation using the corresponding minimization technique. Draw the diagram for the output :
Z = f(A, B, C, D) = Σm (2, 4, 6, 11, 12, 14) + d (3, 10).   Dec. 11, 6 Marks
Soln. : Solve it yourself.

4.8 Representation of Standard POS Form on K-map :
The logical expression in the standard POS form can be represented on a K-map by entering 0's in the cells of the K-map corresponding to each maxterm present in the expression.
The entries in the K-map can be shown in terms of the maxterms M0, M1 …. etc. as shown in Fig. 4.8.2.

Ex. 4.8.1 : Represent the following expression on the Karnaugh map :
Y = (A + B + C) (A + B̄ + C) (Ā + B̄ + C).
Soln. :
Each term in the given logical equation is a maxterm :
(A + B + C) = M0 ,
(A + B̄ + C) = M2 ,
(Ā + B̄ + C) = M6
Enter a 0 corresponding to each maxterm as shown in Fig. P. 4.8.1.
(C-288) Fig. P. 4.8.1 : Representation of standard POS on K-map

Ex. 4.8.2 : Represent the given standard POS expression (four maxterms) on the K-map.
Soln. :
The given expression contains four maxterms; for example, (Ā + B̄ + C + D) = M12.
Enter a 0 corresponding to each maxterm as shown in Fig. P. 4.8.2.

Group 1 = Ā + B
Conclusion :
(C-291)

Minimization of POS expressions using K-map :
Minimization procedure :
1. The given POS expression consists of maxterms.
2. Corresponding to every maxterm, enter a 0 in the K-map.
(C-291(a))

Ex. 4.8.3 : Find the expression in the POS form for the K-map shown in Fig. P. 4.8.3.

Ex. 4.8.4 : Minimize the following standard POS expression using the K-map :
Simplification :
Group 1 is formed by two adjacent 0's. Multiplying the two maxterms of Group 1 and simplifying with the identities AA = A, BB̄ = 0, CC̄ = 0 and (1 + X) = 1 eliminates the variable in which the two adjacent cells differ, leaving a two-literal sum term. (C-6358)
Minimized expression is as follows :
(C-293(a))

(C-293) Fig. P. 4.8.5 : Given K-map
Soln. :
Soln. :
(C-295) Fig. P. 4.8.6
Expression for output Y = B + D
(C-6359)

Ex. 4.8.7 : For the K-map shown in Fig. P. 4.8.7, write the expression for the output in the POS form.
Soln. :
(C-6361)

Ex. 4.8.8 :
Soln. :
Step 1 : Simplification using K-map :
1. Figs. P. 4.8.8(b) and (c) show two different but valid ways of grouping the 0's.
2. The corresponding equations for the output in the POS form are obtained after that.
3. Referring to the grouping shown in Fig. P. 4.8.8(b) we get,
This shows that for the same K-map we can get completely different answers, and all of them are correct.
(C-3228) Fig. P. 4.8.10
f(A, B, C, D) = (A + B̄) (Ā + C + D)   ...Ans.

Ex. 4.8.11 : Minimize f (A, B, C, D) = Σm (0, 1, 2, 4, 8, 9, 12, 13) + d (3, 6, 7) using the K-map.   (May 14, 6 Marks)
Soln. :
Step 1 : K-map :
(C-4805) Fig. P. 4.8.11(a)
f (A, B, C, D) = AC̄ + C̄D̄ + ĀB̄

Z = (A + D̄) (A + B̄) (B + C̄)
Applying De Morgan's theorem (double complementation) for realization :
Z = [ (A + D̄)′ + (A + B̄)′ + (B + C̄)′ ]′
Ex. 4.8.13 : Solve the following equation using the corresponding minimization technique; also draw the MSI design for the minimized output equation : Z = f (A, B, C, D) = (2, 7, 8, 10, 11, 13, 15)   May 10, 12 Marks
Soln. : Solve it yourself.

Ex. 4.8.14 : Solve the following equation using the K-map minimization technique. Draw the diagram for the output : Z = f(A, B, C, D) = ΠM (0, 1, 6, 7, 8, 9)   Dec. 11, 6 Marks
Soln. : Solve it yourself.

Review Questions

Q. 1 Define the following terms :
1. Product term
2. Sum term
Q. 2 Explain the terms SOP and POS related to a Boolean function.
Q. 3 Convert the equation into standard POS form : Y = (A + B) (A + C) (B̄ + C̄).
Q. 4 State the disadvantages of the algebraic method of simplification.
Q. 5 Write the standard SOP equation for the truth table shown in Table 1.

Table 1 :
A B C | Y
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 0
1 0 0 | 1
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1

Q. 6 For the same truth table, write the standard POS expression.
Q. 7 Explain the K-map reduction technique.
Q. 8 Draw the structure of a four-input K-map.
Q. 12 Draw the structure of a four-variable K-map to represent the standard POS form.
Q. 13 Solve the following with K-maps :
1. f (A,B,C) = Σm (0, 1, 3, 4, 5)
2. f (A,B,C) = Σm (0, 1, 2, 3, 6, 7)
Q. 14 Explain the different methods used to simplify a Boolean function.
Chapter 5

Combinational Logic Design

Syllabus
Design using SSI chips : Code converters, Half- adder, Full adder, Half subtractor, Full subtractor, n bit
binary adder.
Introduction to MSI chips : Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (74238) Encoder
(IC 74147), Binary adder (IC 7483).
Design using MSI chips : BCD adder & subtractor using IC 7483, Implementation of logic functions
using IC 74153 & 74138.
Case Study : Use of combinational logic design in 7 segment display interface.
Chapter Contents
5.1 Introduction to Combinational Circuits 5.11 Study of Different Multiplexer ICs
5.2 Design of Combinational Logic using SSI 5.12 Multiplexer Tree/Cascading of multiplexer
chips
5.3 Binary Adders and Subtractors 5.13 Use of Multiplexers in Combinational Logic
Design
5.4 The n-Bit Parallel Adder 5.14 Demultiplexers
5.5 n-bit Parallel Subtractor 5.15 Types of Demultiplexers
5.6 BCD Addition 5.16 Demultiplexer Tree
5.7 BCD Subtractor using MSI IC 7483 5.17 Encoders
5.8 Magnitude Comparators 5.18 Priority Encoder
5.9 Multiplexer (Data Selector) 5.19 Decoder
5.10 Types of Multiplexers 5.20 Case Study : Combinational Logic Design of
BCD to 7 Segment Display Controller
Combinational circuits :
A combinational circuit is a digital system whose output at any instant of time depends only on the levels present at its input terminals.
Combinational circuits do not use any memory. Hence the previous state of the inputs does not have any effect on the present state of the circuit.
A combinational circuit is made up of logic gates. These gates are connected suitably between the inputs and outputs of the combinational circuit.
(C-332) Fig. 5.1.1 : Block diagram of a combinational circuit
A combinational circuit operates in three steps :
1. It accepts n different inputs.
2. The combination of gates operates on the inputs.
3. "m" different outputs are produced as per the requirement.

Analysis :
In the analysis problem, the logic diagram of a combinational circuit is given to us. We have to obtain the Boolean expression for its output, or write a truth table, or explain the operation of the circuit.
Analysis Procedure :
Step 1 : Boolean expressions for the outputs of all the input gates :
The input gates are 1, 2, 3 and 4. The Boolean expressions for their outputs are as follows :
T1 = AB,  T2 = BC,  T3 = C̄,  T4 = B̄
Step 2 : Boolean expressions for the remaining gates :
T5 = T1 + T2 = AB + BC
T6 = F1 = T5 · T3 = (AB + BC) C̄ = ABC̄ + BCC̄ = ABC̄
T7 = F2 = T2 + T3 = BC + C̄
Step 3 : Boolean expressions for the final outputs :
F1 = T6 = ABC̄
F2 = T7 = BC + C̄
Step 4 : Write the truth table :
(C-8244) Table 5.1.1
Note that the outputs of all the gates (T1 to T5) are first written in the truth table for the various combinations of inputs, and then the values of F1 and F2 are entered.

Design Procedure :
Step 2 : Determine the number of inputs and outputs and assign letter symbols to the input and output variables, for example F1, F2, … for the outputs and A, B, C, … for the inputs.
Step 3 : Prepare a truth table relating the inputs and outputs.
Step 4 : Write a K-map for each output in terms of the inputs and obtain the simplified Boolean expression for each output.
Step 5 : Draw the logic diagram (combinational circuit).

Ex. 5.1.1 : A circuit has four inputs and two outputs. One of the outputs is high when the majority of inputs are high. The second output is high only when all inputs are of the same type. Design the combinational circuit.
OR
Design a logic circuit which has three inputs A, B, C and gives a high output when the majority of inputs is high.
Soln. :
Step 1 : Assign symbols to the input and output variables :
Let the four inputs be A, B, C, D and the two outputs be Y1 and Y2.
Step 2 : Write the truth table :
The truth table is as given in Table P. 5.1.1.
(C-8245) Table P. 5.1.1 : Truth table relating the inputs and outputs
From the truth table we note the following :
Y1 = 1 when the number of 1 inputs is higher than the number of 0 inputs.
Y2 = 1 when A = B = C = D.
Step 3 : Write a K-map for each output and get the simplified expression :
K-maps for the two outputs and the corresponding simplified Boolean expressions are given in Figs. P. 5.1.1(a) and (b).
(C-334) Fig. P. 5.1.1(a) : K-map and simplification for Y1
The logic diagram using logic gates is as shown in Fig. P. 5.1.1(c).

5.2.1 Code Converters :
In this section we are going to convert one type of code into another type.

5.2.1.1 BCD to Excess-3 Converter :   SPPU : May 06, Dec. 12
University Questions.
Principle :
If we add 3 (0011) to each BCD digit, we get the corresponding Excess-3 code. For example, adding (0011 0011) to the BCD code (0001 0011) gives the Excess-3 code 0100 0110.
Step 1 : Write the truth table relating BCD and
Excess 3 :
(C-8109) Table 5.2.1 : Truth table relating BCD and Excess-3
codes
(C-366) Fig. 5.2.1(c) : K-map for E1
Simplified equations :
E0 = D̄0
E1 = D̄1 D̄0 + D1 D0 = (D1 ⊙ D0)
E2 = D̄2 D1 + D̄2 D0 + D2 D̄1 D̄0 = D̄2 (D1 + D0) + D2 D̄1 D̄0
E3 = D3 + D2 (D0 + D1)
University Questions.
Step 1 : Write the truth table relating the BCD and Gray codes :
(C-371) Fig. 5.2.3(b) : K-map for G2
Simplified equation : G3 = B3

5.2.3 Binary to Gray Code Converter :   SPPU : May 12.
University Questions.
Q. 1 Design a 4-bit binary to Gray code converter. State the applications of the Gray code.   (May 12, 8 Marks)
In the Gray code only one bit changes at a time.
Step 1 : Write the truth table relating the binary inputs and Gray outputs :
(C-8252) Table 5.2.3 : Truth table relating binary and Gray codes
(C-374) Fig. 5.2.5(a) : K-map for G3
(C-374) Fig. 5.2.5(b) : K-map for G2
Simplified equation : G2 = B̄3 B2 + B3 B̄2 = B3 ⊕ B2
Simplified equation : G1 = B̄2 B1 + B2 B̄1 = B2 ⊕ B1
Simplified equation : G0 = B̄1 B0 + B1 B̄0 = B1 ⊕ B0
The binary to Gray code converter is shown in Fig. 5.2.5(e).
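A small Python sketch (not from the book) of the same conversion : Gi = B(i+1) ⊕ Bi is equivalent to XOR-ing the binary word with itself shifted right by one, and the inverse conversion undoes it bit by bit.

def binary_to_gray(b):
    return b ^ (b >> 1)          # G3 = B3, Gi = B(i+1) XOR Bi

def gray_to_binary(g):
    b = 0
    while g:                     # B3 = G3, Bi = B(i+1) XOR Gi
        b ^= g
        g >>= 1
    return b

for n in range(16):
    assert gray_to_binary(binary_to_gray(n)) == n
print([format(binary_to_gray(n), '04b') for n in range(4)])   # 0000 0001 0011 0010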
(C-376) Fig. 5.2.5(e) : Binary to gray code converter

Ex. 5.2.1 : Convert 4-bit Gray code into the corresponding BCD code. Show the truth table and MSI circuit.   May 10, 6 Marks
Soln. :
Step 1 : Write the truth table relating the Gray and BCD codes : (C-8279)
Step 2 : Write K-maps for each output and get the simplified equation :
Step 3 : Realization :
The Gray to BCD converter is shown in Fig. P. 5.2.1(a).

Soln. :
Step 1 : Write the truth table for the 4-bit Gray to 5-bit BCD code converter :
(C-6178)
B0 = (G3 ⊙ G2)(Ḡ1 G0 + G1 Ḡ0) + (G3 ⊕ G2)(Ḡ1 Ḡ0 + G1 G0)
B0 = (G3 ⊙ G2)(G1 ⊕ G0) + (G3 ⊕ G2)(G1 ⊙ G0)
(C-6179)
Step 2 : K-map and simplification :
Let X = G3 ⊕ G2
and Y = (G1 ⊕ G0)
B0 = X̄ Y + X Ȳ = X ⊕ Y
Substituting for X and Y we get,
B0 = (G3 ⊕ G2) ⊕ (G1 ⊕ G0)
K-maps for the carry and sum outputs are as shown in Figs. 5.3.2(a) and (b).   …(5.3.1)
(C-346)

5.3.1 Types of Binary Adders :
University Questions.
Q. 1 What do you mean by half adder ?   (May 07, 2 Marks)
Definition and block diagram :
A half adder is a combinational logic circuit with two inputs and two outputs. The half adder circuit is supposed to add two single-bit binary numbers A and B. Therefore the truth table of a half adder is as shown in Fig. 5.3.1(b).
The disadvantage of the half adder is that the addition of three bits cannot be performed.

Let Number A = A1 A0
and Number B = B1 B0
Then the addition should take place as shown in Fig. 5.3.4(a).
The K-maps :
The K-maps for the sum (S) and carry out (Co) outputs
and the corresponding Boolean expressions are as
(C-347) Fig. 5.3.4(a) : Two bit binary addition shown in Fig. 5.3.6.
A half adder can add A0 and B0 to produce S0 and C0. Note that the K-maps have been written from the truth
But the addition of next bits requires the addition of A1,
table of Fig. 5.3.5(b).
B1 and C0.
For the sum output :
The addition of three bits is not possible to perform by
using a half adder. Hence we cannot use a half adder in
practice.
5.3.3 Full Adder : SPPU : May 07.
University Questions.
Q. 1 What do you mean by full adder ?
Definition :
(May 07, 6 Marks)
(C-351) Fig. 5.3.6(a) : K-map for sum output
A full adder is a three-input, two-output combinational logic circuit which can add three single bits applied at its input to produce the Sum and Carry outputs.
Expression for sum output :
S = Cin ⊕ A ⊕ B   …(5.3.2)
Logic diagram for full adder :

5.3.4 Full Adder using Half Adders :   SPPU : May 07.
University Questions.
Q. 1 How will you implement a full adder using half adders ? Explain with a circuit diagram.   (May 07, 6 Marks)
The full adder circuit can be constructed using two half adders as shown in Fig. 5.3.7, and the detailed circuit is shown in Fig. 5.3.8.
(C-354) Fig. 5.3.7 : Full adder using half adders
(C-355) Fig. 5.3.8 : Full adder using two half adders
Now let us prove that this circuit acts as a full adder.
Proof :
Refer Fig. 5.3.8 and write the expression for the sum output as,
S = (A ⊕ B) ⊕ Cin = A ⊕ B ⊕ Cin
This expression is the same as that obtained for the full adder. Thus the sum output has been successfully implemented by the circuit shown in Fig. 5.3.8.
Now write the expression for the carry output Co as,
Co = (A ⊕ B) Cin + AB
= (AB̄ + ĀB) Cin + AB = AB̄Cin + ĀBCin + AB
= AB̄Cin + ĀBCin + AB (1 + Cin)
= AB̄Cin + ĀBCin + AB + ABCin
= BCin (Ā + A) + AB̄Cin + AB
= BCin + AB̄Cin + AB
= BCin + AB̄Cin + AB (1 + Cin)
= BCin + AB̄Cin + AB + ABCin
= BCin + AB + ACin (B̄ + B)
∴ Co = BCin + AB + ACin   …Proved.

The rules of binary subtraction are as follows :
Definition :
A half subtractor is a combinational circuit with two inputs and two outputs (difference and borrow). It produces the difference between the two binary bits at the input and also produces an output (borrow) to indicate whether a 1 has been borrowed.
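A short Python sketch (my own, not from the book) modelling the two-half-adder construction of the full adder and checking it against ordinary addition for all eight input combinations.

def half_adder(a, b):
    return a ^ b, a & b             # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)       # first half adder
    s, c2 = half_adder(s1, cin)     # second half adder
    return s, c1 | c2               # Co = AB + (A XOR B) Cin

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, co = full_adder(a, b, cin)
            assert 2 * co + s == a + b + cin
print("full adder verified for all 8 input combinations")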
The truth table showing the outputs of a half subtractor for all the possible combinations of inputs is shown in Table 5.3.1.
(C-8058) Table 5.3.1 : Truth table for half subtractor
Difference D = AB̄ + ĀB = A ⊕ B
Borrow Bo = ĀB
Logic diagram :
(C-356) Fig. 5.3.9
A half subtractor can only perform the subtraction of two single bits. To subtract three bits (including the borrow from the previous stage) we have to use the full subtractor.

The full subtractor is a combinational circuit with three inputs A, B and Bin and two outputs D and Bo.
A is the minuend, B is the subtrahend, Bin is the borrow produced by the previous stage, D is the difference output and Bo is the borrow output.
Truth table :
(C-8059) Table 5.3.2 : Truth table for a full subtractor
K-maps for the D and Bo outputs are shown in Figs. 5.3.11(a) and (b).
D = ĀB̄Bin + ĀBB̄in + AB̄B̄in + ABBin
Simplification for difference output :
D = B̄in (A ⊕ B) + Bin (A ⊙ B)
Let A ⊕ B = C, then D = B̄in C + Bin C̄ = Bin ⊕ C
∴ D = Bin ⊕ A ⊕ B   …(5.3.4)
Simplification for borrow output :
Bo = ĀBin + ĀB + BBin
Logic diagram :
(C-361) Fig. 5.3.12 : Logic diagram for a full subtractor
The logic diagram of the full subtractor is shown in Fig. 5.3.12. This has been drawn by using the Boolean equations (5.3.8) and (5.3.9).

Full Subtractor using Half Subtractors :   (C-6376)
University Questions
Q. 1 With the help of a circuit diagram explain the full subtractor using half subtractors.   (Dec. 08, 8 Marks)
Logic diagram :
Proof :
Refer Fig. 5.3.13 to write the expression for the difference output D; it is exactly the same as that for D of the full subtractor.
Now write the expression for the borrow output :
Bo = (ĀB̄ + AB) Bin + ĀB = ĀB̄Bin + ABBin + ĀB
= ĀB̄Bin + ABBin + ĀB (1 + Bin)   …since (1 + Bin) = 1
= ĀBin + BBin + ĀB   …Proved.
Note that this expression is exactly the same as that for Bo of the full subtractor. Thus the circuit shown in Fig. 5.3.13 acts as a full subtractor.
5.4 The n-Bit Parallel Adder :
The full adder is capable of adding only two single-digit binary numbers along with a carry input.
But in practice we need to add binary numbers which are much larger in size than just one bit. The two binary numbers to be added could be 4-bit, 8-bit or 16-bit long.
(C-386) Fig. 5.4.2 : Block diagram of a four-bit parallel adder
Let the two four-bit words that are to be added be A and B, denoted as follows :
A = A3 A2 A1 A0,   B = B3 B2 B1 B0.
A0 and B0 represent the LSBs of the four-bit words A and B. Hence full adder-0 is the lowest stage, and its Cin has been connected to 0 permanently.
The rest of the connections are exactly the same as those done for the n-bit parallel adder.
The four-bit parallel adder is a very common logic circuit. It is normally shown by a block diagram as in Fig. 5.4.3.
(C-6475)
From this addition it is evident that the addition of 1's in the second position produces a carry which is added to the bits in the third position, and the carry produced in the third position is added to the bits in the fourth position.
Therefore the sum bit produced in the MSB position of the result depends on the carry generated due to the additions in the preceding stages. The real problem is created due to this : the final sum cannot appear until all the intermediate carries have been generated. For example, if n = 16 then the total time required for the addition increases accordingly, because the carries propagate through all the stages like ripples.
The look-ahead-carry addition will therefore speed up the addition process. The adder with look-ahead carry needs additional hardware, but the speed of that adder is independent of the number of bits.
Consider the block diagram of a full adder in Fig. 5.4.4(a) and its AND-OR-EXOR realization shown in Fig. 5.4.4(b).
(C-389) Fig. 5.4.4
Q. 1 Draw and explain a 4-bit full adder. How will you generate the look-ahead carry for your circuit ?   (May 06, 6 Marks)
Q. 2 What do you mean by carry generate and carry propagate ? Explain with suitable equations and block diagram. Write an equation for C3 using carry generate and carry propagate. Simplify this equation to get minimum gate delay. How many gate delays are needed to generate C3 with this principle ? Explain your calculations.   (Dec. 07, 10 Marks)
Q. 3 Draw and explain the look-ahead carry generator.   (May 14, 6 Marks)

Gi = Ai Bi
Pi = Ai ⊕ Bi
and Ci = Gi + Pi Ci–1   …(5.4.4)
The carry output Gi of the first half adder is equal to 1 if Ai = Bi = 1; a carry is then generated at the ith stage of the parallel adder, that means Ci = 1. This variable Gi is known as Carry Generate and its value does not depend on the input carry, i.e. Ci–1.
The variable Pi is called Carry Propagate because this term is associated with the propagation of the carry from Ci–1 to Ci.
Now consider Equation (5.4.4), i.e. Ci = Gi + Pi Ci–1. Using this equation, we can write the expression for the carry output of each stage in a 4-bit parallel adder as follows :
Stage 0 : C0 = G0 + P0 C–1   …(5.4.5)
Stage 1 : C1 = G1 + P1 C0 = G1 + P1 G0 + P1 P0 C–1
Stage 2 : C2 = G2 + P2 C1 = G2 + P2 G1 + P2 P1 G0 + P2 P1 P0 C–1
Stage 3 : C3 = G3 + P3 C2 = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 + P3 P2 P1 P0 C–1   …(5.4.8)
Thus it is possible to produce the carry outputs C0, C1, C2 and C3 simultaneously, instead of waiting for the carries to propagate from stage to stage like ripples.

Logic diagram :
(C-390) Fig. 5.4.5 : Logic diagram of the look-ahead carry generator
The Pi and Gi variables (i.e. P0, P1, ….., P3 and G0, G1, ….., G3) are obtained from the inputs Ai and Bi (i.e. A3 A2 A1 A0 and B3 B2 B1 B0) by using the EX-OR and AND gates respectively.
The carry outputs C0 through C3 are produced simultaneously by the look-ahead carry generator, as explained earlier.
These carry outputs and the Pi variables are EX-ORed to produce the sum outputs S0 through S3. For example, S3 is produced by EX-ORing P3 and C2, then S2 is produced by EX-ORing P2 and C1, and so on.
The block diagram of a four-bit parallel adder using the look-ahead carry generator is shown in the corresponding figure.

5.4.5 MSI Binary Adder IC 74 LS 83 / 74 LS 283 :   SPPU : Dec. 10, Dec. 11.
University Questions.
Q. 1 Explain for IC 74LSXX various characteristics in brief.   (Dec. 10, 4 Marks)
Q. 2 What is the use of the 7483 chip ?   (Dec. 11, 4 Marks)
Block diagram :
The most common binary parallel adder is the IC 7483 / 74 LS 283.
Cin 0 is the input carry and Cout 3 represents the output carry. S3, S2, S1, S0 represent the sum outputs, with S3 as the MSB.
Pin diagram :
The pin diagram of IC 7483 is shown in Fig. 5.4.8.
(C-392) Fig. 5.4.8 : Pin diagram of IC – 7483
This IC adds the two four-bit words A and B and the bit at Cin 0, and produces a four-bit sum output along with the carry output at Cout 3.
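A small Python sketch (mine, not from the book) that evaluates the recurrence Ci = Gi + Pi·Ci–1 of Equation (5.4.4). In software the carries are computed in a loop, but each one depends only on the Gi, Pi values and the initial carry once the recurrence is expanded, which is exactly the point of the look-ahead scheme.

def lookahead_add4(a_bits, b_bits, c_in=0):
    # a_bits, b_bits : 4-bit lists, LSB first
    g = [a & b for a, b in zip(a_bits, b_bits)]      # carry generate Gi = Ai.Bi
    p = [a ^ b for a, b in zip(a_bits, b_bits)]      # carry propagate Pi = Ai xor Bi
    c = [c_in]
    for i in range(4):
        c.append(g[i] | (p[i] & c[i]))               # Ci = Gi + Pi.C(i-1)
    s = [p[i] ^ c[i] for i in range(4)]              # Si = Pi xor C(i-1)
    return s, c[4]

s, cout = lookahead_add4([1, 0, 1, 1], [1, 1, 0, 0])  # 13 + 3
print(s, cout)                                        # [0, 0, 0, 0] and carry 1 -> 16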
Functional symbol :
Both these words are applied at the inputs of the I.C. 7483 / 74 LS 283.
Fig. 5.4.10 shows an 8-bit adder using two 4-bit adders. Similarly it is possible to connect a number of adders to make an n-bit adder.
(C-394) Fig. 5.4.10 : 8-bit addition using 4-bit adders
The Cout 3, i.e. the carry output of adder-1, is connected to the Cin 0 input of adder-2. The second adder adds this carry and the four MSB bits of the two numbers. Cout 7 of adder-2 acts as the final output carry.

5.5 n-bit Parallel Subtractor :
The subtraction can be carried out by taking the 1's or 2's complement of the number to be subtracted. For example, we can perform the subtraction (A – B) by adding the 2's complement of B to A.
The adder will produce the subtraction at its sum outputs S3 S2 S1 S0. The word S3 S2 S1 S0 represents the result of the binary subtraction (A – B), and the carry output Cout represents the polarity of the result.
If A > B then Cout = 0 and the result is in true binary form, but if A < B then Cout = 1 and the result is negative and in the 2's complement form.

5.5.2 4-Bit Binary Parallel Adder / Subtractor using IC 7483 :
The 4-bit addition or subtraction can be obtained using the same circuit, shown in Fig. 5.5.2. It makes use of the MSI 4-bit adder IC 7483 and EX-OR gates.
Operation as Adder (M = 0) :
The carry input Cin is connected to M, so Cin = 0. Therefore the adder adds A + B + Cin = A + B since Cin = 0. Thus with M = 0, addition of A and B takes place.
Operation as Subtractor (M = 1) :
Since M = 1, one input of each EX-OR gate is now 1. Hence each EX-OR gate acts as an inverter. This is because 1 ⊕ 0 = 1 and 1 ⊕ 1 = 0.
The carry input terminal Cin is connected to M, so Cin = 1.
The inverted number B adds with Cin = 1 to give the 2's complement of B. Hence the adder will add A with the 2's complement of B, and the result is actually the subtraction A – B.

(C-76) Fig. 5.6.1 : Illustration of case 1 in BCD addition
Case 2 : Sum greater than 9 but carry = 0
In this case the sum of the two BCD numbers is greater than 9. So we have to correct the sum by adding decimal 6, or BCD 0110, to it. After doing this we get the correct BCD sum.
Case 2 of BCD addition is illustrated in Fig. 5.6.2.
Soln. :
(79)BCD + (16)BCD :
A nonzero carry indicates that the answer is wrong and needs correction. Add (6)10 or (0110)BCD to the sum to correct the answer.
(C-5229)
Add (6)10 to the invalid BCD digit.
(C-5230)
∴ (79)BCD + (16)BCD = (95)BCD   ...Ans.
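A small Python sketch (not from the book) mirroring what the IC-7483-based BCD adder does digit by digit : binary add first, then add 6 (0110) whenever the raw sum exceeds 9 or produces a carry. The helper name bcd_digit_add is only illustrative.

def bcd_digit_add(a, b, carry_in=0):
    raw = a + b + carry_in
    carry = 1 if raw > 9 else 0
    if carry:
        raw += 6                     # +0110 correction
    return carry, raw & 0b1111       # (carry out, corrected BCD digit)

c, units = bcd_digit_add(9, 6)       # 9 + 6 = 15 -> corrected to 0101 with carry 1
c, tens  = bcd_digit_add(7, 1, c)    # 7 + 1 + 1 = 9, no correction
print(tens, units)                   # 9 5 -> (95)BCD, matching the example above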
SPPU : Dec. 10, May 11, May 12, Dec. 12, May 18.
University Questions.
The BCD adder requires :
1. A 4-bit binary adder to add the given two 4-bit BCD numbers A and B.
2. A combinational circuit to check if the sum is greater than 9 or carry = 1.
3. Another 4-bit binary adder to add six (0110) to the incorrect sum if sum > 9 or carry = 1.
So we have to design the combinational circuit that finds out whether the sum is greater than 9 or carry = 1.
Write K-map :
(C-398) Fig. 5.6.5 : K-map for Y output
The output of the combinational circuit is to be used as the final output carry, and the carry output of adder-2 is to be ignored.
Operation :
Case I : Sum ≤ 9 and carry = 0
The output of the combinational circuit Y = 0. Hence B3 B2 B1 B0 = 0000 for adder-2. Hence the output of adder-2 is the same as that of adder-1.
Case II : Sum > 9 and carry = 0
If S3 S2 S1 S0 of adder-1 is greater than 9, then the output Y of the combinational circuit becomes 1.
B3 B2 B1 B0 = 0110 (of adder-2)
Hence six (0110) will be added to the sum output of adder-1.
We get the corrected BCD result at the sum output of adder-2.

5.7 BCD Subtractor using MSI IC 7483 :
BCD subtraction can be performed by two methods :
1. Using 9's complement   2. Using 10's complement

5.7.1 BCD Subtraction using 9's Complement :
The 9's complement of a BCD number can be obtained by subtracting it from 9. For example, the 9's complement of 1 is 8. The 9's complements of the various digits are given in Table 5.7.1.
Table 5.7.1 : 9's complement of various decimal digits
Decimal digit  | 0 1 2 3 4 5 6 7 8 9
9's complement | 9 8 7 6 5 4 3 2 1 0
Procedure for BCD subtraction :
The BCD subtraction using the 9's complement is performed as follows :
Step 1 : Obtain the 9's complement of number B (i.e. the number to be subtracted).
Step 2 : Add A and the 9's complement of B.
Step 3 : If a carry is generated in Step 2, then add it to the sum to obtain the final result. This carry is called the end-around carry.
Step 4 : If a carry is not produced, then the result is negative and in the 9's complement form. So take the 9's complement of the result.

Ex. 5.7.1 : Subtract (3)10 from (7)10 in BCD.
Soln. :
Step 1 : Obtain the 9's complement of (3)10 :
The 9's complement of (3)10 is 9 – 3 = 6.
Step 2 : Add 7 and the 9's complement of 3 : (C-6258)

The circuit diagram of a 4-bit BCD subtractor is shown in Fig. 5.7.1. It consists of four binary parallel adders (IC 7483).
Adder – 1 obtains the 9's complement of number B.
Adders – 2 and 3 form the normal 4-bit BCD adder with a facility to add (6)10, i.e. (0110)2, for correction.
Adder – 2 adds number A with the 9's complement of number B. The combinational circuit associated with adder – 3 will correct the sum by adding (6)10 or (0110)2 if necessary.
The output of this combinational circuit is used further as a carry. At the output of adder – 3 we get the correct BCD sum of A and the 9's complement of B.
Adder – 4 is used to either add 1 to the output of adder – 3, or take the 9's complement of the output of adder – 3, depending on the status of the carry as follows :
The 10's complement can also be used to perform the BCD subtraction, as follows :
(C-401) Fig. 5.7.1 : 4-bit BCD subtractor using 9’s complement method
Ex. 5.7.2 : Perform the subtraction (9)10 – (4)10 in BCD 5.7.4 4-bit BCD Subtraction using 10’s
using the 10’s complement. Complement Method :
Soln. : (C-6193)
SPPU : Dec. 09.
University Questions.
Q. 1 Draw and explain 4-bit BCD subtractor using IC
7483. (Dec. 09, 5 Marks)
Circuit diagram :
The 4-bit BCD subtractor using the 10's complement method is shown in Fig. 5.7.2. This circuit consists of four 4-bit binary adders (IC 7483).
Adder – 1 :
Adder – 1 performs the following operations. Adder – 1 produces the 10's complement of number B.
Number B is inverted using the EX-OR gates and then Cin = 1 is added to it to obtain the 2's complement of B.
A3 A2 A1 A0 = 1010, i.e. (10)10. Adder – 1 adds 1010 and the 2's complement of number B. So actually it performs the subtraction (10 – B) to obtain the 10's complement of B. Thus we obtain the 10's complement of B at the output of Adder – 1.
(C-403) Fig. 5.8.1 : Block diagram of an n-bit comparator
Adders – 2 and 3 :
Adders 2 and 3 together form the normal BCD adder discussed in the earlier sections.
Adder – 2 adds number A to the 10's complement of B. Adder – 3 adds six (0110) to the result of this addition, if necessary.
Adder – 4 :
Depending on the status of the carry obtained from the combinational circuit, the adder behaves in the following manner.
If carry = 0, then due to the inverter used, A3 A2 A1 A0 = 1010, i.e. (10)10, and the carry input Cin = 1. Also the EX-OR gates will act as inverters. Hence adder 4 will add (10)10 and the 2's complement of the adder-3 output. So what it actually does is take the 10's complement (10 – result) of the result.
But if carry = 1, then adder – 4 will pass the adder-3 output unchanged.

5.8.1 A 2-Bit Comparator :   SPPU : Dec. 12.
University Questions.
Q. 1 Design a 2-bit magnitude comparator using logic gates. Assume that A and B are 2-bit inputs. The outputs of the comparator should be A > B, A = B and A < B.
(C-406(a)) Table 5.8.1 : Truth table for a 2-bit comparator
K-maps :
For A < B :
Simplified expression :
A < B = Ā1 Ā0 B0 + Ā1 B1 + Ā0 B1 B0
For A > B :
Simplified expression :
A > B = A0 B̄1 B̄0 + A1 A0 B̄0 + A1 B̄1
For A = B :
Simplification for output A = B :
= A0 B0 (A1 B1 + Ā1 B̄1) + Ā0 B̄0 (A1 B1 + Ā1 B̄1)
= (A1 B1 + Ā1 B̄1) (A0 B0 + Ā0 B̄0)
(A = B) = (A1 ⊙ B1) (A0 ⊙ B0), where ⊙ = EX-NOR
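A quick Python check (mine, not from the book) that the three simplified expressions above really implement a 2-bit magnitude comparator, by testing all sixteen input pairs against ordinary integer comparison.

def compare2(a1, a0, b1, b0):
    lt = (~a1 & b1 | ~a1 & ~a0 & b0 | ~a0 & b1 & b0) & 1
    gt = (a1 & ~b1 | a0 & ~b1 & ~b0 | a1 & a0 & ~b0) & 1
    eq = ((a1 ^ b1) ^ 1) & ((a0 ^ b0) ^ 1)      # XNOR of each bit pair
    return gt, eq, lt

for A in range(4):
    for B in range(4):
        gt, eq, lt = compare2(A >> 1, A & 1, B >> 1, B & 1)
        assert (gt, eq, lt) == (int(A > B), int(A == B), int(A < B))
print("2-bit comparator equations verified")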
ns e
Multiplexer is a special type of combinational circuit.
io dg
The block diagram of an n-to-1 multiplexer is shown in
Fig. 5.9.1(a).
As shown, there are n-data inputs (D0, D1, .....Dn-1), one In most of the electronic systems, the digital data is
output Y and m select inputs, S0, S1 .....Sm–1. available from more than one sources. It is necessary to
route this data over a single line.
The relation between the number of data inputs (n) and the number of select inputs (m) is as follows : n = 2^m.
Under such circumstances we require a circuit which selects one of the many sources at a time.
This circuit is nothing else but a multiplexer, which has
A multiplexer is a digital circuit which selects one of the
many inputs, one output and some select inputs.
n data inputs and routes (connects) it to the output.
Multiplexer improves the reliability of the digital system
The selection of one of the n inputs is done with the
because it reduces the number of external wired
help of the select inputs.
connections.
To select n inputs we need m select lines such that 2^m = n.
Depending on the digital code applied at the select 1. It reduces the number of wires, required to be used.
inputs, one out of n data sources is selected and
2. A multiplexer reduces the circuit complexity and cost.
transmitted to the single output Y.
3. We can implement many combinational circuits using
E is called as a strobe or enable input which is useful for
MUX.
cascading.
4. It simplifies the logic design.
It is generally an active low terminal, that means it will
perform the required operation when it is low. 5. It does not need the K maps for simplification.
1. 2 : 1 multiplexer. 2. 4 : 1 multiplexer.
3. 8 : 1 multiplexer. 4. 16 : 1 multiplexer.
5. 32 : 1 multiplexer.
5.10.1 2 : 1 Multiplexer :
ns e
Block diagram :
io dg
Fig. 5.10.1(a). It has two data inputs D0 and D1, one (C-415) Fig. 5.10.2 : Realization of 2 : 1 MUX using gates
select input S, an enable input and one output. 5.10.2 A 4 : 1 Multiplexer : SPPU : May 07.
The truth table of this MUX is shown in Fig. 5.10.1(b).
at le University Questions.
Truth table :
(C-416) Fig. 5.10.3 : 4 : 1 multiplexer
Write a more elaborate truth table for 2 : 1 MUX as
(C-416)(a) Table 5.10.2 : Truth table
shown in Table 5.10.1.
We can write down the Boolean expression by taking these two conditions into consideration as follows :
Y = Ē S̄ D0 + Ē S D1 = Ē (S̄ D0 + S D1)
The truth table tells us that if S1 S0 = 00, the data bit D0 is selected and routed to the output :
Y = D0   …. when S1 S0 = 00
Y = D1   …. when S1 S0 = 01
Y = D2 for S1 S0 = 10 and
Y = D3 for S1 S0 = 11
Each product term selects one data input when the corresponding select combination (S̄1S̄0, S̄1S0, etc.) is 1. Hence the logical expression for the output in the SOP form is as follows :
Y = S̄1 S̄0 D0 + S̄1 S0 D1 + S1 S̄0 D2 + S1 S0 D3
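A tiny Python sketch (mine, not from the book) of this 4 : 1 multiplexer equation, evaluated for all four select combinations :

def mux4(d, s1, s0):
    # Y = S1'S0'D0 + S1'S0 D1 + S1 S0'D2 + S1 S0 D3,  d = [D0, D1, D2, D3]
    return ((~s1 & ~s0 & d[0]) | (~s1 & s0 & d[1]) |
            (s1 & ~s0 & d[2]) | (s1 & s0 & d[3])) & 1

d = [0, 1, 1, 0]
print([mux4(d, (i >> 1) & 1, i & 1) for i in range(4)])   # [0, 1, 1, 0]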
(a) Block diagram
at le
This expression can be realized using basic gates as
Operating principle :
(C-417) Fig. 5.10.4 : Realization of 4 : 1 multiplexer For example if S2 S1 S0 = 0 1 1 then the data input D3 is
using basic gates selected and output Y will follow the selected input D3.
It has eight data inputs, one enable input, three select as follows :
1. It is used as a data selector to select one out of many
inputs and one output.
data inputs.
2. It is used for simplification of logic design.
3. In the data acquisition system. These chips consist of two 4 : 1 multiplexers. Each
4. In designing the combinational circuits.
multiplexer has a separate strobe (enable) input.
5. In the D/A converters.
There are two select inputs A and B which are common
6. To minimize the number of connections in a logic
to both the 4 : 1 multiplexers inside the ICs.
circuit.
5.11 Study of Different Multiplexer ICs : The strobe lines 1G and 2G are the active low enable
The multiplexer ICs available in market are given in lines for the two 4 : 1 multiplexer inside the IC. The truth
ns e
Table 5.11.1. table for each 4 : 1 MUX is shown in Table 5.11.2.
io dg
Table 5.11.1
(C-6185) Table 5.11.2 : Truth table of 4:1 MUX
IC Number Description Output
at le
74158
74153
Quad 2 : 1 Mux
Dual 4 : 1 Mux
(No inversion)
Inverted output
Same as input
ic w
(No inversion)
Fig. 5.11.1(a).
Multiplexer :
ns e
Ex. 5.12.1 : Implement a 16 : 1 multiplexer using 4 : 1
io dg
multiplexers.
Dec. 05, 6 Marks, Dec. 07, 7 Marks,
Dec. 08, May 11, 8 Marks
Soln. :
at le
Refer Fig. P. 5.12.1 for implementation.
ic w
Description : Ex. 5.12.2 : Design 12 : 1 mux using 4 : 1 multiplexers
(with enable inputs). Explain the truth table of
The select inputs S1 and S0 of the multiplexers 1, 2, 3 your circuit in short.
bl no
and 4 are connected together. Dec. 09, 8 Marks, Dec. 17, 6 Marks.
Soln. :
The select inputs S3 and S2 are applied to the select
The 12:1 MUX using 4 : 1 MUXes is as shown in
inputs S1 and S0 of MUX-5.
Fig. P. 5.12.2 and its truth table is as shown in
Table P. 5.12.2.
Pu K
The outputs Y1 ,Y2 ,Y3 ,Y4 are applied to the data inputs
D0, D1 , D2 and D3 of MUX-5 as shown in Fig. P. 5.12.1. (C-8134) Table P. 5.12.2 : Truth table
ch
Te
The operation can be summarized using Table P. 5.12.1. are connected together as shown in Fig. P. 5.12.2.
ns e
io dg
at le
ic w
bl no
Soln. :
The truth table of 14 : 1 MUX is as shown below :
in Fig. P. 5.12.3(b).
Design procedure :
(C-6264)
ns e
logic 1 level.
Step 3 : All the other input lines of multiplexer are
io dg
connected to logic 0 level.
Step 4 : The inputs (A, B, C) are to be connected to the
select inputs.
at le
ns e
Step 1 : Identify the decimal number corresponding to each
minterm. The decimal numbers corresponding the
io dg
minterms are 0, 2, 4 and 6.
Step 2 : Connect the data input lines 0, 2, 4 and 6 to logic 1
and the remaining lines (1, 3, 5, 7) to logic 0 as
(C-430) Fig. P. 5.13.2 : Implementation of a logic (C-432) Fig. P. 5.13.3(b) : Design table
expression using a multiplexer The data inputs D0 to D3 have been written at the top of
–
Te
Step 3 : Connect the variables A, B and C to the select the table and the two possible values A and A of the
inputs. unused variable A have been written.
In the eight boxes we enter the decimal numbers
Note : We can use IC 74151 which is an 8 : 1 multiplexer
corresponding to the minterms (0 to 7) serially.
to implement Boolean equations using 8 : 1 MUX.
Encircle those minterms corresponding to which the
5.13.2 Use of 4 : 1 MUX to Realize a 4 Variable output is 1 (minterms 1, 3, 4, 6).
Function :
Step 3 : Check each column in the design table :
It is very easy to use the 4 : 1 MUX to implement a 3 The columns of design table are inspected using the
variable function. following rules :
Rule 1 : If both the minterms in a column are not circled,
The procedure to be followed for this has been
then apply logic 0 to the corresponding data input.
illustrated through the following examples.
Note that there is no such column in our
Ex. 5.13.3 : Implement the logic function f (A, B, C) implementation table.
= m (1, 3, 4, 6) using a 4 : 1 multiplexer. Rule 2 : If only the minterm in the second row is encircled
(see columns 1 and 3) then “A” should be applied to
Soln. :
that data input. Hence we should apply A to the D0 and
The logic function to be implemented is D2 inputs.
Rule 3 : If only the minterm in the first row is encircled, (see Step 3 : Inspect each column in the design table :
columns 2 and 4), then A
– should be connected to that
Rule 1 : If both the minterms in a column are not circled,
data input. Hence we should apply A
– to the D1 and D3
then apply logic 0 to the corresponding data
inputs.
input. Note that there is no such column in our
Rule 4 : If both the minterms in a column are encircled, then
implementation table.
apply a logic 1 to the corresponding data input.
Rule 2 : If only the minterm in the second row is encircled
Note that there is no such column in our
then “A” should be applied to that data input.
implementation table.
Hence we should apply A to the D6 input.
ns e
Step 4 : Draw the logic diagram :
Rule 3 : If only the minterm in the first row is encircled,
The logic diagram is as shown in Fig. P. 5.13.3(a). –
then A should be connected to that data input
io dg
Note : We can use IC 74153 which is a dual 4 : 1
–
.Hence we should apply A to the D4, D5 and D7 inputs.
multiplexer in order to implement the Boolean
expressions using 4 : 1 MUX. Rule 4 : If both the minterms in a column are encircled,
then apply a logic 1 to the corresponding data 1
at le
5.13.3 Use of 8 : 1 MUX to Realize a 4 Variable
Function : input. Note that there is no such column in our
implementation table.
ic w
It is very easy to use the 8 : 1 MUX to implement a 4
Step 4 : Draw the logic diagram :
variable function.
The logic diagram is as shown in Fig. P. 5.13.4(b).
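A short Python sketch of this design-table (folding) method, assuming the variable A is removed from the select lines and B, C, D drive the 8 : 1 MUX selects; the minterm list passed at the end is only a hypothetical illustration, not the book's example.

def mux8_data_inputs(minterms):
    # column j of the design table holds minterm j (A = 0 row) and minterm j + 8 (A = 1 row)
    minterms = set(minterms)
    data = []
    for j in range(8):
        row_a0 = j in minterms            # first row (A = 0) encircled?
        row_a1 = (j + 8) in minterms      # second row (A = 1) encircled?
        if row_a0 and row_a1:
            data.append("1")              # Rule 4 : both encircled -> logic 1
        elif row_a1:
            data.append("A")              # Rule 2 : only the second row -> A
        elif row_a0:
            data.append("A'")             # Rule 3 : only the first row -> A'
        else:
            data.append("0")              # Rule 1 : neither encircled -> logic 0
    return data

print(mux8_data_inputs([1, 3, 5, 9, 12, 14]))   # hypothetical function, D0..D7 connections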
bl no
using 8 : 1 multiplexer.
f(A, B, C, D) = m (2, 4, 5, 7, 10, 14)
ch
Soln. :
The design table is as shown in Fig. P. 5.13.4(a). Ex. 5.13.5 : Implement the following function using 4 : 1
Soln. :
ns e
io dg
Step 3 : Implementation using 8 : 1 multiplexer :
The implementation is as shown in Fig. P. 5.13.6.
at le
ic w
(C-1330) Fig. P. 5.13.5
then proceed.
5.13.5 Implementing a Standard POS
Example 5.13.6 illustrates this concept. Expression using Multiplexer :
Ex. 5.13.6 : Implement the following Boolean expression
Te
ns e
(C-439) Fig. P. 5.13.7 : Implementation of
io dg
standard POS expression
(C-443)
We know that the don’t care conditions can be treated Step 2 : Implementation using 4:1 MUX :
as logic 1’s or 0’s.
ns e
Soln. : (C-7936) Fig. P. 5.13.11(a) : Implementation using 4:1 MUX
io dg
The don’t care conditions are assumed to be logic 1’s.
Ex. 5.13.12 : Implement the following expression using
The design table is shown in Fig. P. 5.13.10(a) and the
8 : 1 multiplexer : f(A, B, C, D) = m (2, 4, 6,
corresponding logic diagram is shown in 7, 9, 10, 11, 12, 15) (Dec. 12, 8 Marks)
at le
Fig. P. 5.13.10(b). Soln. :
f(A, B, C, D) = m (2, 4, 6, 7, 9, 10, 11, 12, 15)
ic w
Step 1 : Write the design table :
(C-3599)Table P. 5.13.12
bl no
(b) Implementation
(C-445) Fig. P. 5.13.10
ns e
io dg
Step 2 : Implementation using 8 : 1 multiplexer :
(C-7439) Fig. P. 5.13.14
5.14 Demultiplexers :
at le 5.14.1 Demultiplexer Principle :
ic w
Definition :
A de-multiplexer is a combinational circuit with one
data input, n outputs and m select inputs, which selects
bl no
Block diagram :
The block diagram of a 1 to n demultiplexer is shown in
ch
Soln. :
The enable input will enable the demultiplexer. If the Din is connected to Y0 if S0 = 0 and E = 1. Similarly Din is
enable (E) input is not active, then the demultiplexer connected to Y1 if S0 = 1 and E = 1.
does not work.
If E = 0, then both the outputs will be 0 irrespective of
The relation between the n output lines and m select the inputs, because the DEMUX is disabled.
lines is as follows :
m
n = 2
Equivalent Circuit :
ns e
way switch as shown in Fig. 5.14.1(b).
(C-448) Fig. 5.15.1 : A 1:2 demultiplexer
As shown in Fig. 5.14.1(b) a demultiplexer is equivalent
io dg
to a digitally controlled single pole, multiple way switch. Truth Table :
The data input gets connected to only one of the n The truth table of the 1 : 2 demultiplexer is as follows :
outputs at given instant of time via the single pole (C-7620) Table 5.15.1 : Truth table of demux 1 : 2
at le
multiple throw rotary switch as shown.
communication channel.
The 1 : 4 demultiplexer is shown in Fig. 5.15.2 and its
It is necessary to separate out this data at the receiving
truth table is given in Table 5.15.2.
ch
end.
Under such circumstances we require a circuit which
separates the multiple signals from the common one.
Demultiplexer improves the reliability of the digital (C-451) Fig. 5.15.2 : Block diagram of 1 : 4 demultiplexer
system because it reduces the number of external wired
(C-6189) Table 5.15.2 : Truth table for 1 : 4 demultiplexer
connections.
5.15.1 1 : 2 Demultiplexer :
Block diagram :
The block diagram of the 1 : 2 demultiplexer is shown in Fig. 5.15.1. It has one data input Din, one select input S0, one enable (E) input and two outputs Y0 and Y1.
In the 1 : 4 demultiplexer, Din is connected to Y0 when S1S0 = 00, it is connected to Y1 when S1S0 = 01, and so on. The other outputs remain zero.
The enable input needs to be high in order to enable A0 , A1 , A2 are the three address lines or the three input
the demux. If E = 0 then all the outputs will be low
lines. We have to apply the 3-bit binary data to these
irrespective of everything.
inputs.
5.15.3 1 : 8 Demultiplexer :
But when this IC is to be used as a 1 : 8 DEMUX we have
Block diagram and truth table :
to use these lines as the three select lines.
The block diagram of 1 : 8 demux is shown in
– –
Fig. 5.15.3(a). O0 to O7 are the 8-output active low lines.
ns e
– –
There are three enable inputs out of which E1 and E2 are
io dg
the active low enable inputs whereas E3 is an active high
enable input.
– –
We have to make E1 = E2 = 0 and E3 = 1 in order to
(C-8103) Fig. 5.15.3(b) : Truth table for 1 : 8 demux Fig. 5.15.4(b) : Pin names and description
The pin configuration of the 3 : 8 decoder IC 74138 is The truth table for IC 74138 is shown in Table 5.15.3.
shown in Fig. 5.15.4.
A0 = LSB, A2 = MSB, H = HIGH Voltage Level, L = LOW Voltage Level, X = Don't care condition.
The connection diagram of IC 74138 as a 1 : 8 DEMUX is as shown in Fig. 5.15.5.
This concept will be clear by solving the following example.

Ex. 5.16.1 : Obtain a 1 : 8 line demultiplexer using two 1 : 4 line demultiplexers.
Soln. :
The 1 : 8 demultiplexer using two 1 : 4 demultiplexers is shown in Fig. P. 5.16.1. This is similar to cascading of multiplexers.
(C-460) Fig. P. 5.16.1 : 1 : 8 demultiplexer using two 1 : 4 demultiplexers
The select lines S1 and S0 of the two 1 : 4 demultiplexers are connected in parallel with each other and S2 is used for selecting one of the two 1 : 4 demultiplexers. This type of cascading is used when only smaller demultiplexers are available.
A demultiplexer can also be used for implementing the logical expressions representing combinational circuits. Thus it is possible to design and implement many combinational circuits by using a demultiplexer and a few logic gates, similar to the cascading of multiplexers.
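The cascading arrangement of Ex. 5.16.1 can be sketched the same way. This is again only an illustrative Python model (the small behavioural helper demux_1to4 below is our own, not an IC model).

def demux_1to4(din, s1, s0, enable):
    out = [0, 0, 0, 0]
    if enable:
        out[(s1 << 1) | s0] = din
    return out

def demux_1to8(din, s2, s1, s0):
    """S2 enables either the lower (Y0-Y3) or the upper (Y4-Y7) 1:4 demux;
    S1 and S0 are common to both, exactly as in Fig. P. 5.16.1."""
    lower = demux_1to4(din, s1, s0, enable=(s2 == 0))
    upper = demux_1to4(din, s1, s0, enable=(s2 == 1))
    return lower + upper

print(demux_1to8(1, 1, 0, 1))   # Din = 1 appears only at output Y5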
2. We have to follow the design procedure given below to use a DEMUX for implementing a given logical expression.

Design procedure :
Step 1 : Identify the decimal number corresponding to each minterm in the given expression, as illustrated below.
If Y = A'·B'·C' + A·B'·C + A·B·C, the corresponding minterm numbers are 0, 5 and 7.
Step 2 : Connect Din to logic 1 permanently and OR together the demultiplexer outputs corresponding to these minterm numbers to obtain the given function.
Step 3 : The inputs (A, B, C) of the combinational circuit being designed are connected to the select inputs.
The following example will demonstrate this concept.

Ex. 5.16.2 : Implement a full adder using demultiplexer. (Dec. 09, Dec. 10, 4 Marks, May 10, 3 Marks)
Soln. :
Step 1 : Write the truth table of the full adder :
Step 3 : Implementation using a 1 : 8 demultiplexer :
Fig. P. 5.16.2 shows the implementation.
(C-6264) Fig. P. 5.16.2 : Full adder using 1 : 8 demultiplexer

Ex. 5.16.3 : Implement the full subtractor using a 1 : 8 demultiplexer. (Dec. 09, Dec. 10, 4 Marks, May 10, 3 Marks)
Soln. :
Step 1 : Write the truth table of the full subtractor :
The truth table of a full subtractor is as follows :
(C-7461) Table P. 5.16.3 : Truth table of a full subtractor
Connect Din to logic 1 permanently and connect A, B and Bin to the select inputs S2, S1, S0 respectively, as shown in Fig. P. 5.16.3.

Ex. 5.16.4 : Implement the following functions using a single demultiplexer :
f1(A, B, C) = Σm (0, 3, 7)
f2(A, B, C) = Σm (1, 2, 5)
Soln. :
The inputs A, B, C are connected to the select inputs.
Step 2 : Implement the circuit :
Fig. P. 5.16.4 shows the implementation.
(C-1323) Fig. P. 5.16.4
The implementation of the 2-bit comparator using a 1 : 16 demultiplexer is as shown in Fig. P. 5.16.5.
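A quick behavioural check of this procedure, for the two functions of the above example, can be written as follows. This is an illustrative Python sketch only; the helper names are our own and A is taken as the MSB select line.

def demux_1to8(din, a, b, c):
    """Behavioural 1:8 demux with Din tied to 'din' and the function
    inputs A, B, C used as the select lines (A = MSB)."""
    y = [0] * 8
    y[(a << 2) | (b << 1) | c] = din
    return y

def f_from_minterms(a, b, c, minterms):
    """Din = 1 and the outputs corresponding to the minterms of the
    function are ORed together, as in the design procedure."""
    y = demux_1to8(1, a, b, c)
    return int(any(y[m] for m in minterms))

# f1 = sum of minterms (0, 3, 7), f2 = sum of minterms (1, 2, 5)
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, f_from_minterms(a, b, c, {0, 3, 7}),
                  f_from_minterms(a, b, c, {1, 2, 5}))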
(C-1969) Fig. P. 5.16.5 : 2 bit comparator using 1 : 16 DEMUX

5.17 Encoders :

Definition :
Encoder is a combinational circuit which is designed to perform the inverse operation of the decoder.

Block diagram :
(C-466) Fig. 5.17.1 : Block diagram of an encoder
The encoder accepts an n input digital word and converts it into an m bit digital word at the output.
For example a BCD number applied at the input can be converted into a binary number at the output. The internal combinational circuit of the encoder is designed accordingly.

5.17.1 Types of Encoders :
The types of encoders which we are going to discuss are as follows :
1. Priority encoders.
2. Decimal to BCD encoder.
3. Octal to binary encoder.

5.18 Priority Encoder :
University Questions.
Q. 1 What is priority encoder ? (Dec. 08, 2 Marks)

Definition :
This is a special type of encoder with priorities assigned to all its input lines. If two or more input lines are “1” at the same time, then the input line with the highest priority will be considered.

Block diagram and truth table :
The block diagram of a priority encoder is shown in Fig. 5.18.1(a) and its truth table is shown in Fig. 5.18.1(b).
(a) Block diagram of a priority encoder
(b) Truth table of a priority encoder
(C-467) Fig. 5.18.1
Priorities are given to the input lines. If two or more input lines are “1” at the same time, then the input line with the highest priority will be considered.
There are four inputs, D0 through D3, and two outputs, Y1 and Y0. Out of the four inputs D3 has the highest priority and D0 has the lowest priority.
That means if D3 = 1 then Y1 Y0 = 11 irrespective of the other inputs. Similarly if D3 = 0 and D2 = 1 then Y1 Y0 = 10 irrespective of the other inputs.
Carefully go through the truth table shown in Fig. 5.18.1(b) to get the feel of the priority encoder operation.
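The priority behaviour described above can be captured in a few lines of Python. This is an illustrative sketch only; the input and output names follow the truth table of Fig. 5.18.1.

def priority_encoder_4to2(d3, d2, d1, d0):
    """4-line to 2-line priority encoder; D3 has the highest priority.
    Returns the outputs (Y1, Y0)."""
    if d3:
        return (1, 1)
    if d2:
        return (1, 0)
    if d1:
        return (0, 1)
    return (0, 0)      # only D0 (or no input) active

print(priority_encoder_4to2(1, 0, 1, 1))   # (1, 1) because D3 wins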
The block diagram of a decimal to BCD encoder is shown in Fig. 5.18.2.
The truth table for a decimal to BCD encoder is as given in Table 5.18.1.
(C-470(a)) Table 5.18.1 : Truth table for decimal to BCD encoder

Pin configuration and logic symbol :
(C-472) Fig. 5.18.3
A1 to A9 are the active low inputs and A, B, C, D are the active low outputs. Therefore bubbles have been added in Fig. 5.18.3(a).
Truth table :
(C-8309) Fig. 5.18.3(c) : Truth table for 74147
A decoder has “n” inputs and a maximum of 2^n outputs.
(C-479) Fig. 5.19.1 : Block diagram of a decoder
The Boolean expressions for the four outputs of a 2 line to 4 line decoder are,
D0 = A'·B'   D1 = A'·B
D2 = A·B'   and   D3 = A·B
A decoder is identical to a demultiplexer without any data input. It performs operations which are exactly opposite to those of an encoder.

Typical applications :
1. Code converters.

Block diagram :
(C-480) Fig. 5.19.2 : Block diagram of a 2 line to 4 line decoder

Truth table :
Table 5.19.1 shows the truth table which explains the operation of the decoder. It shows that each output is “1” for only a specific combination of inputs.
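A behavioural model of the 2 : 4 decoder, matching the truth table and the Boolean expressions given above, may be sketched as follows (illustrative Python only).

def decoder_2to4(a, b):
    """2:4 decoder: exactly one output is 1 for each input combination.
    D0 = A'B', D1 = A'B, D2 = AB', D3 = AB."""
    return (int(not a and not b),
            int(not a and b),
            int(a and not b),
            int(a and b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, decoder_2to4(a, b))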
5.19.2 Demultiplexer as Decoder :
We can use a demultiplexer as a decoder. Let us see how to operate a 1 : 4 demux as a 2 : 4 decoder.
Consider Fig. 5.19.3 which shows a 1 : 4 demultiplexer. Din is the data input, S1 S0 are the select lines and Y3 through Y0 are the outputs.

5.19.3 3 to 8 Line Decoder :

Block diagram :
The block diagram of a 3 to 8 line decoder is shown in Fig. 5.19.4. A, B and C are the three inputs whereas D0 through D7 are the eight outputs.

Truth table :
The truth table of the 3 to 8 line decoder is shown in Table 5.19.2.
(C-6477) Table 5.19.2 : Truth table for 3 to 8 line decoder

Boolean expressions :
Each output equals one minterm of the inputs A, B, C (D0 = A'·B'·C' … D7 = A·B·C).

5.19.4 1 : 8 DEMUX as 3 : 8 Decoder :
In order to operate a 1 : 8 demux as a 3 : 8 line decoder, the connections are to be made as shown in Fig. 5.19.5.
The data input Din is connected to logic 1 permanently. The three select inputs S0, S1 and S2 will act as the three input lines of the decoder and Y0 to Y7 are the 8 output lines.

5.19.5 IC 74138 / IC 74238 as 3 : 8 Decoder :
The pin configuration of the IC is shown in Fig. 5.19.6.
O0 to O7 are the 8 active low output lines. These lines remain high except the one selected output, which goes low.
There are three enable inputs out of which E1 and E2 are the active low enable inputs whereas E3 is an active high enable input.
We have to make E1 = E2 = 0 and E3 = 1 in order to enable the IC.
Pin names and description :
E3 : Enable Input (Active HIGH)
O0 – O7 : Outputs (Active Low)
(b) Pin names and description
(C-489) Fig. 5.19.6
The truth table for IC 74138 is shown in Table 5.19.4.
(C-6190) Table 5.19.4 : Truth table of 74138

Ex. 5.19.1 : Implement the following Boolean function using a 3 : 8 decoder and external gates.
f (A, B, C) = Σm (2, 4, 5, 7)
Soln. :
The decoder produces minterms. The outputs Y2, Y4, Y5 and Y7 are ORed to produce the required output.
The logic implementation of the given logic function is shown in Fig. P. 5.19.1.
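The solution of Ex. 5.19.1 can be cross-checked behaviourally. This is an illustrative Python sketch; an active high decoder model is assumed here, whereas the real IC 74138 outputs are active low.

def decoder_3to8(a, b, c):
    """Active-high 3:8 decoder: output i is 1 only for minterm i of (A,B,C)."""
    index = (a << 2) | (b << 1) | c
    return tuple(int(i == index) for i in range(8))

def f(a, b, c):
    """f(A,B,C) = sum of minterms (2, 4, 5, 7), built as decoder + OR gate."""
    y = decoder_3to8(a, b, c)
    return int(y[2] or y[4] or y[5] or y[7])

print([f(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
# 1 only for the input combinations 010, 100, 101 and 111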
(C-486) Fig. P. 5.19.1 : Implementation of Boolean equation using decoder and gate
A0 = LSB, A2 = MSB, H = HIGH Voltage Level, L = LOW Voltage Level, X = Don't care condition
Fig. 5.19.7 shows the connection diagram for IC 74138 used as a 3 : 8 decoder.

Ex. 5.19.2 : Implement the following multiple output function using a suitable decoder.
f2 (A, B, C) = Σm (1, 5, 6)
Soln. :

Ex. 5.19.3 : Design a full adder using 3 : 8 decoder IC 74138. (Dec. 13, Dec. 16, 6 Marks)
Soln. :
The truth table of a full adder is shown in Table P. 5.19.3.
(C-7621) Table P. 5.19.3 : Truth table of full adder

Ex. 5.19.4 : Implement a 3-bit binary to gray code converter using decoder IC 74138. (May 11, 8 Marks)
Soln. :
Step 1 : Write the truth table relating the binary and gray codes :
(C-8138) Table P. 5.19.4 : Truth table relating the binary and gray codes
All the enable terminals are connected to their active levels.
(C-491) Fig. P. 5.19.3 : Full adder implemented using IC 74138

But the outputs of IC 74138 are active low. Hence we have to modify the expressions as follows :
G2 = O4 + O5 + O6 + O7 = ((O4 + O5 + O6 + O7)')'
But (A + B)' = A' · B'
Therefore G2 = (O4' · O5' · O6' · O7')'   …(1)
Similarly G1 = (O2' · O3' · O4' · O5')'   …(2)
And G0 = (O1' · O2' · O5' · O6')'   …(3)

Logic diagram :
(C-492) Fig. P. 5.19.4 : Binary to gray code converter using decoder 74138
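Equations (1) to (3) can be verified behaviourally as shown below. This is an illustrative Python sketch; the decoder model assumes the enable conditions are already satisfied.

def decoder_74138_active_low(a2, a1, a0):
    """Active-low 3:8 decoder outputs: O'_i = 0 only for minterm i."""
    index = (a2 << 2) | (a1 << 1) | a0
    return tuple(int(i != index) for i in range(8))

def nand(*bits):
    return int(not all(bits))

def binary_to_gray(b2, b1, b0):
    o = decoder_74138_active_low(b2, b1, b0)
    g2 = nand(o[4], o[5], o[6], o[7])     # equation (1)
    g1 = nand(o[2], o[3], o[4], o[5])     # equation (2)
    g0 = nand(o[1], o[2], o[5], o[6])     # equation (3)
    return (g2, g1, g0)

for n in range(8):
    b = ((n >> 2) & 1, (n >> 1) & 1, n & 1)
    print(b, "->", binary_to_gray(*b))    # matches B2, B2 xor B1, B1 xor B0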
at le
Fig. P. 5.19.4. Note that each equation represents a 4-
decoder.
Truth table : gates. Hence the output equations for “Borrow” and
The truth table for full subtractor is as shown in “Difference” are modified as,
–– –– –– ––
Table P. 5.19.5.
Difference D = O1 O2 O4 O7
Te
–– –– –– ––
= O1 O2 + O4 O7
–– –– –– ––
Borrow B = O1 O2 O3 O7
–– –– –– ––
= O1 O2 + O3 O7
–– –– –– ––
The A, B, Bin inputs are applied to the A0, A1 and A2
= O1 O2 O3 O7
inputs of the decoder.
The outputs of IC 74138 are active low. Hence they Ex. 5.19.6 : Implement Gray to Binary code converter
using suitable decoder.
should be applied to NAND gates as shown in
A decoder is generally not preferred for realizing a single Boolean function, because a NAND gate is required to be used on the output side. But the decoder can be preferred over a MUX in realizing multiple Boolean functions simultaneously.

5.20 Case Study : Combinational Logic Design of BCD to 7 Segment Display Controller :

5.20.1 Seven Segment LED Display :
All of us know what a seven segment display is.
(B-2342) Fig. 5.20.1 : Standard form of seven segment display
We can display any number from 0 to 9 by turning on various combinations of the segments as shown in Table 5.20.1.
(B-2343(a)) Table 5.20.1
For example, if we want to display the number 7 then the segments a, b and c should be turned on and all other segments should be off. Similarly the other numbers can be displayed as shown in Fig. 5.20.2.

5.20.3 Common Anode Display :
A common anode seven segment display is as shown in Fig. 5.20.3.
Here, as the name indicates, the anode terminals of all LED segments are connected together.
(C-496) Fig. 5.20.4 : Common cathode LED display
As the name indicates, the cathode terminal is made common and all the anode terminals are brought out, to be connected to the positive supply voltage VCC.

5.20.5 Use of a Decoder for Driving the Seven Segment Display :
The use of a BCD to seven segment decoder / driver for driving the display is illustrated in the figure.
The decoder accepts a four bit BCD count from a counter, converts it to a seven bit code suitable for the seven segment display (segments a to g) and drives the display.

5.20.6 BCD to Seven Segment Display Driver (Common Anode Display) :
Let us assume that the type of display is common anode.
Hence the output of the converter should be “0” if a segment is to be turned on.
Realization :

Review Questions

Q. 1 Explain the working of a half adder. Draw its logic diagram.
Q. … Draw the truth table of half subtractor.
Q. 10 What is meant by a full subtractor ? Draw a full subtractor circuit.
Q. 12 … using IC 7483. Justify.
Q. 24 Explain with diagram the working of 1 to 16 demultiplexer.
Q. 25 Explain with pin-diagram the following ICs : IC 74151, IC 74138.
Q. 28 With a neat block diagram explain the function of an encoder.
Q. 29 What is meant by a priority encoder ? Give example.
Q. 30 What do you mean by a ‘Decoder’ ? Give its applications.
Chapter 6

Flip Flops
Syllabus
Introduction to sequential circuits : Difference between combinational circuits and sequential circuits; Conversion from one type of FF to another, Study of flip flops with regard to asynchronous and synchronous, Preset & clear, Master slave configuration; Study of 7474, 7476 flip flop ICs.
Case study : Use of sequential logic design in a simple traffic light controller.
Chapter Contents
6.1 Introduction
6.2 Triggering Methods
6.3 Gated Latches (Level Triggered SR Flip Flop)
6.4 The Gated S-R Latch (Level Triggered S-R Flip Flop)
6.5 The Gated D Latch (Clocked D Flip Flop)
6.6 Gated JK Latch (Level Triggered JK Flip Flop)
6.7 Edge Triggered Flip Flops
6.8 Edge Triggered D Flip Flop
6.9 Edge Triggered J-K Flip Flop
6.10 Toggle Flip Flop (T Flip Flop)
6.11 Master Slave (MS) JK Flip Flop
6.12 Preset and Clear Inputs
6.13 Various Representations of Flip Flops
6.14 Excitation Table of Flip-Flop
6.15 Conversion of Flip Flops
6.16 Applications of Flip Flops
6.17 Study of Flip-Flop ICs
6.18 Analysis of Clocked Sequential Circuits
6.19 Design of Clocked Synchronous State Machine using State Diagram
6.20 Case Study : Use of Sequential Logic Design in a Simple Traffic Light Controller
6.1 Introduction :

Combinational circuits :
Definition :
A combinational circuit is a logic circuit the output of which depends only on the combination of the inputs. The output does not depend on the past values of inputs or outputs.
Hence combinational circuits do not require any memory (to store the past values of inputs or outputs). Till now we have discussed only the combinational circuits.
The output of a combinational circuit at any instant of time depends only on the levels present at the input terminals. It does not depend on the past status of the inputs.
The combinational circuits do not use any memory. Therefore the previous states of the inputs do not have any effect on the present state of the circuit.

Fig. 6.1.1 shows the block diagram of a sequential circuit which includes the memory element in the feedback path.

Present state of sequential circuit :
The data stored by the memory element at any given instant of time is called as the present state of the sequential circuit.

Next state :
The combinational circuit shown in Fig. 6.1.1 operates on the external inputs and the present state to produce new outputs. Some of these new outputs are stored in the memory element and are called as the next state of the sequential circuit.
The most important part of the sequential circuit is the memory element. The memory element of Fig. 6.1.1 is known as Flip Flop (FF). It is the basic memory element.
6.1.1 Clock Signal :
In the sequential circuit, the timing parameter also needs to be taken into consideration.
The clock signal repeats itself after every T seconds. Hence the clock frequency is f = 1/T.
Sr. No. / Parameter / Combinational circuits / Sequential circuits
1. Output depends on : Inputs present at that instant of time / Present inputs and past inputs/outputs
2. Memory : Not necessary / Necessary
3. Clock input : Not necessary / Necessary
4. Examples : Adders, subtractors, code converters / Flip flops, shift registers, counters

6.1.3 1-Bit Memory Cell (Basic Bistable Element) :  SPPU : Dec. 18

Flip-flop is also known as the basic digital memory circuit.
It has two stable states, namely logic 1 state and logic 0 state. We can design it either using NOR gates or NAND gates.

Circuit diagram :
A flip-flop can be designed by using the fundamental circuit shown in Fig. 6.1.3.
NAND gates 1 and 2 are basically acting as inverters. Hence this circuit is called as a cross coupled inverter.
Output of gate 1 is connected to the input of gate 2 and output of gate 2 is connected to the input of gate 1 as shown in Fig. 6.1.3.

From this circuit we can draw the following conclusions :
The outputs of the circuit (Q and Q') will always be complementary. That means if Q = 0 then Q' = 1 and vice versa. They will never be equal; Q = Q' = 0 or 1 is an invalid state.
This circuit has two stable states. One of them corresponds to Q = 1, Q' = 0 and it is called as 1 state or set state. Whereas the other state corresponds to Q = 0, Q' = 1 and it is called as 0 state or reset state.
This property of the circuit shows that it can store 1 bit of digital information. Therefore it is called as a 1-bit memory cell.

6.1.4 Latch :
The cross coupled inverter of Fig. 6.1.3 is capable of locking or latching the information. Hence this circuit is also called as a latch.
The disadvantage of the cross coupled inverter circuit is that we cannot enter the desired digital data into it. This disadvantage can be overcome by modifying the circuit as shown in Fig. 6.1.4.
This modification will allow us to enter the desired digital data into the circuit.
Operation :

Case I : S = R = 0 (No change) :
The outputs of gates 1 and 2 will become 1.
Let Q = 0 and Q' = 1 initially. Hence both the inputs to gate 3 are 1 and the inputs to gate 4 are (0, 1).
So the gate-3 output i.e. Q = 0 and the gate-4 output i.e. Q' = 1. Thus there is no change in the outputs.

Case II : S = 1, R = 0 (Set) :
If S = 1, R = 0 then one of the inputs to gate-3 will be 0. This will force the Q output to 1. Hence both the inputs to gate 4 will be 1, which forces Q' to 0.

Case III : S = 0, R = 1 (Reset) :
If S = 0, R = 1 then one of the inputs to gate-4 will be 0. This will force the Q' output to 1. Hence both the inputs to gate 3 will be 1. This forces Q to 0.

Case IV : S = R = 1 (Prohibited) :
If S = R = 1 then the outputs of gates 1 and 2 will be zero. Hence one of the inputs to gates 3 and 4 will be 0. So both the outputs Q and Q' will try to become 1. This is not allowed as Q and Q' should be complementary. Hence the S = R = 1 condition is prohibited.

6.1.5 Symbol and Truth Table of S-R Latch :
The symbol and truth table of the S-R latch are as shown in Figs. 6.1.5(a) and (b) respectively.
(C-569) (a) Symbol
Qn and Q'n : Present states ;  Qn + 1 and Q'n + 1 : Next states

The summary of operation of an S-R latch is as follows :
For S = R = 0 the latch output does not change.
S = 1, R = 0 is called as the “Set” condition as Q = 1 and Q' = 0.
S = 0, R = 1 is called as the “Reset” condition as Q = 0 and Q' = 1.
For S = R = 1 the output is unpredictable. This condition should therefore be avoided.

Race condition :
The condition S = R = 1 is called as the “Race” condition. When any one input to a NOR gate is 1, its output becomes 0. Thus both the outputs will try to become 0. This is called as the RACE condition.
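The set, reset and hold behaviour discussed above can be reproduced with a small structural simulation. This is an illustrative Python sketch of the circuit of Fig. 6.1.4; the settling loop is only a modelling device, not part of the hardware.

def nand(a, b):
    return int(not (a and b))

def sr_latch(s, r, q, q_bar):
    """S-R latch of Fig. 6.1.4: gates 1 and 2 invert S and R,
    gates 3 and 4 are the cross-coupled NAND pair."""
    s_bar, r_bar = nand(s, s), nand(r, r)            # gates 1 and 2
    for _ in range(4):                               # let the feedback settle
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)    # gates 3 and 4
    return q, q_bar

q, qb = 0, 1
q, qb = sr_latch(1, 0, q, qb); print(q, qb)   # set   -> 1 0
q, qb = sr_latch(0, 0, q, qb); print(q, qb)   # hold  -> 1 0
q, qb = sr_latch(0, 1, q, qb); print(q, qb)   # reset -> 0 1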
In the truth table Qn and Q'n represent the present states of the outputs i.e. these are the outputs before applying a new set of inputs. Qn + 1 and Q'n + 1 represent the next states of the outputs i.e. the outputs after applying a new set of inputs.

6.1.6 Characteristic Equation :
One way of explaining the behaviour of a latch or flip flop is to use its truth table. The truth table is also called as the excitation table.
Another way of doing it is to use a special type of equations called characteristic equations.
Refer to the truth table of the SR latch and write the K-map for the next state of the output i.e. Qn + 1 as shown in Fig. 6.1.6.
(C-570) Fig. 6.1.6 : K-map for next state Qn + 1 of SR latch

6.1.7 S-R Latch using NAND Gates :
Logic diagram :
The two NAND gates are cross connected, in an identical manner as that in the NOR latch. Fig. 6.1.7(a) shows the NAND latch and Fig. 6.1.7(b) shows its truth table.

Case I : S = 0, R = 0 :
(C-572) Fig. 6.1.8(a) : S = 0, R = 0
When any one input of a NAND gate becomes 0, its output is forced to 1. Here S = R = 0, so Q and Q' both will be forced to be equal to 1.
This is an indeterminate state and hence should be avoided. This is also called as the Race condition.

Case II : S = 0, R = 1 : Reset
(C-573) Fig. 6.1.8(b)
Hence Q = 0. Thus with S = 0 and R = 1 the outputs are Q = 0 and Q' = 1. This is the reset condition.

Case III : S = 1, R = 0 : Set
Thus with S = 1 and R = 0 the outputs are Q = 1 and Q' = 0. This is the set condition.
(C-574) Fig. 6.1.8(d)

Case IV : S = 1, R = 1 : No change
Q'n + 1 = (R · Qn)'  and  Qn + 1 = (S · Q'n)'
Using De-Morgan's theorem,
Q'n + 1 = R' + Q'n  and  Qn + 1 = S' + Qn
Substituting R' = 0 and S' = 0 (i.e. S = R = 1) we get,
Qn + 1 = 0 + Qn = Qn  and  Q'n + 1 = 0 + Q'n = Q'n
Hence there is no change in the outputs.
If S = R = 0, both the outputs try to become 1. This is the RACE condition and should be avoided.
For S = 0, R = 1, the outputs are Q = 0, Q' = 1 and it is the reset condition.

The triggering methods are classified as follows :
1. Level triggered circuits.
2. Edge triggered circuits.
Circuits which respond to the steady levels of the clock are called as level triggered latches or flip-flops. They do not respond to the rising or falling edges of the clock. They only respond to the steady HIGH or LOW levels of the clock signal.
Fig. 6.2.1(a) shows the symbol of a level triggered SR flip flop and Fig. 6.2.1(b) shows the clock signal applied at its input.
(C-576) Fig. 6.2.1 : Concept of level triggering

6.2.2 Types of Level Triggered Flip-flops :
There are two types of level triggered flip-flops :
ns e
triggered flipflops. A level voltage (0 or 1) or a clock signal can be applied
to the enable (E) input.
io dg
These flip-flops are therefore said to be edge sensitive
or edge triggered rather and not level triggered. These flipflops will respond to the inputs if and only if
we apply an active level at the enable input. This active
level can be either 0 or 1 depending on the type of
at le
flip-flop.
Such flipflops are called as level triggered flipflops or
gated latches or clocked flipflops.
ic w
6.3.1 Types of Level Triggered (Clocked)
Flip Flops :
bl no
(C-577) Fig. 6.2.2 There are two types of level triggered latches :
1. Positive level triggered.
The rectangular signal applied to the clock input of a
2. Negative level triggered.
Pu K
The edge triggered flip-flops do not respond to the The gated S-R latch is shown in Fig. 6.4.1. It is also called
steady state high or low level in the clock signal at all. as clocked SR flip flop.
6.2.4 Types of Edge Triggered Flip Flops : It is basically the S-R latch using NAND gates with an
additional “enable” (E) input. It is also called as level
There are two types of edge triggered flip flops :
triggered S-R FF.
1. Positive edge triggered flip flops.
The outputs of basic S-R latch used to change instantly
2. Negative edge triggered flip flops. in response to any change made at the input. But this
Positive edge triggered flip flops will allow its outputs to does not happen with the gated S-R latch.
change in response to its inputs only at the instants For this circuit, the change in output will take place if
corresponding to the rising edges of clock (or positive and only if the enable input (E) is made active.
spikes). This circuit being positive level triggered, will respond to
Its outputs will not respond to change in inputs at any changes in input only if the enable input is held at logic
Negative edge triggered flip flops will respond only to In short this circuit will operate as an S-R latch if E = 1
(Enable input is active) but there is no change in the
the negative going edges (or spikes) of the clock.
outputs if E = 0 (Enable input is inactive).
The symbol and truth table of the gates S-R latch are as
(C-578) Fig. 6.4.1 : Gated S - R latch
shown in Fig. 6.4.2(a) and Fig. 6.4.2(b) respectively.
Operation :
ns e
Case I : S = X, R = X, E = 0 (No change)
io dg
Since enable E = 0, the outputs of NAND gates 3 and 4
will be forced to be 1 irrespective of the values of S and
(C-579) Fig. 6.4.2(a) : Symbol for S - R latch
R.
(C-7827) Fig. 6.4.2(b) : Truth table of gated S - R latch
That means R = S = 1. These are the inputs of the basic
at le
S-R latch enclosed in the dotted box in Fig. 6.4.1.
–
Hence the outputs of NAND latch i.e. Q and Q will not
ic w
change. Thus if E = 0, then there is no change in the
output of the gated S-R latch.
bl no
Case II : S = R = 0 E = 1 : No change
there will be no change in the state of outputs. Fig. 6.4.3(a) shows the circuit diagram of a negative level
Thus for S = R = 0 the output state of this flip-flop triggered SR flip-flop.
remains unchanged. It is the same circuit that we discussed in the previous
Te
Since S = 0, output of NAND-3 i.e. R = 1. And as R = 1 Due to the additional inverter connected to the enable
and E = 1 the output of NAND-4 i.e. S = 0. terminal, this circuit becomes sensitive to the low level
Output of NAND 3 i.e. R = 0 and output of NAND 4 the square input is at its low (0) level.
i.e. S = 1.
Case V : S = 1, R = 1, E = 1 (RACE)
Truth table :
Truth table for the gated D latch is given in Table 6.5.1.
ns e
(C-8061) Table 6.5.1 : Truth table for the gated D latch
(b) Symbol
io dg
(C-580) Fig. 6.4.3 : Negative level triggered S - R latch
at le
In some applications, the S and R inputs will always be complementary, i.e. when S = 0, R = 1 and when S = 1, R = 0. That means S = R'.
For such applications we can use the gated D latch. This is also called as the level triggered D flip-flop or clocked D flip-flop.
Note that the D latch is the simple gated S-R latch with a small modification. A NAND inverter is connected between its S and R inputs as shown in Fig. 6.5.1(a).

Operation :
If E = 0 then the latch is disabled. Hence there is no change in the output.
On the other hand if E = 1 and D = 1, then S = 1 and R = 0. This will set the latch and Qn + 1 = 1, Q'n + 1 = 0 irrespective of the present state.
From the truth table it is evident that the Q output is the same as the D input; in other words the Q output follows the D input.
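The “Q follows D” behaviour is easy to model behaviourally (illustrative Python sketch, treating the latch as a whole rather than gate by gate).

def gated_d_latch(d, e, q):
    """Gated (level triggered) D latch: when E = 1 the output follows D,
    when E = 0 the previous output is held."""
    return d if e == 1 else q

q = 0
for d, e in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    q = gated_d_latch(d, e, q)
    print(d, e, "->", q)        # 1, then held at 1 twice, then 0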
Te
University Questions
Q. 1 Draw JK flip flop using gates and explain race
around condition with the help of timing diagram.
ns e
(May 19, 6 Marks) 6.6.1 Race Around Condition in JK Latch :
SPPU : May 07, Dec. 13, May 15, Dec. 15, May 19.
io dg
Logic diagram :
.University Questions.
The JK latch using NAND gates is shown in Fig. 6.6.1(a). Q. 1 What is race-around condition ? How will you
It consists of the basic SR latch and an enable input. It is eliminate it ? Explain with the help of necessary
also called as level triggered JK flip-flop. circuit diagram and timing diagram.
at le –
Note the outputs Q and Q have been fed back and
connected to the inputs of NAND gates 4 and 3
Q. 2
(May 07, 8 Marks)
What is race around condition ? How it can be
avoided ? Convert D flip-flop to T flip-flop.
ic w
respectively. (Dec. 13, 6 Marks)
The JK latch of Fig. 6.6.1 (a) responds, to the input Q. 3 What is race around condition ? Explain with the
changes if a positive level is applied at the enable (E) help of timing diagram. How is it removed in basic
bl no
Hence the latch is disabled and there is no change in Q. But flip flop is a sequential circuit which generally
Interval t1 - t2 : samples its inputs and changes its outputs only at
particular instants of time and not continuously.
During this interval J = 1, K = 0 and E = 1.
The flip-flops are therefore said to be edge sensitive or
Hence this is a set condition and Q becomes 1.
edge triggered rather than being level triggered like
Interval t2 - t3 : Race around
latches.
ns e
At instant t2, J = K = 1 and E = 1 Hence the JK latch is in
6.7 Edge Triggered Flip Flops :
–
io dg
the toggle mode and Q becomes low (0) and Q = 1.
We have already discussed the concept of edge
These changed outputs get applied at the inputs of
triggering.
NAND gates 3 and 4 of the JK latch. Thus the new
inputs to Gates 3 and 4 are : For the edge triggered flip flops, it is necessary to apply
at le –
NAND - 3 : J = 1, E = 1, Q = 1,
NAND - 4 : K = 1, E = 1, Q = 0.
the clock signal (timing signal) in the form of sharp
positive and negative spikes instead of in the form of a
ic w
rectangular pulse train.
Hence R will become 0 and S will become 1.
Such sharp spikes are shown in Fig. 6.7.1. These spikes
Therefore after a time period corresponding to the
bl no
These changed output again get applied to the inputs Thus the passive differentiator acts as a pulse shaping
of NAND-3 and 4 and the outputs will toggle again. circuit.
Thus as long as J = K = 1 and E = 1, the outputs will
ch
Interval t3 - t4 :
Circuit diagram : Due to these sharp positive spikes applied at point “A”,
Fig. 6.7.2(a) shows the circuit diagram of a positive edge the gated S-R latch in the S-R flip flop will be enabled
triggered S-R flip flop and Fig. 6.7.2(b) shows the logic only for a short duration of time when the positive spike
symbol for it. is present at A. (at instants t1, t2 … in Fig. 6.7.3)
ns e
will also be identical to that of the gated S-R latch with
io dg
only one change.
(a) Positive edge triggered S-R flip flop The “enable” input will now be replaced by clock input.
And the outputs will change only if a positive edge is
applied to the clock input.
Operation :
is shown in Table 6.7.1.
Note that the SR flip flop of Fig. 6.7.2(a) consists of a
(C-6260) Table 6.7.1 : Truth table of a positive
differentiator circuit and a gated S-R latch (level edge triggered SR flip flop
Pu K
negative spikes.
ns e
earlier. The NAND gates 1 through 5 form a D latch.
io dg
The differentiator circuit is slightly modified in order to The differentiator converts the clock pulses into positive
make this flip flop respond to the negative (falling) and negative spikes and the combination of D and R2
edges of the clock input. will allow only the positive spikes to pass through to
point “A”, by blocking the negative spikes.
Fig. 6.7.4 shows the circuit symbol of the negative edge
at le
triggered S-R flip flop and Table 6.7.2 shows its truth
table.
ic w
bl no
(C-590(a)) Fig. 6.7.4 : Circuit symbol of negative (a) Positive edge triggered D-flip flop
edge triggered SR FF
Pu K
(b) Symbol
(C-592(a))Fig. 6.8.1
Operation :
Te
Transparent latch :
ns e
6.9 Edge Triggered J-K Flip Flop :
io dg
Edge triggered J-K flip flops are of two types :
From the truth table it is clear that the Q output of the flip 1. Positive edge triggered JK flip flop
flop follows the D input. 2. Negative edge triggered JK flip flop
at le
6.8.2 Negative Edge Triggered D Flip Flop :
Symbol :
6.9.1
Logic diagram :
Positive Edge Triggered JK Flip Flop :
ic w
The symbol for negative edge triggered D flip flop is The circuit diagram of the positive edge triggered JK flip
shown in Fig. 6.8.2.
flop is shown in Fig. 6.9.1(a) and its circuit symbol is
bl no
(C-593)
clock pulses. This is how it is different from the positive
edge triggered flip flop.
Otherwise, the operation of the negative edge triggered
Te
Operation :
– –
NAND gates 1 and 2 form the basic S-R latch. The other If the previous state of Q and Q is Q = 1 and Q = 0
two NAND gates (3 and 4) have three inputs each. The then
–
outputs Q and Q are fed back to the inputs of gates 4 S = 1 · 1 · 1 = 0 and R = 0 · 0 · 1 = 1
and 3 respectively.
Therefore according to the operation of S-R latch if
Referring to Fig. 6.9.1(a) we can write the mathematical –
S = 0, R = 1 then Q = 0, and Q = 1 ·
expressions for S and R as follows : –
––––––––– Q = 0 and Q = 1
–
ns e
S = K · Q · CLK and R = J · Q · CLK Thus with J = 0, K = 1 and positive going clock, the JK
flip flop will reset.
Let us now understand the operation step by step.
io dg
–
If Q = 0 and Q = 1 before the application of clock
Case I : CLK = 0 or 1 i.e. level
pulse, then there will not be any change in their state.
If CLK = 0 or 1 i.e. level and no pulse this flip flop is
Case V : CLK = , J = 1, K = 0 (Set)
at le
disabled, because the output of differentiator is zero for
any level input.
–
– –
If the previous state of Q and Q is Q = 0 and Q = 1
then ,
ic w
Therefore Q and Q output do not change their state. S = 0 · 0 · 1 = 1 and R = 1 · 1 · 1 = 0
For the falling edge of the clock, the rectifier (D – R2) and R = 0 then,
will block the negative spike. Hence voltage at point A is –
Q = 1 and Q = 0…. i.e. the flip flop is set.
logic 0.
–
Pu K
Since J = 0, R = 1 and as K = 0, S = 1.
–
As S = R = 1 the outputs Q and Q will not change
their state even though a positive edge of the clock
pulse is being applied.
(C-6399)
J = 0, K = 0 : Qn + 1 = Qn and Q'n + 1 = Q'n. No change in output.

Case IV : CLK = rising edge, J = 0, K = 1 (Reset)
Recall the expressions for S and R,
S = (K · Q · CLK)'  and  R = (J · Q' · CLK)'

From the operation discussed above, we conclude that when J = K = 1 and a positive clock edge is applied, the Q and Q' outputs are inverted i.e. Qn + 1 = Q'n and Q'n + 1 = Qn.
This is called as the TOGGLING mode. This is a very important mode of operation.
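The four modes of the positive edge triggered JK flip flop, including toggling, can be modelled behaviourally as follows (illustrative Python sketch, evaluated once per rising clock edge).

def jk_ff_posedge(j, k, q):
    """Next state of a positive edge triggered JK flip flop:
    00 hold, 10 set, 01 reset, 11 toggle."""
    if j == 0 and k == 0:
        return q
    if j == 1 and k == 0:
        return 1
    if j == 0 and k == 1:
        return 0
    return 1 - q            # J = K = 1 : toggle

q = 0
for edge in range(4):        # J = K = 1 held for four rising edges
    q = jk_ff_posedge(1, 1, q)
    print("after edge", edge + 1, ": Q =", q)   # 1, 0, 1, 0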
.University Questions.
Q. 1 Discuss methods to avoid race around condition in
JK flip-flop. (Dec. 07, 4 Marks)
Q. 2 How is race around condition avoided ?
(Dec. 08, Dec. 09, 3 Marks)
ns e
For the race around to take place, it is necessary to have
io dg
the enable input high along with J = K = 1.
at le
known as Race condition.
Refer to the truth table of JK flip flop (Table 6.9.2) and Hence by the time the toggled outputs (Q and Q) return
write the K-map for Qn + 1 as shown in Fig. 6.9.2. back to the inputs of NAND gates 3 and 4, the positive
(C-8062) Table 6.9.2 : Truth table clock spike has died down to zero.
Pu K
condition.
Symbol :
The truth table of negative edge triggered JK FF is as (C-6839) Table 6.10.1 : Truth table of a Toggle FF
(positive edge triggered)
shown in Table 6.9.3.
ns e
io dg
Operation :
–
at le
When T = 0, J = K = 0. Hence the outputs Q and Q
won’t change.
But if T = 1, then J = K = 1 and the outputs will toggle
ic w
corresponding to every leading edge of clock signal.
6.10 Toggle Flip Flop (T Flip Flop) :
This has been illustrated in Ex. 6.10.1.
bl no
Relation between input and output frequency :
Referring to Fig. P. 6.10.1(b), the Q output completes one cycle for every two clock cycles, so fo = 1 / (2T).
But (1/T) = fCLK
Therefore the output frequency fo = fCLK / 2   …Ans.
The T flip flop divides the clock frequency by 2. Hence a T flip flop can be used as a frequency divider.
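The divide-by-two action can be seen from a short behavioural model (illustrative Python sketch; each loop iteration represents one active clock edge).

def t_ff(t, q):
    """Toggle flip flop: Q toggles on the clock edge when T = 1, else holds."""
    return 1 - q if t == 1 else q

q, waveform = 0, []
for _ in range(8):           # 8 clock edges with T = 1
    q = t_ff(1, q)
    waveform.append(q)
print(waveform)              # [1, 0, 1, 0, 1, 0, 1, 0] : one Q cycle per
                             # two clock cycles, i.e. fo = fCLK / 2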
6.10.2 Negative Edge Triggered T Flip Flop :
Symbol :
The symbol of the negative edge triggered toggle flip flop is shown in the figure.
Truth table :
(C-6840) Table 6.10.2 : Truth table

6.11 Master Slave (MS) JK Flip Flop :
SPPU : Dec. 06, Dec. 10, May 14.
University Questions.
Q. 2 What is the advantage of M-S flip-flop ? Explain working of MS J-K flip-flop in detail. (Dec. 10, 8 Marks)
Q. 3 Draw and explain the behavior of M-S JK flip flop with waveform. (May 14, 6 Marks)

Logic diagram :
Fig. 6.11.1 shows the master slave JK flip flop.
The clocked JK latch acts as the master and the clocked SR latch acts as the slave.
Master is positive level triggered. But due to the inverter in the clock path, the slave becomes negative level triggered.
means S = 0 and R = 1.
Clock = 0 : Slave active, master inactive
–
Outputs of the slave become Q = 0 and Q = 1. This is
the RESET operation.
Again if clock = 1 : Master active, slave inactive.
–
Even with the changed outputs Q = 0 and Q = 1 fed
ns e
–
back to master, its outputs will be Q1 = 0 and Q1 = 1.
io dg
That means S = 0 and R = 1.
Hence with clock = 0 and slave becoming active, the
–
Operation : outputs of slave will remain Q = 0 and Q = 1.
at le
We will discuss the operation of the master slave JK FF
with reference to its truth table.
Thus we get a stable output from the Master slave.
(C-6395(b))
ic w
We must always remember one important thing that in
Clock = 1 : Master active, slave inactive.
the positive half cycle of the clock, the master is active –
Outputs of master become Q1 = 1 and Q1 = 0
bl no
The outputs will not change if J = K = 0. This avoids the multiple toggling which leads to the race
around condition.
(C-6395)
Thus the master slave flip flop will avoid the race around
This condition has been already discussed in case I.
condition.
(C-6395(a))
The waveforms for the master slave flipflop are shown in
Clock = 1 : Master active, slave inactive. Fig. 6.11.3.
ns e
io dg
at le
(C-608) Fig. 6.11.3 : Waveforms of master slave JK flip flop
(b) Symbol
6.12 Preset and Clear Inputs :
ch
ns e
Thus making CLR = 0 will reset the flip flop. Note that
io dg
with CLR = 0 and PR = 1, the FF is reset irrespective of
the status of S, R or clock inputs.
Case IV : PR = 0, CLR = 0
Inputs :
Both these inputs are active low and have higher priority
The operation of SR flip flop with preset and clear inputs
than all the other inputs.
is summarized in Table 6.12.1.
Fig. 6.12.2
Preset and clear inputs can be of two types :
Each representation is suitable for a different (C-6571) Table 6.14.1 : Truth table of SR flip flop
application.
1. Characteristic equations.
ns e
6.13.1 Characteristic Equations : Condition 1 :
Sn and Rn = 0 Refer first row of Table 6.14.1.
io dg
We have already introduced the concept of
characteristic equations earlier in this chapter and we Condition 2 :
have written the characteristic equations for various flip Sn = 0 Rn = 1 Refer second row of Table 6.14.1.
flops.
at le
6.14 Excitation Table of Flip-Flop :
From the two conditions mentioned above we conclude
that Sn input should be equal to 0 and Rn input can be 0
The truth tables are also known as the characteristic and R inputs) for all the possible situations that may
But while designing the sequential circuits, sometimes The table containing all these output situations and the
Pu K
the present and next state of a circuit are given and we corresponding input conditions is called as the
are expected to find the corresponding input condition. “excitation table” of a flip flop.
ch
We need to use the excitation tables of flipflops to do The excitation table of SR flip flop is shown in
this. These tables are different from the characteristic Table 6.14.2.
tables.
For example the outputs of an S-R FF before clock pulse Description of excitation table of SR FF :
–
are Qn = 0 and Qn = 1 and it is expected that these We have already discussed case I.
outputs should remain unchanged after application of Case II : Q should change from 0 to 1
clock. Then what must be the values of inputs Sn and Rn
This is nothing but the set condition.
to achieve this ?
Hence Sn = 1 and Rn = 0 should be the inputs.
Refer to the truth table of SR FF to answer this question.
Case III : Q should change from 1 to 0
The answer is, for the following two conditions the
– This is nothing but the reset condition.
outputs remain unchanged at Q = 0 and Q = 1.
Hence Sn should be 0 and Rn should be 1.
Condition 2 : Sn = 1 and Rn = 0.
ns e
Hence the inputs corresponding to this case is Sn =
and Rn = 0.
io dg
Similarly we can write the excitation tables for the other 6.15 Conversion of Flip Flops :
flip flops.
Concept :
6.14.2 Excitation Table of D Flip Flop : The conversion from one type of flip flop to the other
at le
Excitation table of a D flip flop is given by Table 6.14.3.
Then we draw the K map for each output and obtain the Logic diagram :
simplified expressions. The conversion logic is then The logic diagram for SR FF to D FF is shown in
implemented using gates. Fig. 6.15.4.
.University Questions.
Q. 1 Convert the basic SR-flip-flop (SR-FF) into D FF.
(May 12, 2 Marks)
ns e
Refer Fig. 6.15.2. Here the given FF is SR FF and the
io dg
required FF is D FF.
The truth table for the conversion logic is shown in (C-614) Fig. 6.15.4 : SR flip flop to D flip flop
Table 6.15.1. The inputs are D and Q whereas outputs
are S and R. 6.15.2 Conversion of JK FF to T FF :
at le
The truth table is prepared by combining the excitation
tables of D F/F and SR FF. University Questions.
SPPU : Dec. 09, May 17.
ic w
Q. 1 Convert J-K flip-flop into T-FF. Show the truth
table. (Dec. 09, 4 Marks)
bl no
Now write the K maps for the S and R outputs as shown Step 2 : K maps and simplification :
in Figs. 6.15.3(a) and (b). The K maps for outputs J and K are shown in Fig. 6.15.5.
(a) K map for S (b) K map for R (a) K map for J output (b) K map for K output
(C-613) Fig. 6.15.3 (C-615) Fig. 6.15.5
Step 3 : Draw the logic diagram : Step 3 : Draw the logic diagram :
The logic diagram is shown in Fig. 6.15.6. The logic diagram is shown in Fig. 6.15.8.
ns e
io dg
(C-616) Fig. 6.15.6 : Logic diagram for conversion (C-618) Fig. 6.15.8 : Conversion from SR flip flop to T flip flop
of JK FF to T FF
6.15.4 SR Flip Flop to JK Flip Flop :
6.15.3
at le SR Flip Flop to T Flip Flop :
.University Questions.
SPPU : May 12, Dec. 16, Dec. 18.
University Questions.
SPPU : May 12, Dec. 14, May 15.
ic w
Q. 1 Convert the basic SR-flip-flop (SR-FF) into T-FF Q. 1 Convert the basic SR-flip-flop (SR-FF) into JK-FF.
(May 12, 2 Marks)
bl no
flip-flop. Show the design. (Dec, 16, 6 Marks) (Dec. 14, 6 Marks)
Q. 3 Design and implement T flip-flop using SR flip-flop.
Pu K
ns e
Step 3 : Logic diagram :
io dg
The logic diagram of SR to JK flip flop is given in
Fig. 6.15.10.
at le
ic w
(b) Logic diagram
bl no
University Questions
Q. 1 What is race around condition ? How it can be
Te
ns e
(C-622) Fig. 6.15.12(b) : Logic diagram of T to D flip flop
from JK FF to D FF
conversion
io dg
6.15.8 JK Flip Flop to SR Flip Flop
6.15.7 JK Flip Flop to D Flip Flop Conversion :
Conversion : SPPU : May 10, Dec. 13, Dec. 19
SPPU : Dec. 09, May 14.
University Questions.
University Questions
at le
Q. 1 Convert J-K flip-flop into D-FF. Show the truth
table. (Dec. 09, 4 Marks)
Q. 1
Q. 2
Design SR flip-flop using JK flip-flop.
(May 10, 4 Marks)
Explain the difference between combinational and
ic w
Q. 2 Explain the difference between asynchronous and sequential circuit. Design S-R flip-flop using
synchronous counter and convert J-K flip flop into J-K flip-flop. (Dec. 13, 6 Marks)
D-FF. Show the design. (May 14, 6 Marks)
bl no
(C-6390) Table 6.15.7 : Truth table for JK to Step 1 : Write the truth table for JK to SR :
Pu K
(C-8045)
Te
ns e
(C-626) Fig. 6.15.15(c) : JK to SR FF conversion
io dg
6.15.9 D FF to SR FF Conversion : Step 2 : K maps and simplification for T output :
at le
(C-7832) Table 6.15.9 : Truth table for D to S-R conversion
ic w
(C-629) Fig. 6.15.18 : K-map and simplification
bl no
–– –– ––
T = S R Qn + S R Qn
Inputs Outputs
Operating Mode – – –
SD CD D Q Q
Set L H X H L
Reset (Clear) H L X L H
*Undetermined L L X H H
ns e
Load “0” (Reset) H H l L H
(C-1342(a))Fig. 6.15.20
io dg
Step 3 : Logic diagram : H, h = HIGH Voltage Level
2. As a memory element.
Te
Inputs Outputs
Operating Mode – – –
SD CD J K Q Q
Set L H X X H L
Reset (Clear) H L X X L H
*Undetermined L L X X H H
Toggle H H H h – q
q
Load “0” (Reset) H H L h L H
ns e
Load “1” (Set) H H H l H L
Hold H H L l q –
io dg
q
(C-3624) Fig. 6.17.2
74 8.0
– –
Pin configuration : Both outputs will be HIGH while both SD and CD are
–
ch
Fig. 6.17.3 shows the pin configuration of IC 7474. LOW, but the 7 output states are unpredictable if SD
–
and CD go HIGH simultaneously.
H, h = HIGH Voltage Level
L, l = LOW Voltage Level
Te
X = immaterial
Logic diagram and pin configuration of IC 7476 is as The basic building block of a sequential circuit is a flip
flop. The outputs QA, QB … etc. of such flip flops will be
shown in Fig. 6.17.5.
used as state inputs to a sequential circuit. They are
also called as state variable.
In addition to this “x” represents an external input and Y
represents the output of the sequential circuit.
Y will be dependent on the state variables (QA, QB, ...)
and the external input x.
The general format of a state table is shown in
ns e
Table 6.18.1.
io dg
(C-7833) Table 6.18.1 : General format of state table
at le
ic w
Definition :
bl no
of inputs and present state. State diagrams depict the permitted states and
The analysis of the given clocked sequential circuit transitions as well as the events that effect these
includes writing the state table and drawing the state transition.
diagram for the given circuit. The information available in the state table is
In this section, we will introduce the concepts such as represented graphically using the state diagram.
state diagram, state table, state equation and input
The state diagram is drawn by using the state table as a
equations and the step by step analysis procedure.
reference. Such a state diagram is shown in
6.18.1 State Table : Fig. 6.18.1.
The circle represents the present state. The arrows Ex. 6.18.1 : For the clocked D FF write the state table,
draw the state diagram and write the state
between the circles define the state transition say from
equation.
00 to 01 or 01 to 11. Soln. :
If there is directed line connecting the same circle then Step 1 : Write the truth table :
it means that the next state is same as the present state.
Table P. 6.18.1(a) represents the truth table for a clocked
The lines joining the circles are labeled with a pair of D flip flop.
binary numbers with a “ / ” in between. For example the (C-6212) Table P. 6.18.1(a) : Truth table of a clocked D FF
line joining 00 and 01 is labeled as 1/0.
ns e
Note that 00 to 01 transition takes place when x = 1 and
io dg
Y = 0 (see row-1 of the state table). Hence 1 in 1/0
corresponds to x and 0 corresponds to y.
Don’t care condition in the state diagram :
at le
Sometimes the same next state is reached for more than
one present states.
ic w
This is called as don’t care condition in the state
Step 2 : Write the state table :
diagram, as shown in Fig. 6.18.2.
Table P. 6.18.1(b) represents the state table for the
bl no
clocked D FF.
Table P. 6.18.1(c).
State equation is also called as application equation. (C-7835) Table P. 6.18.1(c) : Excitation table for D FF
We can derive the state equation from the state table
using the K maps. The state equation is the Boolean
function with time included into it.
Step 5 : Write the state equation : Step 4 : Write the excitation table :
The K-map for output Q is shown in Fig. P. 6.18.1(a). The excitation table of JK FF is as shown in
n+1
Table P. 6.18.2(c).
ns e
(C-656) Fig. P. 6.18.1(a) : K-map and state equation
io dg
Step 5 : Write the state equation :
draw the state diagram and write the state
The K-map for output Q is shown in Fig. P. 6.18.2(b).
equation. n +1
Soln. :
Step 1 : Write the truth table :
at le
Table P. 6.18.2(a) represents the truth table for a clocked
JK flip flop.
ic w
(C-6213) Table P. 6.18.2(a) : Truth table of JK FF
(C-658) Fig. P. 6.18.2(b) : K-map state equation
bl no
Ex. 6.18.3 : For a toggle FF write the state table, draw the
state diagram and write the state equation.
Soln. :
Pu K
(C-657) Fig. P. 6.18.2(a) : State diagram of a JK flip flop The state diagram is shown in Fig. P. 6.18.3(a).
ns e
be clear by solving the following example.
io dg
Ex. 6.19.1 : For the state diagram given in Fig. P. 6.19.1(a)
draw the clocked sequential circuit using
T flip-flops.
Step 5 : Write the state equation :
The
at le
K-map for Qn +1 output is
Fig. P. 6.18.3(b). The state equation can be obtained
shown in
ic w
from this K-map.
bl no
Soln. :
Pu K
logic diagram.
behaviour of the circuit that is to be designed. As seen from the state table, there are no equivalent
Step 2 : Draw the state table. states. So there won’t be any reduction in the state
diagram.
Step 3 : The number of states can be reduced by state
reduction methods. The circuit goes through four states. Hence we need to
use 2 flip-flops.
Step 4 : Assign binary values to each state for the states in
steps 2 and 3 using state assignment technique. Step 3 : Write the circuit excitation table :
Step 5 : Determine the number of flip-flops required and The circuit excitation table is shown in Table P. 6.19.1(b).
assign a letter symbol to each one. The type of FF used is T type FF.
ns e
The shaded portion of the circuit excitation table
corresponds to the excitation table of a T FF.
io dg
Step 4 : K maps and simplifications :
Fig. P. 6.19.1(b) shows the K-maps and corresponding
simplified expressions for TA TB i.e. inputs of the two T
at le
flip-flops and the Y output.
ic w
bl no
(C-6100)
(C-6103) Fig. 6.20.1(c) : State diagram of a traffic light
ns e
controller
This is as shown in Fig. 6.20.1(c).
io dg
State B :
After some time in state B, the controller will move to the next state i.e. state C, which is defined as follows :
State C :
(C-6102)
State D :
(C-8310)

State table :
We can derive the state table from the state diagram. The four states require 2 flip-flops (D-type). The state table of this controller is as shown in Table 6.20.1(a).
K-Maps and simplification :
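Before carrying out the K-map simplification, the intended four-state cycle can be sanity checked with a short behavioural model. This is an illustrative Python sketch only; the state assignment A = 00, B = 01, C = 10, D = 11 and the D-input equations used below are one possible choice, and the lamp outputs per state would be taken from Table 6.20.1.

def next_state(q1, q0):
    """One possible pair of D-input equations giving the cycle
    00 -> 01 -> 10 -> 11 -> 00 : D1 = Q1 xor Q0, D0 = not Q0."""
    return (q1 ^ q0, 1 - q0)

state = (0, 0)                      # start in state A
for _ in range(8):
    state = next_state(*state)
    print(state)                    # cycles B, C, D, A, B, C, D, A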
Review Questions
Te
Q. 2 What is a flip-flop ?
(C-6104) Fig. 6.20.2 : K-maps Q. 5 Explain with truth table the working of clocked RS
flip-flop.
Realization :
Q. 6 State the disadvantages of RS flip-flop. How can
The traffic light controller is as shown in Fig. 6.20.3.
they be avoided ?
Q. 8 Give reason why D flip-flop is called as data latch ? 4. D type and T type.
Q. 9 Draw the circuit using logic gates of a T-type Q. 22 Design a conversion logic to convert a JK flip-flop to
flip-flop. Draw its symbol and write its truth table. a D flip-flop.
Q. 10 Explain S-R flip flop using NOR gates. Q. 23 Write a short note on race around condition in JK
R-S flip-flop ? Write its truth table. Q. 24 Draw neat circuit diagram of clocked JK flip-flop
Q. 12 What are the various types of flip-flops ? using NAND gates. Give its truth table and explain
ns e
race around condition.
Q. 13 Draw the circuit of SR flip-flop using NAND gate.
io dg
Q. 14 Draw the schematic diagram of JK flip-flop and Q. 25 What is race around condition ? How does it gets
eliminated in master slave JK FF ? Explain.
describe its working. Write down its truth table.
Q. 26 Explain how JK FF is converted into :
Q. 15 What is race around condition ?
Q. 16 at le
Draw the circuit of J-K flip-flop using NAND gate.
Q. 27
1. D FF
Carry out
2.
the
T FF
2. D to S-R
Q. 18 Explain the working of the master slave JK flip-flop.
3. J-K to S-R
Q. 21 Explain the following flip-flops : Q. 29 What is the basic difference between pulse-triggered
Chapter 7

Counters
Syllabus
Application of flip-flops : Counters - Asynchronous, Synchronous and modulo n counters, Study of
ch
7490 modulus n counter ICs & their applications to implement mod counters.
Te
Chapter Contents
7.1 Introduction 7.8 Synchronous Counters
7.2 Asynchronous / Ripple Up Counters 7.9 Modulo – N Synchronous Counters
7.3 Asynchronous Down Counters 7.10 UP / DOWN Synchronous Counter
7.4 UP / DOWN Counters 7.11 Lock Out Condition
7.5 Modulus of the Counter (MOD-N Counter) 7.12 Bush Diagram
7.6 Ripple Counter IC 7490 (Decade Counter) 7.13 Applications of Counters
7.7 Problems Faced by Ripple Counters
ns e
Counters count the number of clock pulses. Therefore and down counter.
with some modifications we can use them for measuring
io dg
7.2 Asynchronous / Ripple Up
frequency or time period.
Counters :
7.1.1 Types of Counters : SPPU : May 08.
Logic diagram :
at le
University Questions.
Q. 1 What do you mean by binary ripple counter ?
Fig. 7.2.1 shows the logic diagram of a 2-bit ripple up
counter.
ic w
(May 08, 2 Marks)
2. Synchronous counters.
1. Asynchronous or ripple counters :
In synchronous counters all the flip-flops receive the flip-flops. Thus a 4 bit counter will use four flip-flops.
external clock pulse is applied to all the flip-flops The toggle (T) flip-flops are being used. But we can use
Te
Depending on the way in which the counter outputs the clock input of the next flip-flop i.e. FF-B.
change, the synchronous or asynchronous counters are Operation of the counter :
classified as follows :
Initially let both the flip-flops be in reset condition.
QB QA = 00
There is no change in the status of QB because FF-B is a QB Q A = 11 ….. After the third CLK pulse
negative edge triggered FF. th
At the 4 negative clock edge :
Therefore after the first clock pulse the counter outputs th
On the 4 falling clock edge, FF-A toggles and
are
QA changes from 1 to 0.
QB QA = 01 ….. After the first CLK pulse
This negative going change in QA acts as a negative
At the second falling edge of clock : clock pulse for FF-B. Hence it toggles to change QB from
On the arrival of second falling clock edge, FF-A toggles 1 to 0.
ns e
again and QA changes from 1 to 0. QB QA = 00 ….. After the fourth CLK pulse
nd
QA = 0 Corresponding to 2 negative clock edge.
io dg
So the counter has reached the original state. The
This change in QA (from 1 to 0) acts as a negative clock counter operation will now repeat.
edge for FF-B. So it will also toggle, and QB will change
Table 7.2.1 summarizes the operation of the counter
from 0 to 1.
and Fig. 7.2.3 shows the timing waveforms.
at le
QB = 1
On arrival of the third falling edge, FF-A toggles again and QA becomes 1 from 0.
Since this is a positive going change, FF-B does not respond to it and remains inactive. So QB does not change and continues to be equal to 1.

Why is it called a counter ?
See Fig. 7.2.3. The decimal count corresponds to the number of clock pulses which the counter has received. Thus this circuit counts the clock pulses. Hence it is called as a counter.
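The counting sequence of Table 7.2.1 can be reproduced with a small behavioural model (illustrative Python sketch; each loop iteration represents one falling clock edge).

def ripple_up_counter_2bit(num_pulses):
    """2-bit ripple up counter made of two negative edge triggered T FFs.
    FF-A toggles on every falling clock edge; FF-B toggles whenever QA
    falls from 1 to 0."""
    qa = qb = 0
    states = []
    for _ in range(num_pulses):
        old_qa = qa
        qa = 1 - qa                       # FF-A toggles on the clock edge
        if old_qa == 1 and qa == 0:       # falling edge of QA clocks FF-B
            qb = 1 - qb
        states.append((qb, qa))
    return states

print(ripple_up_counter_2bit(5))   # [(0,1), (1,0), (1,1), (0,0), (0,1)]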
As seen from Table 7.2.1, this counter has four distinct states of output, namely 00, 01, 10 and 11. In general the number of states = 2^n, where n is equal to the number of flip-flops.

Maximum count :
As seen from Table 7.2.1, the maximum count is 3 (decimal) i.e. 1 1 in binary.
Maximum count = 3 = 2^2 – 1
In general the maximum count = (2^n – 1), where n = Number of flip-flops.

We can apply all the basic concepts which were introduced for the 2-bit ripple up counter to the 3-bit ripple up counter.
Fig. 7.2.5 shows the logic diagram of a 3-bit ripple up counter. Since it is a 3-bit counter, we need to use 3 flip-flops.
at le
Logic diagram :
Truth table :
Pu K
using JK flip-flops
The 3 bit ripple up counter can have 8 distinct states Since it is a 4 bit ripple up counter, we need to use four
i.e. QC QB QA can take up values from 000, 001, 010, flip flops.
........110, 111. Initially all the flipflops have zero output.
ns e
For example QA to CLK of FF-B, QB to CLK of FF-C and so
The timing diagram of a 3-bit ripple up counter are as on.
io dg
shown in Fig. 7.2.6.
Truth table :
University Questions.
Q. 1 Draw and explain 4-bit binary up counting with this
concept. Also draw the necessary timing diagram.
Te
Number of states :
The number of state through which the output of a 4 bit
up counter passes is 16 (from 0 to 15).
Maximum count :
Timing diagram :
ns e
io dg
(C-778) Fig. 7.2.7(b) : Waveforms of a 4 bit asynchronous up counter
7.2.4 at le
State Diagram of a Counter : But the counters which can count in the downward
direction i.e. from the maximum count to zero are called
ic w
The state diagram of a counter represents the states of down counters.
a counter graphically. The countdown sequence for a 3-bit asynchronous
bl no
For example for a 2-bit up counter the state diagram is down counter is as follows :
shown in Fig. 7.2.8(a) and for a 2-bit down counter the
(C-6970) Table 7.3.1 : Truth table of 3 bit down counter
state diagram is shown in Fig. 7.2.8(b).
ns e
Logic diagram : counters separately.
io dg
But in practice both these modes are generally
A 3-bit asynchronous down counter is shown in
combined together and an UP/DOWN counter is
Fig. 7.3.2. The clock input is applied directly to the clock
–– –– formed.
input of FF-A. But QA is connected to clock of FF-B, QB
A mode control (M) input is also provided to select
at le
to clock of FF-C and so on.
either up count or down count mode of operation.
The negative going change in QA acts as a clock to FF-B. The LSB flip-flop receives clock directly. But the clock to
Hence FF-B will change its state. So QB becomes 1 and –
every other FF is obtained from Q or Q output of the
–– previous FF.
QB changes from 1 to 0.
–– UP counting mode (M = 0) :
This negative going change in QB acts as a clock to FF-C.
Hence FF-C will change its state. So QC becomes 1 and The CLK signal is applied directly to the clock input of
–– the LSB flip-flop.
QC becomes 0.
For the remaining flip-flops, the Q output of the
Thus after the first clock pulse the output of counter are,
st
preceding FF is connected to the clock of the next stage
QC QB QA = 1 1 1 ….. After the 1 CLK pulse
if up counting is to be achieved. For this mode, the
Corresponding to the second falling clock edge, FF-A mode select input M is at logic 0 (M = 0).
––
toggles. QA becomes 0 and QA becomes 1. This positive
DOWN counting mode (M = 1) :
––
going change in QA does not alter the state of FF-B. So
The clock signal is applied directly to the clock input of
–– –
QB remains 1 and QB remains 0. So there is no change in the LSB flip-flop. For the remaining flip-flop, the Q
the state of FF-C. Hence after the second clock pulse the output of the preceding FF is connected to the clock of
counter outputs are, the next FF.
This will operate the counter in the down counting Table 7.5.1 shows the relation between 2, 3 and 4 bit
mode. For down counting mode the mode select input counters and their modulus.
M is kept at logic 1 (M = 1). (C-8049)Table 7.5.1
ns e
7.5.1 Design of Asynchronous MOD
Q. 2 What is the advantage of MOD counter ?
Counters :
io dg
(Dec. 10, 2 Marks)
Definition :
Modulus (MOD) of a counter represents the number of states through which the counter progresses during its operation. It is denoted by N.
Thus MOD-N counter means the counter progresses through N states.
In general, m number of flip-flops are required to construct a MOD-N counter, where N ≤ 2^m.

Ex. 7.5.1 : Design MOD-5 asynchronous counter, and also draw the waveforms.
Soln. :
Step 1 : Draw the state diagram :
The state diagram of the MOD-5 ripple counter is as shown in Fig. P. 7.5.1(a).
Table P. 7.5.1 shows the truth table for the reset logic.
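The reset-logic idea can be prototyped behaviourally before drawing the circuit (illustrative Python sketch; the clearing of the flip flops is idealised here, ignoring propagation delays).

def mod_n_ripple_counter(n_mod, num_pulses, n_bits=3):
    """Asynchronous MOD-N counter sketch: a binary ripple counter whose
    flip flops are cleared as soon as the count reaches N (here N = 5),
    so the visible sequence is 0, 1, ..., N-1, 0, ..."""
    count, seq = 0, []
    for _ in range(num_pulses):
        count = (count + 1) % (2 ** n_bits)
        if count == n_mod:      # reset logic detects the first invalid state
            count = 0
        seq.append(count)
    return seq

print(mod_n_ripple_counter(5, 12))   # 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2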
ch
combinational logic called reset logic. The K map is as shown in Fig. P. 7.5.1(b).
Soln. :
The states 0 through 4 are valid states and the output Y
Step 1 : Write the truth table :
of reset logic (Y) is inactive (1) for them.
(C-6268) Table P. 7.5.2
The states 5, 6 and 7 are invalid states. If counter enters
ns e
The logic diagram of a MOD-5 ripple counter is shown
in Fig. P. 7.5.1(c).
io dg
at le
ic w
Step 2 : Draw the K map :
–– –– –– ––
Y = QD + QC QA + QB ( )
(C-797) Fig. P. 7.5.1(d) : Timing diagram of Step 3 : Draw the circuit diagram :
MOD-5 ripple counter
7.5.2 Frequency Division Taking Place in Asynchronous Counters :
SPPU : May 05, May 08, Dec. 11

University Questions.
Q. 1 Assume 16 MHz clock source in a system. How will you divide this frequency by a factor 8 ? Explain your logic with suitable circuit diagram. (May 05, Dec. 11, 8 Marks)
Q. 2 What do you mean by binary ripple counter ? Draw and explain 4-bit binary up counting with this concept. Also draw the necessary timing diagram. Is there any frequency division concept in it ? Comment on frequency generated at the output of each flip-flop. (May 08, 10 Marks)

In the chapter on flip-flops, we have seen that a flip-flop in toggle mode divides the clock frequency by 2. That means the frequency of the Q or Q̄ output waveform of a toggle flip-flop is exactly half of the clock frequency.
The period of one cycle of QB is 4T. Hence the frequency of QB is given by,
fB = 1 / (4T) = fCLK / 4 ….. since 1/T = fCLK
Similarly the frequency of the QC output is given by,
fC = 1 / (8T) = fCLK / 8

Note : In any counter, the signal at the output of the last FF (i.e. MSB) will have a frequency equal to the input clock frequency divided by the MOD number of the counter.

7.5.3 Disadvantages of Ripple Counters :
Every flip-flop has its own propagation delay. In a ripple counter the output of the previous FF is used as the clock input of the next FF.
(C-1374) Fig. 7.6.1 : The basic internal structure of IC 7490
IC 7490 is a MOD-10 or decade counter. It is a 14-pin IC and its pin configuration is as shown in Fig. 7.6.2.
Description :
Input A, Input B : Clock inputs of the internal MOD-2 and MOD-5 ripple counters, which are negative edge triggered.
R0(1), R0(2) : Gated zero reset inputs.
R9(1), R9(2) : Gated set-to-nine inputs.
VCC : + 5 V DC supply.
QD, QC, QB : Outputs of the internal MOD-5 counter, with QD as the MSB.
(C-6217) Table 7.6.2 : Reset/count truth table
1. If both the reset inputs R0(1) and R0(2) are at logic 1, the counter output is reset to QD QC QB QA = 0 0 0 0.
2. If both the preset inputs R9(1) and R9(2) are at logic 1 then the counter output is set to decimal 9.

7.6.1 The Internal Diagram of IC 7490 :
A simplified internal diagram of IC 7490 is shown in Fig. 7.6.3.
Soln. :
The connection diagram for IC 7490 as a decade counter is shown in Fig. P. 7.6.1.
1. The clock is applied to input A and the QA output is connected to the input B, which is the clock input of the internal MOD-5 counter.
2. Hence QA will toggle on every falling edge of the clock, and the internal MOD-5 counter will increment from 000 to 100 on the low-going edges of QA.
3. Table P. 7.6.1 summarizes the operation of the 7490 as a decade counter. Due to the cascading of the MOD-2 and MOD-5 counters, the overall configuration becomes a MOD-10 i.e. decade counter configuration.
4. The reset inputs R0(1), R0(2) and the preset inputs R9(1), R9(2) are kept inactive.

Some of the other applications of IC 7490 are as follows :
2. Divide by two (MOD-2) and divide by 5 (MOD-5) counters.

7.6.3 Symmetrical Bi-quinary Divide by 10 Counter :
In this application the clock input is applied to the B input and the QD output is connected to the A input as shown in Fig. 7.6.4.
The output is obtained at the QA output. It is a perfect square wave with a 50% duty cycle at a frequency equal to fCLK / 10.
The operation of this counter has been summarized in Table 7.6.3.
(C-6219) Table 7.6.3 : Summary of operation of symmetrical bi-quinary divide by 10 counter
The waveform at the QA output is shown in Fig. 7.6.5. It shows that the QA output is a perfect square wave.
(C-1379) Fig. 7.6.5 : Waveforms showing a perfect square wave at QA

The total number of states is 6. Note that QC and QB are connected to R0(1) and R0(2) respectively.
So when the counter output reaches (6)10 i.e. QC QB QA = 110, the counter is immediately reset to 000.
(C-1380) Fig. P. 7.6.2 : MOD 6 counter using IC 7490

Soln. :
The MOD-7 counter counts through the 7 states as shown in Fig. P. 7.6.3(a).
(C-6220) Table P. 7.6.3 : Truth table of the reset logic
(C-1382) Fig. P. 7.6.3(b) : K-map for the output of reset logic
Note that for all the states beyond 1001, we have entered a logic 1 in the K-map, treating all those states alike.

Ex. 7.6.4 : Draw basic internal architecture of IC 7490. Design a divide by 20 counter using same. (Dec. 11, 8 Marks)
Soln. :
Refer section 7.6.1 for the internal diagram of IC 7490.
Soln. : Solve it yourself.

Soln. :
Refer Ex. 7.6.3 for the MOD-7 counter.
MOD 98 counter :
The 3, 4 or at the most 5 variable K-maps are practically possible to handle. But here there will be 8 FFs, and so an 8-variable K-map would be needed, which is impractical. Hence the reset logic output is connected to the reset inputs of both the ICs.
(C-3602) Fig. P. 7.6.5(a)

Ex. 7.6.6 : Design the following using IC 7490 :
Soln. :
Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
Refer the previous example for the procedure to design the reset logic. Connect the Q outputs which are equal to 1 in the reset count to an AND gate as shown in Fig. P. 7.6.6(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-3604) Fig. P. 7.6.6(b)

2. MOD 45 counter :
Connect the Q outputs which are equal to 1 in the count of 45 to an AND gate as shown in Fig. P. 7.6.6(c) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-6007)
Step 3 : Draw the logic diagram :
The logic diagram of the MOD-45 counter is shown in Fig. P. 7.6.6(d).
The logic diagram of the MOD-97 counter is shown in the corresponding figure.

Ex. 7.6.7 : Design and draw logic diagram of Mod-82 counter using IC 7490.
Soln. :
Refer the previous example for the procedure to design the reset logic. So in order to reset the counter at 82, connect the Q outputs which are equal to 1 in the count of 82 to an AND gate as shown in Fig. P. 7.6.7(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.

MOD-11 counter using IC 7490 :
Step 1 : To design a divide by 11 (MOD-11) counter we have to use two IC 7490 counter ICs.
Step 2 : Design of reset logic. Both ICs should reset as soon as the count is equal to 11 decimal.
Timing diagram :
(C-803) Fig. P. 7.6.8(d)
Ex. 7.6.9 : Design and draw logic diagram of Mod 72 counter using IC 7490. (Dec. 16, 6 Marks)
Soln. :
Step 1 : Number of ICs required :
Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
Refer the previous example for the procedure to design the reset logic.

Ex. 7.6.10 : What is Mod counter ? Explain MOD-26 counter using IC 7490. Draw design for the same. (Dec. 17, 6 Marks)
Soln. :
Refer section 7.5 for the definition of a Mod counter.
Step 1 : Number of ICs required :
Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
Refer the previous example for the procedure to design the reset logic. So in order to reset the counter at 26, connect the Q outputs which are equal to 1 in the count of 26 to an AND gate and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.

Ex. 7.6.11 : Design and draw MOD 56 counter using IC 7490 and explain its operation. (May 18, 6 Marks)
Soln. :
Step 1 : Number of ICs required :
Step 2 : Design of reset logic :
Refer the previous example for the procedure to design the reset logic. So in order to reset the counter at 56, connect the Q outputs which are equal to 1 in the count of 56 to an AND gate as shown in Fig. P 7.6.11(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.

1. MOD 93 counter :
Refer the previous example for the procedure to design the reset logic. So in order to reset the counter at 93, connect the Q outputs which are equal to 1 in the count of 93 to an AND gate as shown in Fig. 2(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
The logic diagram of the MOD-93 counter is shown in Fig. P. 7.6.12(b).
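The reset-logic procedure used in all of these examples can be expressed as a small helper. The following Python sketch (an illustrative routine, not from the book) takes the desired modulus N, writes it as two BCD digits for the cascaded 7490s, and lists which Q outputs are at logic 1 in that count and therefore have to be ANDed and fed to the R0(1), R0(2) inputs of both ICs. The function and signal names are assumptions made for illustration.

# Minimal sketch: which outputs to AND for a MOD-N counter built
# from two cascaded IC 7490 decade counters (N < 100).

def reset_inputs(n):
    tens, units = divmod(n, 10)          # BCD digits of the reset count
    names = ["QA", "QB", "QC", "QD"]     # QA = LSB of each 7490
    taps = []
    for label, digit in (("units", units), ("tens", tens)):
        for bit, name in enumerate(names):
            if (digit >> bit) & 1:       # this output is 1 in the count N
                taps.append(f"{name} ({label} IC)")
    return taps

# Example: MOD-93 counter -> reset as soon as the count reaches 93
print(reset_inputs(93))
# ['QA (units IC)', 'QB (units IC)', 'QA (tens IC)', 'QD (tens IC)']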
7.8 Synchronous Counters :

Definition :
A 2-bit or MOD-4 synchronous counter is shown in Fig. 7.8.1.
The JA and KA inputs of FF-A are tied to logic 1. So FF-A will work as a toggle flip-flop. The JB and KB inputs are connected to QA.
As soon as the first negative clock edge is applied, FF-A will toggle and QA will change from 0 to 1.
But at the instant of application of the negative clock edge, QA = 0, so JB = KB = 0. Therefore FF-B will not change its state and QB will remain 0.
QB QA = 0 1 ….. After the first clock pulse
At the 2nd negative clock edge :
At the instant when we apply the second negative clock edge, FF-A toggles again and QA changes from 1 to 0. But at this instant QA was 1. So JB = KB = 1 and FF-B will also toggle. Hence QB changes from 0 to 1.
QB QA = 1 0 ….. After the second clock pulse
At the third clock edge, FF-A will toggle from 0 to 1 but there is no change of state for FF-B.
QB QA = 1 1 ….. After the third clock pulse
QB QA = 0 0 ….. After the fourth clock pulse
This is the original state. The operation of the counter will repeat after this. The operation is summarised in the accompanying table.

7.8.2 3-Bit Synchronous Binary Up Counter :
Logic diagram :
We can extend the principle of operation of a 2-bit synchronous counter to a 3-bit counter shown in Fig. 7.8.3.
(C-817)
QA and QB are ANDed and the output of the AND gate is applied to JC and KC.
Hence when QA and QB both are simultaneously high, then JC = KC = 1 and FF-C will toggle. Otherwise there is no change in the state of FF-C.
At the 4th negative edge of the clock :
On application of this clock pulse, FF-C will toggle and QC changes from 0 to 1. Since QA was equal to 1 earlier, FF-B will also toggle to make QB = 0.
QC QB QA = 1 0 0 …After the 4th clock pulse
At the 8th clock pulse, all the flip-flops toggle and change their outputs to 0. Hence QC QB QA = 0 0 0 after the 8th pulse and the operation repeats.
(C-818) Fig. 7.8.4 : Timing diagram for a 3-bit synchronous counter

7.8.3 Design of Synchronous Counters :
University Questions.
Q. 1 Design 3-bit synchronous up-counter using MS JK-flip-flop. (May 12, 6 Marks)
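The behaviour of the 3-bit synchronous up counter described above can be reproduced with a few lines of Python. The sketch below (illustrative, not part of the original text) applies TA = 1, TB = QA and TC = QA·QB on every clock edge, with all flip-flops clocked simultaneously, and prints the resulting count sequence.

# Minimal sketch: 3-bit synchronous up counter built from T flip-flops.
# TA = 1, TB = QA, TC = QA.QB; all flip-flops are clocked together.

qa = qb = qc = 0
for pulse in range(8):
    ta, tb, tc = 1, qa, qa & qb              # T inputs from the present state
    qa, qb, qc = qa ^ ta, qb ^ tb, qc ^ tc   # all FFs toggle simultaneously
    print(f"after pulse {pulse + 1}: QC QB QA = {qc}{qb}{qa}")

The printed sequence runs 001, 010, 011, … , 111, 000, confirming the operation summarised in the text.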
Steps to be followed :
(C-7842) Table 7.8.3(a) : Excitation table of a T FF
Table 7.8.3(b) and Fig. 7.8.5 show the corresponding state diagram.
(C-6226) Table 7.8.3(b)
Step 3 : State diagram and circuit excitation table :
Table 7.8.3(c) shows the circuit excitation table. For example, consider the shaded columns of Table 7.8.3(c), i.e. QC, QC+1 and TC.
(C-7838) Table 7.8.3(c) : Circuit excitation table
Step 4 : K-maps and simplified expressions for all FF inputs :
(C-1509) Fig. 7.8.9 : K-maps and simplified equations for the T inputs
Step 5 : Draw the logic diagram :
By using the simplified equations for TA, TB and TC we can draw the logic diagram.

University Questions.
Q. 1 Draw a 4-bit synchronous counter. Also explain timing diagram for the same. (Dec. 10, 10 Marks)
Logic diagram :
All the concepts of a synchronous counter are extended to the four bit synchronous counter shown in Fig. 7.8.11.
(C-1391) Fig. 7.8.11 : A four bit synchronous counter
Timing diagram :
The timing diagram of a four bit synchronous counter is as shown in Fig. 7.8.12.
Note that the principle of frequency division is applicable to the synchronous counters as well.
University Questions.
Q. 1 What is MOD counter ? (May 06, Dec. 06, 5 Marks)
We have already discussed the MOD-N asynchronous counter. For the design of a synchronous decade counter follow the steps given below :
Step 1 : Write the excitation table for the T FF and the circuit excitation table.
Step 2 : K-maps and simplifications :
K-maps for TD, TC, TB, TA and their simplified expressions are given in Figs. 7.9.1(a), (b), (c) and (d).
(C-848) Fig. 7.9.2 : Logic diagram of a synchronous decade counter

Ex. 7.9.1 : Design a synchronous counter for the sequence shown in Fig. P. 7.9.1(a). (May 06, 8 Marks)
Soln. :
Excitation table of JK FF :
(C-7840) Table P. 7.9.1(b) : Excitation table of JK FF
Circuit excitation table :
(C-7841) Table P. 7.9.1(c) : Circuit excitation table
K-maps for the J and K inputs of all the FFs and the corresponding simplified equations are shown in Fig. P. 7.9.1(b).
The logic diagram of the MOD-5 synchronous counter is then drawn from these equations.
Hence we require 3 flip-flops.
The excitation table of a T flip-flop is shown in Table P. 7.9.2(a).
(C-853)
Steps to be followed :
Step 1 : Write the circuit excitation table.
Step 2 : Write K-maps and obtain the simplified expressions.
Step 3 : Draw the logic diagram.
Step 3 : K-maps and simplifications :
K-maps for the TA, TB and TC inputs of the three FFs and the corresponding simplified equations are shown in Fig. P. 7.9.2(a).

University Questions.
Q. 2 Explain with a neat diagram working of 3-bit up-down synchronous counter. Draw necessary timing diagram. (May 10, 10 Marks)
The 3-bit up/down synchronous counter is designed using toggle flip-flops.
Let up counting take place with M = 0 and down counting take place for M = 1.
Step 1 : Write the circuit excitation table :
The circuit excitation table is shown in Table 7.10.1.
(C-6228) Table 7.10.1 : Excitation table for a 3-bit up/down synchronous counter
Step 2 : K-maps and simplified equations :
The K-maps and simplified equations for TC, TB and TA are shown in Fig. 7.10.1.
(C-855) Fig. 7.10.1 : K-maps and simplified expressions
Step 3 : Draw the logic diagram :
The logic diagram for a 3-bit synchronous up/down counter is shown in Fig. 7.10.2.
2. The total propagation delay between the instant of application of the clock edge and the instant at which the MSB output changes is equal to the sum of the propagation delay of one FF and that of one AND gate.
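The up/down behaviour designed above can be checked with a small Python model. It is only a sketch and assumes the usual simplified expressions TB = M̄·QA + M·Q̄A and TC = M̄·QA·QB + M·Q̄A·Q̄B (the exact K-map result is in Fig. 7.10.1); all names are illustrative.

# Minimal sketch: 3-bit synchronous up/down counter with mode input M.
# M = 0 -> up counting, M = 1 -> down counting (toggle flip-flops).

def step(state, m):
    qa, qb, qc = state
    ta = 1
    tb = qa if m == 0 else 1 - qa                        # M'.QA + M.QA'
    tc = (qa & qb) if m == 0 else ((1 - qa) & (1 - qb))  # M'.QA.QB + M.QA'.QB'
    return (qa ^ ta, qb ^ tb, qc ^ tc)

state = (0, 0, 0)
for _ in range(4):                     # count up for four pulses
    state = step(state, m=0)
print("after counting up  :", f"{state[2]}{state[1]}{state[0]}")   # 100
for _ in range(2):                     # then count down for two pulses
    state = step(state, m=1)
print("after counting down:", f"{state[2]}{state[1]}{state[0]}")   # 010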
SPPU : May 12, Dec. 12, May 14, Dec. 16, May 18.

University Questions.
Q. 1 What is the difference between synchronous counters and asynchronous counters ? (May 12, Dec. 12, 4 Marks)
Q. 3 Explain the difference between asynchronous and synchronous counter and convert SR flip-flop into T flip-flop. Show the design. (Dec. 16, 6 Marks)
Q. 4 Compare Asynchronous counter with Synchronous counter. Design MOD-11 up counter using IC 74191. (May 18, 6 Marks)

Sr. No. | Attribute | Asynchronous counter | Synchronous counter
1. | Circuit complexity | Logic circuit is simple. | With increase in number of states, the logic circuit becomes complicated.
2. | Connection pattern | Output of the preceding FF is connected to the clock of the next FF. | There is no connection between the output of the preceding FF and the CLK of the next one.
3. | Clock input | All the FFs are not clocked simultaneously. | All FFs receive the clock signal simultaneously.
4. | Frequency of operation | Low, due to the long propagation delay. | High, due to the shorter propagation delay.

7.11 Lock Out Condition :
SPPU : Dec. 08.
If the counter enters into an unused or unwanted state, then it is expected to return back to a desired state.
Instead, if the next state of an unwanted state is again an unwanted state as shown in Fig. 7.11.1(b), then the counter is said to have gone into the lockout condition.
(a) State diagram showing the desired sequence (b) State diagram showing the lockout condition
(C-831) Fig. 7.11.1

7.11.1 Bushless Circuit :
SPPU : Dec. 08.
University Questions.
Q. 1 How is the lock out condition avoided ? (Dec. 08, 3 Marks)
Definition :
The sequential circuit which enters into the lockout condition is called as the bushless circuit.
One way to avoid lockout is to ensure that the next state of the counter is its initial state if it enters into an undesired state. Thus the next state of each unwanted state should be the initial state as shown in Fig. 7.11.2(a).
(C-832) Fig. 7.11.2 : Ways to avoid lockout condition
Note that the next state of all the unused states (1, 3 and 5) is forced to be the initial state (0). This is essential in order to avoid the lockout condition.
The desired counting sequence is 0 – 2 – 4 – 6 – 7 – 0.
(C-833) Fig. P. 7.11.1(a) : State diagram of desired counter
(C-8053) Table P. 7.11.1(a)
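The idea can be written out as a small next-state table: every unused state is deliberately mapped to the initial state, so whichever state the counter powers up in, it reaches the desired sequence within one clock pulse. The dictionary below is an illustrative Python sketch for the 0–2–4–6–7–0 sequence of this example; the names are assumptions.

# Minimal sketch: avoiding lockout by forcing every unused state to the
# initial state.  Desired sequence: 0 - 2 - 4 - 6 - 7 - 0; unused states
# 1, 3 and 5 all return to state 0 on the next clock pulse.

next_state = {0: 2, 2: 4, 4: 6, 6: 7, 7: 0,   # desired sequence
              1: 0, 3: 0, 5: 0}               # unused states -> initial state

for start in range(8):                        # power up in any state
    state, steps = start, 0
    while state not in (0, 2, 4, 6, 7):       # walk until a valid state is reached
        state = next_state[state]
        steps += 1
    print(f"power-up state {start}: valid after {steps} clock pulse(s)")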
Fig. P. 7.12.1(a) : Given state diagram
Soln. :
Refer Figs. P. 7.12.1(b) to (d).
(C-834) Fig. P. 7.11.1(b) : K-maps and simplification
The logic circuit of the required counter is shown in Fig. P. 7.11.1(c).
(C-835) Fig. P. 7.11.1(c) : Logic circuit of the required counter

7.12 Bush Diagram :
If the counter is in the state “5” then it will return to the 0 state after two clock pulses, and if it is in the state “1” then it will return to the “0” state after three clock pulses.
Similarly, for the state diagram shown in Fig. P. 7.12.1(c), it will take only one clock pulse to bring the counter out of the lock-out condition.
(C-842) Fig. P. 7.12.1(d) : Entry from invalid to valid state after 1, 2 or 3 clock cycles

Ex. 7.12.2 : Design a synchronous counter from the state diagram shown in Fig. P. 7.12.2(a). Avoid lockout condition. Draw the bush diagram too.
(C-843) Fig. P. 7.12.2(a) : Given state diagram
Soln. :
0, 2, 4 and 5 are the unused states. They are forcibly terminated into 1, 3, 7 and 6 respectively.
Step 1 : Number of FFs and the type of FF :
Step 2 : Write the circuit excitation table :
Let us obtain the circuit excitation table from the bush diagram of Fig. P. 7.12.2(b).
The excitation table for a T flip-flop is shown in Table P. 7.12.2(a) and the circuit excitation table is shown in Table P. 7.12.2(b).
(C-6229) Table P. 7.12.2(b) : Circuit excitation table
Step 3 : K-maps and simplified expressions :
(C-845) Fig. P. 7.12.2(c) : K-maps and simplifications
Step 4 : Logic diagram :
(C-846) Fig. P. 7.12.2(d) : Logic diagram of the required counter

Review Questions
Q. 2 Why are ripple counters called “asynchronous” ?
Q. 3 What is the difference between synchronous and asynchronous counter ?
Q. 4 What is the advantage of a synchronous counter over the asynchronous counter ?
Q. 5 State the procedure for designing mod counters from N-bit ripple counter.
Q. 13 Explain the working of 4-bit ripple counter.
Q. 17 Draw the circuit for mod-12 counter. Explain the same with neat waveforms.
Q. 18 Compare synchronous and ripple counters.
Q. 19 Compare counters and registers.
Q. 20 Design a 3 bit synchronous counter using JK flip-flops.
Q. 21 What is lock out ? How is it avoided ?
Q. 22 Explain decade counter.
Chapter 8
Registers

Syllabus
Shift register types (SISO, SIPO, PISO & PIPO) & applications.

Chapter Contents
8.1 Introduction 8.7 Parallel In Serial Out Mode (PISO)
8.1 Introduction :
Definition :
To increase the storage capacity in terms of number of bits, we have to use a group of flip-flops. Such a group of flip-flops is known as a register. An n-bit register is capable of storing an “n bit” word.
Registers are classified on the basis of how the data bits are entered and taken out from a register. There are four possible modes as follows : serial in serial out (SISO), serial in parallel out (SIPO), parallel in serial out (PISO) and parallel in parallel out (PIPO).
(C-729(i))

8.2 Data Formats :
Logic diagram :
Operation :
Let us assume that the word to be stored is B3 B2 B1 B0 = 1 0 1 0.
On the arrival of the clock pulse, the outputs of all the D flip-flops will follow their respective inputs.
Q3 Q2 Q1 Q0 = B3 B2 B1 B0 = 1 0 1 0
Conclusions :
Some of the important conclusions from the discussion till now are as follows :
All the data bits are entered and taken out at the same instant of time.
3. Therefore this way of applying the input and taking the output is called as Parallel Input Parallel Output (PIPO) and the mode of operation is called as parallel shifting.

8.5 Shift Registers :
Definition :
The binary data in a register can be moved within the register from one flip-flop to the other, or outside it, with the application of clock pulses.
University Questions.
Q. 2 Draw and explain 4 bit SISO and SIPO shift register. Give applications of each. (May 18, 6 Marks)
Q. 3 Draw and explain SISO and PIPO type of shift register. Give application of each. (May 19, 6 Marks)

Sr. No. | Mode | Illustrative diagram | Comments
2. | Serial input serial output (serial shift left) | Refer Fig. B | Data bits shift from right to left by 1 position per clock.
3. | Serial input parallel output | Refer Fig. C | All output bits are made available simultaneously after 4 clock pulses.
5. | Parallel input parallel output | Refer Fig. E | All inputs are loaded simultaneously and are available at the output simultaneously.
(C-731) Fig. A

Principle :
Data bits shift from right to left by 1 position per clock cycle as shown in Fig. 8.5.1(a).
Logic diagram :
The serial input serial output type shift register with shift left mode is shown in Fig. 8.5.1(b).
Important note :
As soon as the next negative edge of the clock hits, FF-1 will set and the stored word changes to,
Q3 Q2 Q1 Q0 = 0 0 1 1
Apply the clock pulse. As soon as the third negative clock edge hits, FF-2 will be set and the output gets modified to, Q3 Q2 Q1 Q0 = 0 1 1 1.
Similarly with Din = 1, and with the fourth negative clock edge arriving, the stored word in the register is, Q3 Q2 Q1 Q0 = 1 1 1 1.
(C-738) Fig. 8.5.4 : Waveforms for the shift left operation

University Questions.
Q. 1 Draw and explain the circuit diagram of 3-bit register with Serial Right Shift. (May 05, 5 Marks)
Principle :
Data bits shift from left to right by 1 position per clock cycle.
Logic diagram :
The serial input serial output type shift register with shift right mode connects the output of each flip-flop to the D input of the next flip-flop i.e. D2 and so on.
Operation :
Before application of the clock signal let Q3 Q2 Q1 Q0 = 0 0 0 0 and apply the LSB bit of the number to be entered to Din. So Din = D3 = 1.
Apply the clock. On the first falling edge of the clock, FF-3 is set, and the stored word in the register is,
Q3 Q2 Q1 Q0 = 1 0 0 0
On the second falling edge, the next bit enters and the stored word becomes,
Q3 Q2 Q1 Q0 = 1 1 0 0
(C-744) Table 8.5.3 : Summary of shift right operation
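The shift-right loading just described can be mimicked with a short Python sketch (illustrative only, not from the text): each clock pulse pushes the new Din bit into Q3 while every stored bit moves one place towards Q0, so a 4-bit word is loaded in four clock pulses.

# Minimal sketch: 4-bit serial-in serial-out (SISO) shift-right register.
# On every falling clock edge the new bit enters at Q3 and the stored
# bits move one position to the right (towards Q0).

def shift_right(register, din):
    q3, q2, q1, q0 = register
    return (din, q3, q2, q1)              # Din -> Q3 -> Q2 -> Q1 -> Q0

register = (0, 0, 0, 0)                   # Q3 Q2 Q1 Q0 before the clock
for din in (1, 1, 0, 1):                  # a 4-bit word entered LSB first
    register = shift_right(register, din)
    print("Q3 Q2 Q1 Q0 =", *register)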
Q. 2 Draw and explain SISO and PIPO type of shift register. Give application of each. (May 19, 6 Marks)
The transmission of data from one place to the other takes place in serial manner as shown in Fig. 8.5.9. The serial shifting of data transmits one bit at a time per clock cycle.
It takes a longer time for serial transmission, because the time required to transmit one bit is equal to the time corresponding to one clock cycle.
(C-746) Fig. 8.5.9 : Application of serial operation

University Questions.
Q. 1 Draw and explain 4 bit SISO and SIPO shift register. Give applications of each. (May 18, 6 Marks)
Principle :
In this operation the data is entered serially and taken out in parallel as shown in the following figure.
As soon as the loading is complete, and all the flip-flops contain their required data, the outputs are enabled so that all the loaded data is made available over all the output lines simultaneously.
The number of clock cycles required to load a four bit word is 4. Hence the speed of operation of the SIPO mode is the same as that of the SISO mode.

University Questions.
Q. 1 Draw and explain the circuit diagram of 3 bit register with the following facility :
Q. 2 Draw and explain a parallel in serial out 4-bit shift register. Draw necessary timing diagram. (May 10, Dec. 17, 6 Marks)
Principle :
In this mode, the bits are entered in parallel i.e. simultaneously into a shift register as shown below.
(C-731)
Q. 3 Draw and explain SISO and PIPO type of shift register. Give application of each. (May 2019, 6 Marks)
(C-748) Fig. 8.7.1 : Parallel in serial out shift register
Each flip-flop output is connected to the next one via a combinational circuit.
Load mode :
When the shift / load line is low (0), the AND gates 2, 4 and 6 become active. They will pass the B1, B2 and B3 bits to the inputs D1, D2 and D3 respectively. As soon as a negative clock edge is applied, the input word gets loaded into the register.
Shift mode :
When the shift / load line is high (1), the AND gates 2, 4, 6 become inactive. Hence the parallel loading of the data is disabled and the data is shifted out serially.

Principle :
The binary input word B0, B1, B2, B3 is applied through inputs D0, D1, D2 and D3 respectively of the four flip-flops.
(C-749) Fig. 8.8.1(b) : Parallel in parallel out shift register
Applications of PIPO shift register :
PIPO shift registers can be used as :
1. A temporary storage device.
2. As a delay element.
Hence if we want to use the shift register to multiply and divide the given binary number, then we should be able to move the data in either the left or right direction as and when we want. Such a register is called as a bi-directional register.

8.9 Bidirectional Shift Register :
A four bit bi-directional shift register is shown in Fig. 8.9.1.
SPPU : May 06, Dec. 11, Dec. 13.
University Questions.
Q. 1 Draw and explain 4-bit shift register having shift left and shift right facilities. Explain any one application of such register. (May 06, 6 Marks)
Q. 2 Draw and explain 4 bit bidirectional shift register. (Dec. 11, 8 Marks)
Q. 3 Explain with a neat diagram working of 3 bit bidirectional shift register. (Dec. 13, 6 Marks)

There are two serial inputs, namely the serial right shift data input DR and the serial left shift data input DL, alongwith a Mode control input (M).
Operation :
With M = 1 : Shift right operation
If M = 1, then the AND gates 1, 3, 5 and 7 are enabled and the data at DR is shifted right bit by bit on application of the clock pulses. Thus with M = 1 we get the serial right shift operation.
With M = 0 : Shift left operation
When the mode control M is connected to “0” then the AND gates 2, 4, 6 and 8 are enabled while 1, 3, 5 and 7 are disabled. Therefore the data at DL (shift left input) is shifted left bit by bit from FF-0 to FF-3 on application of the clock pulses. Thus with M = 0 we get the serial left shift operation.

Shifting a number left by one position is equivalent to multiplying the original number by 2, and shifting it right by one position is equivalent to dividing the original number by 2. This is illustrated below.
Let a four bit number Q3 Q2 Q1 Q0 = 0010 = (2)10 be existing in a shift register.
Now with a 0 applied at the input, if we shift these bits left by one position, we get Q3 Q2 Q1 Q0 = 0100 = (4)10. Thus the shift left is equivalent to multiplying by 2.
If instead we shift the original number right by one position, we get Q3 Q2 Q1 Q0 = 0001 = (1)10. Thus shifting right is equivalent to dividing by 2.
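The same arithmetic view of shifting can be shown in two lines of Python (an illustrative sketch using the 0010 = 2 example from the text):

# Minimal sketch: shifting a stored word left multiplies it by 2,
# shifting it right divides it by 2 (0010 = 2 used as in the text).

value = 0b0010                      # Q3 Q2 Q1 Q0 = 0010 = (2)10
left = (value << 1) & 0b1111        # shift left, a 0 enters at Q0
right = value >> 1                  # shift right, a 0 enters at Q3
print(f"{value:04b} << 1 -> {left:04b} = {left}")    # 0100 = 4
print(f"{value:04b} >> 1 -> {right:04b} = {right}")  # 0001 = 1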
Logic diagram :
This circuit is the same as the 4 bit bi-directional shift register discussed in the previous section. The only modification is the inclusion of an inverter between the J and K inputs of each flip-flop.
With M = 1 the shift right operation will take place as discussed in the previous section and with M = 0 the shift left operation is going to take place.

SPPU : May 06, May 10, May 18.
Some of the common applications of a shift register are :
1. For temporary data storage.
4. Serial to parallel converter.
5. Parallel to serial converter.
6. Ring counter.
7. Twisted ring counter or Johnson counter.
8. Serial data transmission.
We have already seen the use of a shift register for temporary storage of data and for multiplication or division.

SPPU : Dec. 08.
University Questions.
Q. 1 Explain how shift registers are used as serial to parallel converters ? (Dec. 08, 3 Marks)
The data communication between two computers takes place in a serial manner.

SPPU : Dec. 08, May 17.
University Questions.
Q. 1 Write short note on ring counter.
Fig. 8.10.1 shows a typical application of shift registers called the Ring Counter.
The connections reveal that they are similar to the connections for the shift right operation, except for one connection.
(C-871) Fig. 8.10.1 : A four bit ring counter
Operation :
Initially a low clear (CLR) pulse is applied to all the flip-flops. Hence FF-3, FF-2 and FF-1 will reset but FF-0 will be preset to 1.
Q3 Q2 Q1 Q0 = 0 0 0 1.
At the second falling edge of the clock, only FF-2 will be set because D2 = Q1 = 1. FF-1 will reset since D1 = Q0 = 0. There is no change in the status of FF-3 and FF-0.
Q3 Q2 Q1 Q0 = 0 1 0 0.
At the next falling edge the 1 moves on to FF-3 :
Q3 Q2 Q1 Q0 = 1 0 0 0.
The number of output states for a ring counter will always be equal to the number of flip-flops. So for a four bit ring counter there are four distinct states.
The operation of a four bit ring counter is summarized in Table 8.10.1.
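A quick Python sketch (illustrative only) reproduces the circulating-1 behaviour of the four bit ring counter just described: each D input is driven by the Q output of the previous stage and Q3 is fed back to D0, so the single 1 moves one flip-flop per clock pulse and the sequence repeats after four pulses.

# Minimal sketch: 4-bit ring counter (a single 1 circulates).

state = [0, 0, 0, 1]                      # Q3 Q2 Q1 Q0 after the CLR pulse
print("initially     : Q3 Q2 Q1 Q0 =", *state)
for pulse in range(4):
    state = state[1:] + state[:1]         # the 1 moves to the next flip-flop
    print(f"after pulse {pulse + 1} : Q3 Q2 Q1 Q0 =", *state)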
ch
edge triggered.
Logic diagram :
ns e
––
if Q3 is connected to K0 and Q is connected to J0 then
3
io dg
the circuit is called as twisted ring counter or Johnson’s
counter.
(C-1360) Fig. 8.10.2 : Waveforms of a four bit ring counter
The Johnson’s counter is shown in Fig. 8.10.4.
at le
Applications of ring counter :
flops, as shown in Fig. 8.10.3. The clear inputs of all the flip-flops are connected
Operation :
But there is no change in the status of any other (C-6295) Table 8.10.2 : Summary of operation
of Johnson’s counter
flip-flop.
Q3 Q 2 Q 1 Q 0 = 0 0 0 1
ns e
Before the second negative going clock edge, Q3 = 0
io dg
––
and Q = 1. Hence J0 = 1 and K0 = 1. Also Q0 = 1.
3
Hence J1 = 1.
at le
Hence as soon as the second falling clock edge arrives,
Note that there are 8 distinct states of output.
Similarly after the third clock pulse, the outputs are, in Fig. 8.10.5.
Q3 Q2 Q1 Q0 = 0 1 1 1.
ch
Q3 Q2 Q1 Q0 = 1 1 1 1.
Te
––
Note that now Q = 0 i.e. J0 = 0 and K0 = 1.
3
But the outputs of the other flip-flops will remain (C-875) Fig. 8.10.5 : Waveforms of Johnson counter
Soln. :
This operation will continue till we reach the all zero
Since there are 5 bits in the given initial state, we have
output state. (i.e. Q3 Q2 Q1 Q0 = 0 0 0 0). to use 5 flip flops as shown in Fig. P. 8.10.1.
The operation of Johnson’s counter is summarised in When we apply a low going clear (CLR) pulse, then flip
flops 0, 1, B and 3 will be preset to 1 output while flip
Table 8.10.2.
When we apply a low going clear (CLR) pulse, then flip-flops 0, 1 and 3 will be preset to 1 output while flip-flops 2 and 4 are reset to 0 output.
Q4 Q3 Q2 Q1 Q0 = 0 1 0 1 1 ...Initially
The remaining states are as shown in Table P. 8.10.1.
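Returning to the 4-bit Johnson (twisted ring) counter of Fig. 8.10.4, its 2N = 8 state sequence can be reproduced with the following Python sketch (illustrative only): the complement of Q3 is fed back to the first flip-flop, which is what turns the ring counter into a Johnson counter.

# Minimal sketch: 4-bit Johnson (twisted ring) counter.

q3 = q2 = q1 = q0 = 0                     # all flip-flops cleared
for pulse in range(8):
    q3, q2, q1, q0 = q2, q1, q0, 1 - q3   # D3=Q2, D2=Q1, D1=Q0, D0=Q3'
    print(f"after pulse {pulse + 1}: Q3 Q2 Q1 Q0 = {q3} {q2} {q1} {q0}")

It prints 0001, 0011, 0111, 1111, 1110, 1100, 1000, 0000 — the 8 distinct states summarised in Table 8.10.2.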
The operation of a three bit ring counter is summarized in Table P. 8.10.4.
(C-3278) Table P. 8.10.4 : Summary of operation of a ring counter

Ex. 8.10.2 : Explain the ring counter design for the initial condition 10110. (May 11, 4 Marks)
Soln. :
Similar to the previous example.

Ex. 8.10.3 : Explain the Johnson’s counter design for initial state 0110. From initial state explain and draw all possible states.
Soln. :
Refer Fig. P. 8.10.3.
Q3 Q2 Q1 Q0 = 0 1 1 0 …initially

Since there are 4 bits in the given initial state, we have to use 4 flip-flops as shown in Fig. P. 8.10.5.
When we apply a low going clear (CLR) pulse, then flip-flops 2 and 3 will be preset to 1 output while flip-flops 0 and 1 are reset to 0 output.
Q3 Q2 Q1 Q0 = 1100 …initially
(C-7148) Fig. P. 8.10.5 : A four bit ring counter
(C-7149) Table P. 8.10.5 : Ring counter states

3-bit Ring Counter :
The 3 bit ring counter is as shown in Fig. P. 8.10.6(a).
(C-3277) Fig. P. 8.10.6(a) : A three bit ring counter
(C-3278) Table P. 8.10.6(a) : Summary of operation of a ring counter
Refer Sections 8.10.3 and 8.10.4 for Ring and Twisted Ring counters. The required Johnson’s / Twisted Ring counter is shown in the corresponding figure.
Flip-flops 1 and 2 are reset to 0 while flip-flop 0 is preset to 1 initially.
Q2 Q1 Q0 = 0 0 1 …initially
The other possible states of this Johnson’s counter are listed in Table P. 8.10.6(b).
Table P. 8.10.6(b) : All possible states of a Johnson’s counter

The PRBS generator consists of a number of flip-flops and a combinational circuit which provides a suitable feedback.
Applications of PRBS generator :
Since the sequence produced is random, the PRBS generator is also called as a Pseudo Noise (PN) generator and the generated signal is called noise.
This noise can be used to test the noise immunity of the system under test.
The PRBS generator is an important part of a data encryption system. Such a system is required to protect the data from data hackers.
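A PRBS generator of this kind is usually built as a linear feedback shift register (LFSR). The Python sketch below is illustrative only: the tap positions (the last two stages) and the seed are assumptions chosen so that the 4-bit register produces a maximal-length sequence of 2^4 − 1 = 15 bits before repeating.

# Minimal sketch: PRBS generator built as a 4-bit LFSR.
# Feedback = XOR of the last two stages; a non-zero seed is required.

state = [1, 0, 0, 0]                      # seed (must not be all zeros)
sequence = []
for _ in range(15):
    sequence.append(state[-1])            # serial output bit
    feedback = state[-1] ^ state[-2]      # combinational feedback logic
    state = [feedback] + state[:-1]       # shift, feedback enters at the input
print("PRBS output:", "".join(str(b) for b in sequence))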
8.10.5 Sequence Generator :
Definition :
A sequence generator is a sequential circuit which generates a predetermined sequence of bits in synchronism with the clock.
Another important application of a shift register is the pseudo random generator. It is used for generating random sequences.
A sequence detector is a synchronous FSM which is designed to detect a specific sequence of bits arriving at its input.
The detector will produce a logic “1” at its output (Y) as soon as it detects the specified sequence of bits. This concept is illustrated in Fig. 8.11.1(b).
As shown in Fig. 8.11.1(b), in the long data string at the input (X), the desired input sequence may sometimes get overlapped.
Fig. 8.11.1(a) shows the block diagram of a sequence detector.

Review Questions
Q. 1 What is the function of a shift register ? Give its applications.
Q. 2 State the types of shift registers.
Q. 3 With a neat diagram explain the operation of a 4-bit shift register.
Q. 5 With a neat diagram explain the operation of 4-bit Serial-In-Parallel-Out (SIPO) register. Give the truth table and timing diagram.
Q. 6 With a neat diagram explain the operation of 4-bit Parallel-In-Serial-Out (PISO) register. Give the truth table and timing waveforms.
Q. 7 What is meant by “Universal shift register” ?
Q. 8 What do you understand by a bi-directional shift register ? Explain its operation.
Q. 9 List any one shift register IC. Sketch its schematic diagram and pin configuration. Give its specifications.
Q. 10 Explain the bi-directional shift register using D flip-flops.
Chapter 9
Computer Organization & Processor
Syllabus
Pu K
Computer organization and computer architecture, Organization, Functions and types of computer
units - CPU (Typical organization, Functions, Types), Memory (Types and their uses in computer), IO
(Types and functions) and system bus (Address, data and control, Typical control lines, Multiple-Bus
Hierarchies); Von Neumann and Harvard architecture; Instruction cycle.
Processor : Single bus organization of CPU; ALU (ALU signals, functions and types); Register
(Types and functions of user visible, control and status registers such as general purpose, address
registers, data registers, flags, PC, MAR, MBR, IR) & control unit (Control signals and typical organization
of hard wired and microprogrammed CU). Micro Operations (Fetch, Indirect, Execute, Interrupt) and
control signals for these micro operations.
Case Study : 8086 processor, PCI bus.
Chapter Contents
9.1 Introduction 9.5 CPU Architecture and Register Organization
9.2 Basic Organization of Computer and Block 9.6 Instruction, Micro-instructions and
Level Description of Functional Units Micro-operations : Interpretation and Sequencing
9.3 Von Neumann and Harvard Architecture 9.7 Control Unit : Hardwired Control Unit Design
Methods
9.4 Basic Instruction Cycle 9.8 Control Unit : Soft Wired (Micro programmed)
Control Unit Design Methods
Those attributes of a computer that are necessary to be known to a system designer or a programmer are called as the architectural features of the computer.
A computer consists of a processor, memory to store the programs and data, and the Input / Output (I/O) devices. The functions of these are explained below :
The manufacturers declare certain things about the processor to the system designer and the programmer using the datasheets for their processor chips.
Those attributes of a computer, or moreover of the processor, that are just used for the designing purposes form part of the computer organization.

Sr. No. | Computer Architecture | Computer Organization
1. | It refers to those attributes of a system that are visible to the programmer. | It refers to the implementation of these attributes, which is not known to the user.
2. | Instruction set, number of bits used for data representation, addressing techniques etc. form the part of computer architecture. | Control signals, interfaces, memory technology etc. form the part of the computer organization.
3. | For example, is there a multiply instruction ? | For example, is there a dedicated hardware multiply unit or is it done by repeated addition ?

1. The components of a computer are the central processing unit, Input and Output (I/O) devices and memory.
2. The three units are connected with the help of the system bus.
3. Memory is used to store code (programs) and data. It can be of various kinds, like semiconductor memory using ICs, magnetic memory or optical memory etc.
4. I/O devices are used to accept an input or give an output by the CPU. There are various input devices like keyboard, mouse, scanner; and various output devices like CRT, printer etc.
5. The CPU is further divided into three units as shown in the Fig. 9.2.2.
6. The components of the Central Processing Unit (CPU) are the Arithmetic and Logic Unit (ALU), Control Unit (CU) and CPU Registers, which are also connected with the internal buses.
7. The ALU is used to perform arithmetic operations like addition, subtraction etc. and logical operations such as AND, OR etc.
8. CPU Registers are used to store the data temporarily in the CPU to save memory access time.
9. The CU is further divided in three parts as shown in Fig. 9.2.3.
11. The control memory stores the microinstructions; the control unit loads each microinstruction into the control unit register and the sequencing logic gives these signals in a proper sequence to execute an instruction.

In Fig. 9.2.4, we can see that the computer is divided into three main components, namely the data storage facility, the data movement facility and the data processing facility. These are the different functions a computer can perform.

There are two memory interfacing architectures for a processor, depending on the processor design. The first one is called the Von Neumann architecture and the later one the Harvard architecture.

9.3.1 Von Neumann Architecture :
The name is derived from the mathematician and early computer scientist John Von Neumann.
The computer has a common memory for data as well as code to be executed.
The instructions from the programs are taken by the processor, decoded and executed accordingly. The data is also stored in the memory.
The data is taken from memory and the operation is performed on that data, and the results are stored back in the memory. In some cases the input to an operation and the result may also be from input and output devices.
The processor needs two clock cycles to complete an instruction, first to get the instruction and second to get the data.
Memory in the Von Neumann system has a special organization wherein the data and instructions are stored in the same memory. We will see about this in the subsequent section.
This system has three units : CPU, Memory and I/O devices. The CPU has two units, the Arithmetic Unit and the Control unit. Let us discuss these units in detail.
1. Input unit :
A computer accepts inputs from the user through these devices i.e. input devices.
The commonly used input devices are keyboard and mouse. Besides that, there are devices like joystick, camera, scanner etc. which are also input devices.
2. Output unit :
The result is given back by the computer to the user through an output device. Input devices and output devices are also called as human interface devices.
The mainly used output devices are monitor and printer. But there are many other output devices like plotter, speaker etc.
3. Arithmetic and Logic Unit (ALU) :
Arithmetic or logic operations like multiplication, addition, division, AND, OR, EXOR etc. are performed by the ALU.
Operands are brought into the ALU, where the necessary operation is performed.
4. Control unit :
The control unit, as we know, is the main unit that controls all the operations of the system, inside and outside the processor. It directs the computer to perform the operation according to the instruction given to it.
5. Memory unit :
Memory is used to store the programs and data for the computer.

Key features of a Von Neumann machine :
Each location of the memory has a unique address i.e. no two locations have the same memory address.
Execution of instructions in a Von Neumann machine is sequential (unless altered by the program itself) from one instruction to the next.
Detailed structure of the CPU :
The block diagram of the computer proposed by Von Neumann has a minimal number of registers along with the above blocks. This computer has a small set of instructions and an instruction was allowed to contain only one operand address.
Fig. 9.3.2 gives the detailed structure of the IAS CPU. The structure shown in Fig. 9.3.2 consists of the following registers.
Accumulator (AC) :
It normally provides one of the operands to the ALU and stores the result.
Data Register (DR) :
It acts as buffer storage between the CPU and the main memory or I/O devices.
Program Counter (PC) :
It always contains the address of the next instruction to be executed.
The PC (Program counter) always points to the first instruction of the program when the computer starts.
The CPU fetches the instruction pointed to by the PC. The PC contents are automatically incremented to point to the next instruction.
Fig. 9.3.3 : Instruction pointed by PC is fetched by the CPU for execution. Subsequently PC is made to point to the next instruction
Instruction Register (IR) :
It holds the current instruction i.e. the opcode and operand of the instruction to be executed.
Memory Address Register (MAR) :
The address from which the data or instruction is to be fetched is provided by the processor through the MAR. It is also used to forward the address of the memory location where data is to be stored.
The instruction specifies what action the CPU has to take. The CPU interprets the instruction and performs the required action. The action could be :
1. Data transfer between CPU and memory.
2. Data transfer between CPU and I/O.
3. The CPU may perform an arithmetic or logic operation on data.

Fetching an instruction :
The fetch operation makes use of the following registers :
1. MAR (Memory Address Register) : It provides the address of the memory location to be accessed.
2. DR (Data Register) : It acts as buffer storage between the main memory and the CPU.
The function and operation of these registers will be understood by the example below.
The instruction to be executed is brought from the memory to the CPU through the following steps :
1. The address of the instruction is transferred from the PC to the MAR.
2. The MAR puts this address on the address bus for selection of the required location of the memory.
3. The Control Unit generates the RD (read) control signal to perform the read operation on memory. The required instruction is given on the data bus by the memory.
9.3.2 Harvard Architecture :
Fig. 9.3.4 shows the connection for the Harvard computer architecture.
Fig. 9.3.4

The instruction cycle is a representation of the states that the computer or the microprocessor goes through while executing an instruction.
Fig. 9.4.1 shows the basic instruction cycle. A program comprises a huge number of instructions, and the cycle repeats until it reaches the halt instruction.
There is one more state i.e. the Interrupt cycle.
The fetch cycle comprises of the following operations :
1. The Program Counter (PC) holds the address of the next instruction to fetch; hence the CPU (Processor) fetches the instruction from the memory location pointed to by the PC. This is done by providing the value of the PC to the MAR and giving the Read control signal to the memory. On this the memory provides the value at the given address (which is the instruction) to the MBR.
2. The PC value has to be incremented to point to the next instruction (sometimes the value of the PC may have to be completely changed in case of some special instructions called branching instructions).
3. The instruction is loaded into the Instruction Register (IR) from the MBR.
4. Finally the processor interprets or decodes the instruction. The processor performs the required operations in the execute cycle.
In the execute cycle the operation asked to be performed by the instruction is done. It may comprise of one or more of the following operations :

The Interrupt cycle, as discussed earlier, is added to the basic instruction cycle. During this cycle the processor checks for an interrupt, and if one is present and enabled, services the same.
If no interrupt is present then it fetches the next instruction; else, if an interrupt is pending, then it performs the following operations :
1. Suspend the execution of the current program.
2. Save the context of the current program under execution.
3. Set the PC value to the start address of the interrupt handler routine, also called the interrupt service routine. An interrupt service routine is a small program which, when executed, services the interrupting source.
4. Process the interrupt service routine (ISR) and then,
5. Restore the context and continue execution of the interrupted program.
Thus the complete basic instruction cycle with Interrupts can be as shown in the Fig. 9.4.2.
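The fetch–execute–interrupt loop can be summarised in a few lines of Python. The sketch below is illustrative only; the memory contents, instruction names and the interrupt flag are assumptions used to show the order of the cycles, not the book's notation.

# Minimal sketch of the basic instruction cycle with interrupt checking.

memory = {0: "LOAD", 1: "ADD", 2: "STORE", 3: "HALT"}
pc, interrupt_pending = 0, False

while True:
    instruction = memory[pc]          # fetch cycle: memory[PC] -> IR
    pc += 1                           # PC incremented to the next instruction
    if instruction == "HALT":
        break
    print("execute:", instruction)    # execute cycle
    if interrupt_pending:             # interrupt cycle: checked after execution
        print("save context, PC <- ISR address, run ISR, restore context")
        interrupt_pending = False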
Fig. 9.4.2 : Complete basic instruction cycle
You will notice in Fig. 9.4.2 that the interrupts are checked for after the execute cycle, and processed if enabled and present; else, the next instruction is fetched.
The detailed instruction cycle is shown in Fig. 9.4.3.
In Fig. 9.4.3, there are some states drawn on the upper Again to fetch the operands, we require the buses.
side, while some on the lower side. After fetching the operand, if more operands are
The ones on the upper side are the operations carried required for multiple operand instructions, then the next
out on the buses or are external operations, while the state is again calculate the operand address i.e. the
ones at the lower level are the operations carried out address of the next operand.
inside the CPU or are internal operations. Once all the operands are fetched, the data operation is
The instruction cycle begins from the “Instruction carried out as per the operation indicated in the
address calculation” state, wherein the address of the instruction.
next instruction is calculated or the value of the PC is Now for the result storage again the address of operand
updated. is calculated and the result is stored in the specified
Then the instruction is fetched, which requires the location of the memory.
operation on the buses. In case of multiple operands again the calculation and
The instruction fetched is then decoded. Until this state, storage process for the operand continues until all the
it is the fetch cycle.
operands are stored.
In the execute cycle, the operand address is calculated
Now begins the interrupt cycle, wherein the first step is
and the operands are fetched from the calculated
address. to check the presence of an enabled interrupt.
If there is none, then the next state as seen in the It consists of PIPO (Parallel in parallel out) register as
Fig. 9.4.3 is the calculation of next instruction address shown in Fig. 9.5.2.
i.e. executes the next sequential instruction. This section is also called as scratch pad memory. It
stores data and address of memory.
But in case the interrupt is present and enabled then the
servicing of the same is done as discussed earlier in this The register organization affects the length of program,
the execution time of program and simplification of the
section.
program.
In the Fig. 9.4.3, you will also notice that there are two
To achieve better performance, the number of registers
paths from the end of the previous instruction.
should be large.
The one that goes to the state “Instruction address
The architecture of microcomputer depends upon the
calculation” for the next instruction; and the one that
number and type of the registers used in
goes to the “Operand address calculation” for vector
microprocessor.
instructions.
It consists 8-bit registers or 16 bit registers.
Vector instructions are those instructions wherein the
The register section varies from microprocessor to
operation is same but the data on which the operation
is to be performed in a huge block of data or an array of
data.
microprocessor.
The registers are used to store the data and address.
These registers are classified as :
Hence in the second case, the instruction is already
fetched and decoded i.e. the operation is already know, o Temporary registers
and the operation is to be performed on a block of data. General purpose registers
bl no
o
After completing the operation on one set of operands, o Special purpose registers.
the CPU returns to the next operand address calculation
1. Register section :
state, wherein it calculates and fetches the next
operand.
Fig 9.5.1 shows the architecture of microprocessor. This Fig. 9.5.2 : 8 bit register
It operates with reference to clock signal.
instruction consists of a series of steps that make up the
This accepts information from instruction decoder and
instruction cycle i.e. fetch, decode, etc. Each of these
generates micro steps to perform it.
steps is, in turn, made up of a smaller series of steps
In addition to this, the block accepts clock inputs,
called micro-operations or micro-instructions.
performs sequencing and synchronising operations.
Control signals are issued to perform these
The synchronization is required for communication
between microprocessor and peripheral devices.
To implement this it uses different status and control
signals.
micro-operations and micro-instructions are these
control signals.
Fig. 9.6.1 shows the structure of the CPU with these
The basic operation of a microprocessor is regulated by micro-instructions or the control signals.
this unit. It also shows those registers as already seen in
It synchronizes all the data transfers. section 9.2 like PC, MAR, MBR, etc.
This unit takes appropriate actions in response to There are some registers like the register ‘Y’ to provide
external control signals. one of the operand to the ALU as shown in the
Fig. 9.6.1.
9.5.1 Interrupt Control :
Another register is the ‘Z’ register, which is used to store To perform this operation the control signals given are
the result given by the ALU. PCout and MARin.
A “temp” register or the temporary register to store This will make the PC register give out its data and the
some temporary data. MAR register accept this data.
The set of registers R0 to Rn (the value of ‘n’ depends on Also the memory is indicated to perform a read
the registers in the CPU) for general purpose operation from memory hence the signal “Read”.
operations. To increment the value of PC, the various operations are
There is also an instruction decoder for decoding the performed on ALU signals i.e. Clear Y, Set Cin, Add, Zin.
instructions stored in the instruction register and in turn The ‘Y’ register is cleared and the carry flag is set. Now
provides the micro-instructions or the control signals for when the ALU is said to perform the “ADD” operation it
the resources inside and outside the CPU.
will add the contents of the ‘Y’ register, carry flag and
The ALU also gets the control signals from this decoder the contents of the internal data bus.
indicating the operation to be performed like Add, Sub, The contents of the internal data bus are nothing but
and AND etc. the value given out by the PC register.
The ALU also has an extra input called as Cin i.e. the
carry input as required for adder.
To execute any instruction as seen earlier it is to be
Hence the PC is added with ‘1’ i.e. the carry flag and
hence incremented value of PC is given to the ‘Z’
register.
divided into three cycles viz. fetch, execute and interrupt cycles.
The execute cycle will differ based on the operation to be performed.
The fetch cycle is concerned with fetching (i.e. reading from memory) the instruction.
It involves the following operations in different t-states (a t-state is a time state and is equal to one clock pulse) and hence the mentioned microinstructions in Table 9.6.1.
Table 9.6.1 : Microinstructions for the fetch cycle
In the second clock pulse the CPU has to wait for the memory operation, but in the same time it can transfer the incremented value of the PC (from the ‘Z’ register) back to the PC.
Only one register can put its data on the internal bus in a given clock pulse, but as many as required can accept the data.
In the final t-state, the contents received from the memory i.e. the instruction are transferred to their correct place i.e. the instruction register. This is done by the control signals namely MBRout and IRin. This also completes the entire fetch operation of the instruction.
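The fetch cycle can also be written out as data, one entry per t-state, which is essentially what Table 9.6.1 tabulates. The Python sketch below lists the t-states and control signals exactly as used in the fetch cycle of the microprogram examples later in this section; the data-structure form itself is only an illustration.

# Minimal sketch: the fetch cycle as t-states with control signals.

fetch_cycle = [
    # (t-state, register-transfer operation, control signals issued)
    ("T1", "PC -> MAR",          ["PCout", "MARin", "Read",
                                  "Clear Y", "Set Cin", "Add", "Zin"]),
    ("T2", "M -> MBR, PC <- Z",  ["Zout", "PCin", "Wait for memory read"]),
    ("T3", "MBR -> IR",          ["MBRout", "IRin"]),
]

for t_state, operation, signals in fetch_cycle:
    print(f"{t_state}: {operation:<18} {', '.join(signals)}")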
Operation Microinstructions
Table 9.6.2 : Microinstructions for the execute cycle of direct cannot be given simultaneously on the data bus in the
addressed mode of operand access
same t-state.
Operation Microinstructions And the contents of memory location with the address
T1 IR MAR IRout(address), MARin, Read, ‘X’ are already put on the data bus in the third t-state.
Clear Cin The fourth t-state is thus required to transfer the data
T2 M MBR R1out, Yin, Wait for memory from register ‘Z’ to register R1 using the signals
Zout, R1in.
read cycle
Another execute cycle we will be studying in this
ns e
T3 MBRout, Add, Zin
MBR + R1 R1 sub-section is for the indirect addressed operand. In this
T4 Zout, R1in
io dg
case, the address given in the instruction is the memory
In this case of direct addressing mode, the address of location that contains the address of the operand.
the memory operand is in the instruction itself.
Table 9.6.3 shows the micro-operations required for
The instruction as we have seen in the fetch cycle
such an execute cycle for an example instruction ADD
at le
reaches the IR register.
addition operation.
Operation Microinstructions
Since the instruction expects addition of the register ‘R1’
ch
and the data at memory location with address ‘X’, the T1 IR MAR IRout(address), MARin, Read,
Clear Cin
contents of register ‘R1’ are transferred to the
‘Y’ register, which is one of the operands for any ALU T2 M MBR R1out, Yin, Wait for memory
read cycle
Te
operation.
T3 MBR MAR MBRout(address), MARin,
To perform this transfer operation the control signals
Read
given are R1out and Yin.
T4 M MBR Wait for memory read cycle
Also by the end of the second t-state, the data operand
T5 MBRout, Add, Zin
required from the memory will be available in the MBR
MBR + R1 R1
register. T6 Zout, R1in
In the third t-state the contents of the MBR, which is the 9.6.3 Interrupt Cycle :
content of memory location with the address ‘X’, is
It is concerned to perform the test for any pending
placed on the internal data bus and the ALU is indicated
interrupts at the end of every instruction execution and
to perform the addition operation.
if an interrupt occurs.
It adds the contents of the ‘Y’ register and the contents
It involves the different micro-operations for various
of the internal data bus, and the result is given to the ‘Z’
t-states as shown in Table 9.6.4 points to the top of the
register. stack.
An extra t-state is required to send the data from the This stack is used to store the return address of the
‘Z’ register to the register R1, as seen earlier two data interrupted program.
Table 9.6.4 : Microinstructions for the interrupt cycle T-state Operation Microinstructions
T4 ISR address PC ISR address out, PCin (new SUB, Zin,
address), Wait for memory T8 SP SP – 1 Zout, SPin, MARin
write cycle T9 PC MDR PCout, MDRin, WRITE
The control signals are to be generated using the T10 MDR [SP] Wait for mem access
control unit. The design of this control unit can be done T11 PC ISR addr PCin ISR addr out
in two ways namely: Hardwired Control Unit and
Microprogrammed Control Unit. We will see these two
methods in the subsequent sections.
3. Write a microprogram for the instruction:
MOV R3 , [R4] OR LOAD R3 , [R4]
T-state Operation Microinstructions
9.6.4 Examples of Microprograms :
T1 PC MAR PCout, MARin, Read,
1. Write a microprograms for the instruction : MOV R3, R4 Clear y, Set Cin, Add, Zin
PC PC + 1 T6 MDR R3 MDRout , R3 in
T3 MBR IR MBRout, IRin T7 Check for intr Assumption enabled intr
pending
T4 R3 R4 R4 out, R3 in
CLRX, SETC, SPout,
T6 SP SP – 1 Zout, SPin, MARin T11 PC ISR addr PCin ISR addr out
T7 PC MDR PCout, MDRin, WRITE 4. Write a microprogram for the instruction : ADD R3, [R4]
T9 PC ISR addr PCin ISR addr out T1 PC MAR PCout, MARin, Read, Clear
y, Set Cin, Add, Zin
2. Write a microprogram for the instruction : ADD R3, R4
T2 M MBR Zout, PCin, Wait for memory
T-state Operation Microinstructions PC PC + 1 fetch cycle
T1 PC MAR PCout, MARin, Read,
T3 MBR IR MBRout, IRin
Clear y, Set Cin, Add, Zin
T4 R4 MAR R4 out, MARin, READ, CLRC
T2 M MBR Zout, PCin, Wait for
PC PC + 1 memory fetch cycle T5 Mem MDR Wait for men access
Zin,
SUB, Zin,
T9 SP SP – 1 Zout, SPin, MARin
T8 SP SP – 1 Zout, SPin, MARin
T10 PC MDR PCout, MDRin, WRITE
T9 PC MDR PCout, MDRin, WRITE
T11 MDR [SP] Wait for mem access
T10 MDR [SP] Wait for mem access
T12 PC ISR addr PCin ISR addr out
5. at le
Write a microprogram for the instruction :
ADD R3, [ [R4] ]
7.
T11 PC ISR addr PCin ISR addr out
Clear y, Set Cin, Add, Zin Clear y, Set Cin, Add, Zin
T2 M MBR Zout, PCin, Wait for T2 M MBR Zout, PCin, Wait for
PC PC + 1 memory fetch cycle memory fetch cycle
PC PC + 1
T3 MBR IR MBRout, IRin
MBR IR
T3 MBRout, IRin
T4 mem MDR Wait for mem access
T4 IR addr MAR IRout, MARin, D, CLRC
T5 MDR ALU MDRout, Zin, ADD
T5 R3 X R3 out, Xin
T6 Z R3 Zout, R3 in
T5 mem MDR Wait for mem access
T7 Check for intr Assumption enabled intr
pending T6 MDR ALU MDRout Zin, ADD
X Temp Xout, Tin
io dg
T7 ZX Zout, Xin
at le CLRX,
SUB, Zin,
SETC, SPout,
T9 SP SP – 1 Zout, SPin, MARin
ADD X, [[400]]
computers.
Symbolic
T-state Microinstruction 2. In Operating system : Microprograms can be used to
operations
implement some of the primitives of operating system.
T1 PC MAR PCout, MARin, READ,
This simplifies operation system implementation and
CLRT, SETC, ADD, Z
also improves the performance of the operating system.
T2 mem MDR Wait for mem access
(b) Moore Type
for implementation on silicon wafer i.e. the IC Fig. 9.7.1 : State tables for a finite-state machine
(Integrated Circuit), since the components required are
2. Delay element method :
lesser.
This method is implemented using delay elements
The only disadvantage is that modifications to the
i.e. D-flipflops.
design are slightly difficult.
The use of hardwired control unit is majorly found in the
RISC designs.
A flipflop is made to give output logic ‘1’ after the
specific event or in a t-state in sequence and the
outputs of these flipflops are used to generate control
There are different methods to implement hardwired signals or the micro-instructions i.e. two operations that
control unit : require a delay of 1 t-state between them are separated
1. State table method. by a D flipflop between them. Fig. 9.7.2 shows this
state is being determined. Each state with a set of Fig. 9.7.2 : Use of D flip flop as a delay element between two
microinstructions to be issued to various components of sets of control signals
the processor as well as external control signals.
This state table is then implemented using flip-flops and
combinational circuit to generate different control
signals.
An example state table implementation is shown in
Fig. 9.7.1.
Inputs
State I I2 …….. Im
S1 S1, 1, O1, 2 S1, 2, O1, 2 …….. S1, m, O1, m
S2 S2, 1, O2, 1 S2, 2, O2, 2 …….. S2, m, O2, m
Fig. 9.7.3 : Use of OR gate in delay element method of
: ……..
Hardwired control unit
:
Sn Sn, 1, On, 1 Sn, 2, On, 2 Sn, m, On, m The signals that activate the same control signal are
……..
ORed together i.e. if a signal has to be activated from
Fig. 9.7.1(a) : Mealy
the outputs of multiple flipflops then an OR gate is used inputs to the AND array is from various control signals
as shown in Fig. 9.7.3. generated and the output of the OR array is given as
In case if a decision is to be made then it is control signals to various components of the processor
implemented using a If-Then-Else circuit i.e. two AND as well as the external control signals required.
gates coupled to a OR gate. This is shown in Fig. 9.7.4. Fig. 9.7.6 shows the implementation of the PLA method
of implementation of control unit.
Fig. 9.7.4 : Implementation of If-Then-Else in delay element
method of Hardwired control unit
In this method, multiple clock signals are derived from
the master clock using a standard counter-decoder
approach as shown in the Fig. 9.7.5. These signals are 9.8
Fig. 9.7.6 : PLA Technique
The Instruction Register (IR), status flags and condition codes are read by the sequencer, which generates the address of the control memory location for the corresponding instruction in the IR.
This address is stored in the Control Address Register, which selects one of the locations in the control memory having the corresponding control signals.
The control word read out of the control memory is placed in the micro-instruction register, decoded and then given to the individual components of the processor and the external devices.

9.8.1 Wilkes' Microprogrammed Control Unit :

The first working model of a micro-programmed control unit was proposed by Wilkes in 1952.
In this design, a microinstruction has two major components :
(1) Control field
(2) Address field

Fig. 9.8.2 : A typical microinstruction
Then the control information 0100110 indicates that on execution of above microinstruction, control signals C1, C4 and C5
will be activated. Address field contains the address of the next microinstruction.
Thus, after execution of the above instruction, the next instruction to be executed is one which is at the address 010.
The control field indicates which control signals are to be activated, and the address field provides the address of the next microinstruction to be executed.
The Control Memory Access Register (CMAR) can be loaded from an external source (instruction register) as well as from
the address field of a microinstruction.
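The control-field / address-field cycle described above can be illustrated with a small simulation. The C sketch below is illustrative only : the size of the control memory, the bit-to-signal mapping (bit 0 = C1, bit 1 = C2, …) and the stored values are assumptions, not taken from the text. It simply reads a control word, prints the active signals, and feeds the address field back into the CMAR.

#include <stdio.h>

/* One microinstruction : a control field (one bit per control signal)
   and an address field giving the next microinstruction.             */
struct microinstruction {
    unsigned char control;   /* bit 0 = C1, bit 1 = C2, ... (assumed layout) */
    unsigned char next;      /* 3-bit address of the next microinstruction  */
};

int main(void)
{
    /* A tiny made-up control memory; entry 0 is the entry point. */
    struct microinstruction cm[8] = {
        [0] = { 0x19, 2 },   /* C1, C4, C5 active, next address = 010 */
        [2] = { 0x06, 5 },   /* C2, C3 active,     next address = 101 */
        [5] = { 0x20, 0 },   /* C6 active,         back to 000        */
    };

    unsigned char cmar = 0;  /* Control Memory Address Register (CMAR) */

    for (int step = 0; step < 3; step++) {
        struct microinstruction mi = cm[cmar];

        printf("CMAR = %u, active signals:", cmar);
        for (int i = 0; i < 8; i++)
            if (mi.control & (1u << i))
                printf(" C%d", i + 1);
        printf(", next = %u\n", mi.next);

        cmar = mi.next;      /* the address field is fed back into the CMAR */
    }
    return 0;
}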
This address is once again fed to the CMAR, resulting in activation of another control line and address field.
This cycle is repeated till the execution of the instruction is achieved.
For example, as shown below, suppose the machine instruction under execution causes the decoder to have an entry address for that machine instruction in control memory at line 000. The decoder activates the lines in the sequence given below :

Line    Control signal    Address of next microinstruction

If the condition is true then the address 101 will be selected, else the address 110 will be selected.

9.8.2 Comparison between Hardwired and Micro-programmed Control :

Attribute                                 Hardwired Control        Micro-programmed Control
Speed                                     Fast                     Slow
Cost of implementation                    More                     Cheaper
Ability to handle complex instructions    Difficult                Easier
Design process                            Complicated              Systematic
Decoding and sequencing logic             Complex                  Easy
Applications                              Small, RISC processors   Large, CISC processors
Control memory                            Absent                   Present
Chip area required                        Less                     More

Q. 4   With neat block diagram explain general architecture of a microprocessor.
Q. 5   Explain detailed instruction cycle.
Q. 6   List and explain applications of microprogramming.
Q. 7   What are the different methods to implement hardwired control unit ? Explain state table method.
Q. 8   What are the different methods to implement softwired control unit ?
Q. 9   Differentiate between hardwired and micro-programmed control.
Chapter 10

Processor Instructions & Processor Enhancements

Syllabus
Multiprocessor systems : Taxonomy of Parallel Processor Architectures, Two types of MIMD clusters and
SMP (Organization and benefits) and multicore processor (Various alternatives and advantages of
multicores), Typical features of multicore intel core i7.
Case Study : 8086 Assembly language programming.
Chapter Contents
10.1   Instruction Encoding Format
10.2   Instruction Format and 0-1-2-3 Address Formats
10.7   Pipeline Processing
10.8   Instruction Pipelining and Pipelining Stages
How is the number of instruction bytes decided ? The length of the instruction in bytes depends upon the addressing mode used by the programmer, i.e. immediate, register, register relative, based indexed, relative based indexed and so on.
Basically the instruction bytes will contain information of :
(1) OPCODE
(2) Operand / address of the operand.
Depending upon the addressing mode, we have the following different cases :
(a) No additional bytes (Figs. 10.1.1(a), (b), (c) and (d)).
(b) A 2 byte EA (for direct addressing mode) (Fig. 10.1.1(e)).
(c) 1 or 2 byte immediate operand.
If the immediate operand is 2 bytes long, the low order byte always appears first; this is the Intel standard (the same was followed by the 8085 also).
To remember these formats, I will give you only a single format; from that we can derive the different formats, refer Fig. 10.1.2.
As shown in Fig. 10.1.2, the first six bits of a multibyte instruction generally contain an opcode that identifies the basic instruction type, i.e. ADD, XOR etc.
In the second byte of the instruction we have MOD, additional OPCODE bits and R/M; for some of the cases we have MOD, REG and R/M. First we will concentrate on the OPCODE bits in the 2nd byte of the instruction format. This field is 3 bits wide. Along with it we have three single-bit fields, S, V and Z.

S bit :
An 8 bit 2's complement number can be extended to a 16 bit 2's complement number by letting all of the bits of the high order byte equal the MSB of the low order byte. This is referred to as sign extension.
The S bit is used in conjunction with W to indicate sign extension of immediate fields in arithmetic instructions.
S = 0   No sign extension
  = 1   Sign extend the 8 bit immediate data to 16 bits if W = 1.
Therefore, for an 8 bit operation : S = W = 0
16 bit operation with a 16 bit immediate operand : S = 0, W = 1
16 bit operation with a sign extended 8 bit immediate operand : S = W = 1

V bit :
Used by shift and rotate instructions to select single or variable-bit shifts and rotates.
V = 0   shift/rotate count is one
  = 1   shift/rotate count is specified in the CL register.

Z bit :
This bit is used as a compare bit with the zero flag in conditional repeat (REP) and loop instructions.
Z = 0   Repeat/loop while zero flag is clear
  = 1   Repeat/loop while zero flag is set

Now concentrate on the MOD, R/M and REG fields in the 2nd byte of the instruction format. The second byte of the instruction usually identifies the instruction's operands.

MOD :
The mode (MOD) field indicates whether one of the operands is in memory or whether both operands are registers. Table 10.1.2 shows the MOD field encoding; this field is of size 2 bits.

Table 10.1.2 : MOD field encoding

CODE    EXPLANATION
00      Memory mode, no displacement follows *
01      Memory mode, 8 bit displacement follows
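The sign extension controlled by the S and W bits can be mimicked in a few lines of C. This is only an illustrative sketch (the function name and the way the bits are passed are our own assumptions); it shows how an 8-bit immediate is widened to 16 bits by copying its MSB into the high-order byte when S = 1 and W = 1.

#include <stdio.h>
#include <stdint.h>

/* Extend an 8-bit immediate field to 16 bits.
   s = 1 and w = 1 : sign-extend (copy the MSB of the low byte upwards).
   otherwise       : use the 8-bit value as it is (zero high byte).      */
static uint16_t extend_immediate(uint8_t imm8, int s, int w)
{
    if (s == 1 && w == 1)
        return (uint16_t)(int16_t)(int8_t)imm8;  /* sign extension */
    return (uint16_t)imm8;
}

int main(void)
{
    /* 0xF4 is -12 as an 8-bit two's complement number. */
    printf("S=0      : %04X\n", extend_immediate(0xF4, 0, 1)); /* 00F4 */
    printf("S=1, W=1 : %04X\n", extend_immediate(0xF4, 1, 1)); /* FFF4 */
    return 0;
}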
The Register (REG) field identifies a register that is one of the instruction operands. The REG field depends upon the W bit. Table 10.1.3 shows the selection of register(s) depending upon the W bit.

Table 10.1.3 : REG (register) field encoding

REG     W = 0     W = 1
000     AL        AX
001     CL        CX
010     DL        DX
011     BL        BX
100     AH        SP
101     CH        BP
110     DH        SI
111     BH        DI

You will find that Table 10.1.3 matches with Table 10.1.4. Secondly, when W = 0 you can select ONLY an 8 bit source and destination operand; when W = 1 you can select ONLY a 16 bit source and destination operand. Thus any such combination with other register(s) is INVALID.

Case II : Memory MODE (8 bit/16 bit or no displacement)

Table 10.1.4 : R/M field encoding when MOD = 11 (binary)
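As a quick illustration of how the second instruction byte is split into MOD, REG and R/M, the short C sketch below extracts the three fields from a byte. The field positions (MOD in bits 7-6, REG in bits 5-3, R/M in bits 2-0) follow the standard 8086 encoding discussed above; the helper name is our own.

#include <stdio.h>
#include <stdint.h>

/* Split the second instruction byte into its MOD, REG and R/M fields. */
static void decode_modregrm(uint8_t b, unsigned *mod, unsigned *reg, unsigned *rm)
{
    *mod = (b >> 6) & 0x3;   /* bits 7-6 : MOD */
    *reg = (b >> 3) & 0x7;   /* bits 5-3 : REG */
    *rm  =  b       & 0x7;   /* bits 2-0 : R/M */
}

int main(void)
{
    unsigned mod, reg, rm;

    decode_modregrm(0xC3, &mod, &reg, &rm);   /* 0xC3 = 11 000 011 */
    printf("MOD=%u REG=%u R/M=%u\n", mod, reg, rm);
    /* MOD = 11 means register mode; with W = 1, REG 000 = AX and R/M 011 = BX. */
    return 0;
}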
The REG field in this case, as usual, identifies the register that is one of the instruction operands.

10.2 Instruction Format and 0-1-2-3 Address Formats :

Fig. 10.2.1 shows the basic components of the computer and their interconnection. The internal components of the CPU are also shown in Fig. 10.2.1.
The computer consists of three basic components, namely the CPU, memory and I/O devices, connected with each other via the buses.
Input devices are required to give the instructions and data to the system. The output devices are used to give the results of the processing back to the user.
The instructions and the data given by the input device are processed by the CPU.
The CPU has the control unit that provides control signals to all the resources inside and outside the CPU.
The CPU also has the ALU that performs the arithmetic and logical operations.
The PC (Program Counter) always has to be incremented to point to the next instruction, which is done automatically; hence the word “Counter” in the name.
The MAR (Memory Address Register) is used to store the address to be provided to the memory.
The MBR (Memory Buffer Register) is used to store the data to be given to or taken from the memory. Similarly, for I/O devices we have the IOAR and IOBR.
Another register, named the IR (Instruction Register), is used to store the instruction to be executed by the CPU.
Elements of an Instruction :

Instructions may be Three address, One address or Zero address instructions.

Zero address instructions :

PUSH A    ; ToS <- A
PUSH B    ; ToS <- B
ADD       ; ToS <- A + B
PUSH C    ; ToS <- C
PUSH D    ; ToS <- D
…

One address instructions :

…
STORE P   ; M[P] <- AC
LOAD A    ; AC <- M[A]
ADD B     ; AC <- AC + M[B]
STORE Q   ; M[Q] <- AC
LOAD P    ; AC <- M[P]
DIV Q     ; AC <- AC / M[Q]
STORE X   ;
i.e. AC <- ((A*B) + (C*D – E)) / (A + B)
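The effect of zero-address (stack) instructions can be checked with a tiny stack-machine sketch in C. It is illustrative only; the expression evaluated and the values of A, B, C and D are made up, and PUSH/ADD/MUL are modelled as simple functions operating on an explicit stack, with the top of stack (ToS) playing the role described above.

#include <stdio.h>

static int stack[16];
static int top = -1;                 /* index of the Top of Stack (ToS) */

static void push(int v) { stack[++top] = v; }
static int  pop(void)   { return stack[top--]; }
static void add(void)   { int b = pop(), a = pop(); push(a + b); }
static void mul(void)   { int b = pop(), a = pop(); push(a * b); }

int main(void)
{
    int A = 2, B = 3, C = 4, D = 5;  /* made-up operand values */

    /* Zero-address evaluation of (A * B) + (C * D) */
    push(A); push(B); mul();
    push(C); push(D); mul();
    add();

    printf("ToS = %d\n", stack[top]);  /* 2*3 + 4*5 = 26 */
    return 0;
}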
1. Immediate addressing mode :
The disadvantage is that it has a limited range.
Fig. 10.3.1 shows the structure of an instruction and the operand access technique for the immediate addressing mode.

Fig. 10.3.1 : Immediate addressing mode

2. Direct addressing mode :
In this case the address field of the instruction contains the effective address of the operand.
For example, ADD AX,[0005H]. This instruction adds the contents of memory location 0005H to the accumulator. The operand is taken from the memory location specified in the instruction.
In this case there is only a single memory reference to access data.
The advantage is that no calculations are required to work out the effective address.
The disadvantage is that this addressing mode can address only a limited address space.

3. Indirect addressing mode :
In this case a memory location given in the instruction holds the address of the operand, i.e. one memory location points to another memory location that contains the operand.
Fig. 10.3.3 shows the structure of an instruction and the operand access technique for the indirect addressing mode.

Fig. 10.3.3 : Indirect addressing mode

4. Register addressing mode :
In this case the operand is held in the register named in the operand address field.
There are a limited number of registers; hence a very small address field is required. This gives shorter instructions and faster instruction fetch.
Thus processors that have multiple registers help in improving the performance of the processor.

Fig. 10.3.4 : Register addressing mode

5. Register indirect addressing mode :
In this case the operand memory address is pointed to by the contents of register R.
It requires one less memory access than the indirect addressing mode seen in point number 3 above.
Fig. 10.3.5 shows the structure of the instruction and the way to access the operand for the register indirect addressing mode.

6. Displacement addressing mode :

Fig. 10.3.6 : Displacement addressing mode

7. Relative addressing mode :
(b) Immediate : In this case the instruction has the address field as 400, hence the operand itself is 400. This operand is stored in the location immediately following the instruction. Since the instruction is stored at location 300, the operand is at location 301.
(c) Relative : In this case the instruction has the address field as 400, which will be added with register R1's value, i.e. 200, and hence the effective address will be 600.
(d) Register indirect : In this case the register provides the address of the operand. Since the register R1 has the value 200, the effective address in this case will also be 200.

(d) Indexed

Soln. :
(a) Direct : In this case the instruction has the address field as Y, hence the effective address is Y. Thus Z = Y.
(b) Indirect : In this case the instruction has the address field as Y, hence the operand is at the address which is stored at location Y, i.e. the address field at W + 1 holds Y, and location Y has the address of the operand. Thus Z = [Y].
(c) Relative : In this case the address field Y will be added with the value of the program counter. Thus Z = PC + Y.
(d) Indexed : In this case the address field, i.e. Y, will have an address that will be added with the value of the register R1 which holds X. Hence the effective address in this case will be the address at location Y + the value of register R1, i.e. X. Thus Z = [Y] + X.

10.4 Instruction Set of 8086 :

The instruction set of the 8086/8088 is divided into a number of groups of functionally related instructions. The different groups are :
1. Data transfer group.
2. Arithmetic group.

Fig. 10.4.1

Now we will start with the instruction set. The information presented is from the point of view of utility to the assembly language programmer. The information given is :
1. Mnemonic (syntax of the instruction)
2. Algorithm
3. Operation of the instruction
4. Examples.
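The effective-address rules used in the numeric example above can be summarised in a small C sketch. The values (instruction at 300, address field 400, register R1 = 200) follow that example; the tiny memory array, the value stored at location 400 and the variable names are assumptions made only for illustration.

#include <stdio.h>

#define MEM_SIZE 1024
static int mem[MEM_SIZE];        /* a tiny model of main memory */

int main(void)
{
    int addr_field = 400;        /* address field of the instruction  */
    int instr_loc  = 300;        /* location where the instruction is */
    int R1         = 200;        /* index/base register value         */

    mem[400] = 700;              /* assumed content of location 400   */

    int ea_direct    = addr_field;        /* direct    : EA = 400               */
    int ea_immediate = instr_loc + 1;     /* immediate : operand kept at 301    */
    int ea_relative  = addr_field + R1;   /* 400 + 200 = 600                    */
    int ea_reg_ind   = R1;                /* register indirect : EA = 200       */
    int ea_indirect  = mem[addr_field];   /* indirect  : EA = [400] = 700       */

    printf("direct=%d immediate(loc)=%d relative=%d reg-indirect=%d indirect=%d\n",
           ea_direct, ea_immediate, ea_relative, ea_reg_ind, ea_indirect);
    return 0;
}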
While giving you the above information, some typical symbols/labels are used. I feel that you should know the significance and meaning of those labels.
Refer Table 10.4.1, which provides, for the instruction coding format, the different IDENTIFIERS, where each is used and an explanation of the same.

Table 10.4.1 : Key to instruction coding formats

Destination (data transfer, bit manipulation) : A register or memory location that may contain data operated on by the instruction, and which receives (is replaced by) the result of the operation.

Source (data transfer, arithmetic, bit manipulation) : A register, memory location or immediate value that is used in the operation, but is not altered by the instruction.

Source-string (string operations) : Name of a string in memory that is addressed by register SI; used only to identify the string as byte or word and to specify a segment override, if any. This string is used in the operation, but is not altered.

Dest-string (string operations) : Name of a string in memory that is addressed by register DI; used only to identify the string as byte or word. This string receives (is replaced by) the result of the operation.

Count (shifts, rotates) : Specifies the number of bits to shift or rotate; written as the immediate value 1 or register CL (which contains the count in the range 0-255).

Source-table (XLAT) : Name of the memory translation table addressed by register BX.

Target (JMP, CALL) : A label to which control is to be transferred.

Port (IN, OUT) : An I/O port number; specified as an immediate value of 0-255, or register DX (which contains the port number in the range 0-64K).

Table 10.4.2 : Key to operand types

Identifier        Explanation
(no operands)     No operands are written
Register          An 8- or 16-bit general register
Mem8 (1)          An 8-bit memory location
Mem16 (1)         A 16-bit memory location
source-table      Name of a 256-byte translate table
source-string     Name of string addressed by register SI
dest-string       Name of string addressed by register DI
DX                Register DX
short-label       A label within -128 to +127 bytes of the end of the instruction
Near-label        A label in the current code segment
Memptr16 (1)      A word containing the offset of the location in the current code segment to which control is to be transferred
Memptr32          A double word containing the offset and the segment base address of the location in another code segment to which control is to be transferred
regptr16 (1)      A 16-bit general register containing the offset of the location in the current code segment to which control is to be transferred
Repeat            A string instruction repeat prefix.

Comparison between RISC and CISC (contd.) :

    Attribute                        RISC                      CISC
4.  Instruction size                 Fixed                     Variable
5.  Control unit                     Hardwired                 Micro-programmed
6.  Number of bus cycles             Single CPU cycle (for     Multiple CPU cycles
                                     at least 80% of the
                                     instructions)
    Decoding subsystem               Simple                    Complex
    Probability of design errors     Less probable             Significant probability
10. Complexity of compiler           Simpler                   More complex; the results of
                                                               “optimization” may not be the most
                                                               efficient and fastest machine
                                                               language code
11. HLL instructions                 Supported                 Not supported
(1) Any addressing mode - direct, register indirect, based relative, based indexed or relative based indexed - may be used.

10.5 Reduced Instruction Set Computer Principles :

Sun SPARC is a RISC processor, while all the processors studied till now in this book were CISC processors. Hence, before studying the details of the Sun SPARC processor, we will see how RISC processors are different from CISC processors and also the special features of a RISC processor.

10.5.2 RISC Properties :

A RISC system must satisfy the following properties :
1. Single-cycle execution of all (or at least 80 percent) instructions.
2. Single-word standard length of all instructions.
3. Small number of instructions (<= 128).
4. Small number of instruction formats (<= 4).
5. Small number of addressing modes (<= 4).
6. Memory access possible by load and store instructions only.
7. All operations, except load and store, are register to register i.e. within the CPU.
8. It must have a hardwired control unit.
9. It must also have a relatively large (at least 32) general-purpose CPU register file.

10.5.3 Register Windows :

2. The register file is subdivided into groups of registers, called register windows.
3. A certain group of ‘i’ registers, suppose R0 to R(i-1), are designated as global registers. The global registers are accessible to all procedures running on the system at all times.
4. The registers of the window allotted to a procedure are not accessible to other procedures.
5. The window base (first register within the window) is pointed to by a field called the current window pointer (CWP) located in the CPU’s status register (SR).
7. Parameters are passed between procedures by partial overlapping of the windows. The last N registers of window J will be the first N registers of window J+1.
8. If the procedure taking up window J calls a procedure, which will be assigned the next window J+1, it can pass parameters through the overlapped registers during the procedure call. Parameter passing is thus possible by the register window. This policy also allows reasonable HLL support in RISC designs.

10.5.4 Miscellaneous Features or Advantages of RISC Systems :

1. HLL support :
(a) The support for High Level Language (HLL) features is mandatory in the design of any computing system.
(b) A RISC design supports efficiently the handling of local variables, constants, and procedure calls, while leaving less frequent HLL operations to instruction sequences and subroutines.
(d) One of the mechanisms supporting the handling of procedures, and their parameter passing in particular, is the register window mechanism described above.
(a) The problem occurs in a system where instructions are Another feature, implemented in not only a CISC but
prefetched, right after a branch. also RISC systems, is separated data and code caches, or
(b) If the branch is conditional, and the condition is not split cache.
satisfied, then the next instruction, which was 7. Instruction Level Parallelism (ILP) :
prefetched, is executed, and since no branch is to be
Superscalar and superpipelined designs are also mostly
performed, no time is lost.
implemented in a RISC design.
(c) But, if the branch condition is satisfied, or the branch is
8. VLSI realization :
unconditional, the next prefetched instruction is to be
flushed and other instruction pointed to by the branch (a) The chip area, dedicated to the realization of the control
address is to be fetched in its place. The time required unit, is considerably less. Therefore, on a RISC VLSI chip,
to prefetch the flushed instruction is wasted. there is more area available for other features (cache,
(d) Such waste of time is solved by using the delayed FPU, part of the main memory, memory management
branch approach. unit, I/O ports, etc).
(e)
(f)
In this approach, the instructions are reshuffled such
that the operation does not change the result.
A successful branch is assumed and the execution of the
(b) As a result of the considerable reduction of the control
area, a large number of CPU registers can fit on-chip.
(c) By reducing the control area on the VLSI chip and filling
branch is delayed until the already prefetched
the area by numerous identical registers, the
instructions are executed. Hence, no time is lost and
regularization factor (which is defined as, ratio of chip
5. Scoreboarding :
(a) Another problem in instruction pipelines is that of data 9. The computing speed :
dependency. (a) A simpler and smaller control unit in RISC requires fewer
(b) The data in some register put by instruction 1 may be gates. This results in shorter propagation paths for
required by instruction 2; and before the value in the control unit signals, decreasing the delay time for
register is available, the instruction 2 may be ready for control signals and hence yielding a faster operation.
(g) This instruction will now be executed as soon as the 10. Design cost and reliability considerations :
execution of the instruction, which caused bit ‘i’ to be (a) It takes a shorter time to complete the design of a RISC
set, is completed. control unit, because of the smaller instruction set, fixed
instruction length and fewer instruction formats, thus contributing to the reduction in the overall design cost.
(b) A simpler and smaller control unit will obviously have a reduced number of design errors and, therefore, a higher reliability.

10.5.5 RISC Shortcomings :

1. Since a RISC has a small number of instructions, a number of functions, performed on CISCs by a single instruction, will need more instructions on a RISC. Hence, RISC code will be longer.
2. More memory will have to be allocated for RISC programs, and the instruction accesses between the memory and the CPU will be increased.

10.5.6 On-Chip Register File Versus Cache Evaluation :

Modern CISCs have an on-chip cache to compensate for the registers of RISC processors. But let us evaluate the two. A cache is addressed using memory addresses, while the register file is addressed using short register addresses. A cache has to be tens of Kbytes in size to be effective, whereas about 128 registers (of 4 bytes each, i.e. only 512 bytes) are sufficient to obtain a similar effect.

10.6 Polling and Interrupts :

Whenever more than one I/O device is connected to a microprocessor based system, any one of the I/O devices may ask for service at any time.
There are two methods in which the microprocessor can service these I/O devices. One method is to use a polling routine, while the other method employs interrupts.
In the polling routine the microprocessor checks whether any of the I/O devices is requesting service. The polling routine is a simple program that keeps checking for the occurrence of service requests.
For e.g. : Let us assume that our polling routine is servicing I/O ports 1, 2, 3, …, 8. The polling routine will check the status of the I/O ports in a proper sequence.
The polling routine will first test the status of I/O port 1; if port 1 does not need service, the routine will test port 2. The process is repeated till all the ports are tested and all the I/O ports that are demanding service are processed.
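A polling routine of the kind described above can be sketched in C as below. The functions device_needs_service() and service_device() are placeholders standing in for reading a status port and running the device's service routine; the eight-port count follows the example in the text, while everything else is assumed for illustration.

#include <stdio.h>

#define NUM_PORTS 8

/* Placeholder: in a real system this would read the device's status port. */
static int device_needs_service(int port)
{
    return (port == 3 || port == 6);   /* pretend ports 3 and 6 need service */
}

/* Placeholder for the actual service routine of the device. */
static void service_device(int port)
{
    printf("Servicing I/O port %d\n", port);
}

int main(void)
{
    /* One pass of the polling routine: check ports 1..8 in a fixed order. */
    for (int port = 1; port <= NUM_PORTS; port++) {
        if (device_needs_service(port))
            service_device(port);
    }
    return 0;
}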
In a non-pipelined system, the processor fetches an instruction from memory, decodes it to determine what the instruction is, reads the instruction's inputs from the register file, performs the computation required by the instruction and writes the result back into the register file before executing the next instruction.
(b) Timing diagram of execution of instructions in a non-pipelined system          (c) Timing diagram of execution of instructions in pipelined systems

Fig. 10.7.1
In case of a system without pipelining, the time required for executing a set of instructions is much more than that in a pipelined system.
You will notice that the time required for executing five instructions on a non-pipelined system is 10 clock pulses, while that on a two stage pipelined processor is only 6 clock pulses.

Fig. 10.7.2 : Six stage pipeline flowchart

This can be shown on a time scale as in Fig. 10.7.3.
Instruction :
Fig. 10.7.5 : Branch in a six-stage pipeline
1. has its result being written back, instruction
Ex. 10.7.1 : Draw the space-time diagram for a six-
2. is being executed, instruction
segment pipeline showing the time it takes to
3. has its operands being fetched, instruction process eight tasks.
4.
has the address of operands being calculated,
instruction
Soln. :
Clock cycles
Segment :
1 2 3 4 5 6
1 T1 T2 T3 T4 T5 T6 T7 T8
7 8 9
–
10 11 12 13
– – – –
5. is being decoded and instruction 2 – T1 T2 T3 T4 T5 T6 T7 T8 – – – –
6. is being fetched. 3 – – T1 T2 T3 T4 T5 T6 T7 T8 – – –
4 – – – T1 T2 T3 T4 T5 T6 T7 T8 – –
This shows how pipelining is a overlap parallelism of
5 – – – – T1 T2 T3 T4 T5 T6 T7 T8 –
instructions.
6 – – – – – T1 T2 T3 T4 T5 T6 T7 T8
This six stage pipeline system can be implemented with
It takes 13 clock cycles to process 8 tasks.
Fig. 10.7.4 : Six stage pipelined architecture connected linearly to perform different operations.
These may perform different operations to execute an
In pipelining, when a branch instruction is executed the
Hence the sequential instructions in the pipeline are to 10.7.3.1 Asynchronous and Synchronous
be cleared and instructions from target are to be Linear Pipelining :
fetched. Clearing the sequential instructions from the In case of an asynchronous linear pipeline system, there
pipeline is called as flushing of the pipeline. is a set of handshaking signals between the two stages.
Whenever a stage (say stage i) completes its operation,
These problems are discussed in detail in section 10.8.
it places the result on the input lines of next stage
Also the solutions to the same are discussed in that
(i.e. stage i + 1) and enables the ready (or strobe) signal.
section. This is as shown in the timing diagram in
The next stage (i.e. stage i + 1) on completing its
Fig. 10.7.5. operation, accepts the data from its input lines and
indicates this to the previous stage (i.e. stage i) by giving The reservation table follows a diagonal line for
an acknowledgement signal. On this, the stage which synchronous linear pipeline as shown in Fig. 10.7.8.
had placed the data (i.e. stage i), also checks its input if it
Reservation table is a space-time diagram showing
has previous stage (i.e. stage i – 1) completed its
streamline pattern. Hence as seen in Fig. 10.7.8, for an n-
operation and is ready with the result.
stage pipeline, n clock pulses are required to execute the
It also repeats the same process as explained with stage i
instruction.
and i + 1. This can be explained as shown in the
Once the pipeline is filled up completely, the processor
Fig. 10.7.6.
completes one instruction execution every clock pulse.
10.7.3.2 Clocking and Timing Control :
Fig. 10.7.6 : Asynchronous linear pipelining system

Hence the asynchronous linear pipelined system will have a variable throughput rate and will experience a different amount of delay at each stage.
In case of a synchronous linear pipelined system the stages are separated by latches. Whenever a stage completes its operation, its result is held in the latch following it.
The clock signal is synchronously given to all the latches, such that on reception of the clock signal each stage takes the output of the latch connected to its input. This system is shown in Fig. 10.7.7.
The latches are in fact master-slave flipflops. The time required by each stage is expected to be equal, and it is this time that determines the clock period as well as the speed of the pipelined system.
The clock cycle, as shown in Fig. 10.7.7, can be calculated as discussed below. Let τi be the delay time of stage Si and d be the latch delay. Then the clock cycle time can be given as :
τ = max { τi , 1 ≤ i ≤ k } + d = τm + d
The data from each stage is latched into the master flipflop of the latch register during the rising edge and given to the slave flipflop during the falling edge. In fact τm >> d; hence we can say that τ ≈ τm.
The pipeline frequency is f = 1/τ. This frequency f is also termed the throughput of the system, as it gives the rate at which instructions can be completed; more than one clock pulse may, however, be required for the initiation of successive instructions.
The initiation of successive instructions may take more clock pulses because of their data or control dependencies.
The usage of the stages in a synchronous pipeline can be represented by the reservation table.
The clock pulse at each stage is expected to arrive simultaneously. But, because of the time delay of the clock distribution path, different stages get the pulse at different time offsets s; this problem is referred to as clock skewing.
Assume the shortest logic path gets the clock at a delay of tmin and the longest logic path gets the clock pulse at a delay of tmax. To avoid this problem we require τm ≥ tmax + s and d ≤ tmin – s. Thus the clock period with skew satisfies :
d + tmax + s ≤ τ ≤ τm + tmin – s
Hence in the ideal case, s = 0, tmax = τm and tmin = d; hence, even with the clock skewing, τ = τm + d.

Fig. 10.7.8 : Reservation table of a synchronous linear pipeline
10.7.3.3 Speedup, Efficiency and Throughput :

A linear pipeline of k stages will take k + (n – 1) clock cycles to execute n instructions; the first instruction will take k clock cycles and the remaining n – 1 instructions will take one clock cycle each (assuming there are no dependencies between the instructions). Hence, with the clock cycle width being τ, the total time required to execute these n instructions will be
Tk = [ k + (n – 1) ] τ      …(10.7.1)
For an equivalent non-pipelined system, the time required to execute n instructions will be
T1 = n k τ      …(10.7.2)
Thus the speedup factor of a k-stage pipelined system can be given as :
Sk = T1 / Tk = n k τ / [ k + (n – 1) ] τ = n k / [ k + (n – 1) ]      …(10.7.3)
Hence, as the number of instructions n increases, Sk approaches k.
As the number of stages is increased beyond a certain number of stages, termed k0 (the optimal number of stages), the PCR starts reducing, as shown in Fig. 10.7.10.

Fig. 10.7.10 : Graph of PCR vs. k

To execute a program on a sequential non-pipelined system, say the time required is ‘t’. To execute this program on a k-stage pipeline with equal flow-through delay, the time required per stage = t/k + d, where d is the latch delay.
Hence, f = 1 / (t/k + d)
The performance-cost ratio (PCR) is then expressed in terms of this frequency and the hardware cost, where h is the cost of each latch, c is the cost of all the logic gates of a stage, d is the latch delay and f is the pipeline clock frequency.
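The relations Tk = [k + (n – 1)]τ, T1 = n k τ and Sk = n k / [k + (n – 1)] can be checked numerically with the short C program below. The numbers used (n = 15000 instructions, k = 5 stages, f = 25 MHz) are those of Ex. 10.7.2 that follows; only the variable names are our own.

#include <stdio.h>

int main(void)
{
    double n = 15000.0;            /* number of instructions      */
    double k = 5.0;                /* number of pipeline stages   */
    double f = 25e6;               /* clock frequency (Hz)        */
    double tau = 1.0 / f;          /* clock cycle width           */

    double Tk  = (k + (n - 1.0)) * tau;     /* pipelined execution time     */
    double T1  = n * k * tau;               /* non-pipelined execution time */
    double Sk  = T1 / Tk;                   /* speedup factor               */
    double eff = Sk / k;                    /* efficiency                   */
    double thr = n / Tk;                    /* throughput (instructions/s)  */

    printf("Speedup    Sk = %.4f\n", Sk);             /* about 4.9987  */
    printf("Efficiency    = %.4f\n", eff);            /* about 0.9997  */
    printf("Throughput    = %.4f MIPS\n", thr / 1e6); /* about 24.9933 */
    return 0;
}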
Ex. 10.7.2 : Consider the execution of a program of 15,000 10.7.4 Non Linear Pipeline Processors :
instructions by a linear pipeline processor with
a clock rate of 25 MHz. Assume that the A dynamic or multi-function pipeline is called as
instruction pipeline has five stages and that non-linear pipeline. In a linear pipeline the operations
one instruction is issued per clock cycle. The that are being performed are fixed; each stage as a fixed
penalties due to branch instructions and out-of- operation.
sequence executions are ignored.
But in a non-linear pipeline allows feed forward and
(a) Calculate the speedup factor in using this feedback connections in addition to the streamline
pipeline to execute the program as
connection. It may also have more than one output
compared with the use of an equivalent
non-pipelined processor with an equal i.e. the output need not be from the last stage. An
amount of flow-through delay. example of three stage non-linear pipeline system is
shown in Fig. 10.7.11.
(b) What are the efficiency and throughput of this pipelined processor ?
Soln. :
Given : n = 15000, f = 25 MHz, k = 5
(a) Speedup factor Sk = n k / [ k + (n – 1) ] = (15000 × 5) / (5 + 14999) = 4.9987
(b) Efficiency = Sk / k = 4.9987 / 5 = 0.9997
    Throughput = n f / [ k + (n – 1) ] = (15000 × 25 MHz) / 15004 = 24.9933 MIPS

Fig. 10.7.11 : A 3-stage non linear pipeline
There are two examples of different operations say,
Ex. 10.7.3 : A non-pipeline system takes 50 ns to process
X and Y, for which the reservation table is shown in
a task. The same task can be processed in a
Fig. 10.7.11.
six stage pipeline with a clock cycle of 10 ns.
Determine the speedup and the efficiency of Fig. 10.7.12 shows the requirement of different stages at
the pipeline for 100 tasks. What is the different times for doing the corresponding operation.
maximum speedup and efficiency that can be For e.g. the function X, has first to be given to S1, then
achieved ? S2, then S3, then S2, then S3, then S1, then S3 and finally
Soln. : to S1 which will give the output.
Similarly the function Y is first given to S1, then S3, then number of time units (clock cycle) required between the
S2, then S3, then S1 and finally to S3 which will give the two initiations of a function is called as the latency
output. period between them.
The number of columns in a reservation table Some valid latencies or latency sequences that do not
corresponds to the evaluation time of that function. cause any collision are shown in Fig. 10.7.13. The
Hence from Fig. 10.7.12, function X has an evaluation latencies that do not cause collision are called as latency
time of 8 clock cycles while function Y has an evaluation sequence while latencies that cause collision are called
time of 6 clock cycles. as forbidden latencies. Examples of forbidden latencies
A pipeline initiation table consists of different time when are shown in Fig. 10.7.14.
the next time the same function can be initiated. The
Cycle repeats
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
S1 X1 X2 X1 X2 X1 X2 X3 X4 X3 X4 X3 X4 X5 X6
S2 X1 X2 X1 X2 X3 X4 X3 X4 X5 …
S3 X1 X2 X1 X2 X1 X2 X3 X4 X3 X4
(a) Latency cycle (1, 8) = 1, 8, 1, 8, 1, 8, …, (With an average latency of 4.5)
Cycle repeats
X3 X4
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
S1 X1 X2 X1 X3 X1 X2 X4 X2 X3 X5 X3 X4 X6 X4 X5 X7 X5
S2 X1 X1 X2 X2 X3 X3 X4 X4 X5 X5 X6 X6 X7 …
S3 X1 X1 X2 X1 X2 X3 X2 X3 X4 X3 X4 X5 X4 X5 X6 X5 X6
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
S1 X1 X1 X2 X1 X2 X3 X2 X3 X4 X3
S2 X1 X1 X2 X2 X3 X3 X4 …
S3 X1 X1 X1 X2 X2 X2 X3 X3 X3
(c) Latency cycle (6) = 6, 6, 6, 6, …, (With an average latency of 6)
Fig. 10.7.13 : Example latencies of function X that do not cause collision
1 2 3 4 5 6 7 8 9 10 11
S1 X1 X2 X3 X1 X4 X1, X2 X2, X3
Stages S2 X1 X1, X2 X2, X3 X3, X4 X4 …
S3 X1 X1, X2 X1, X2, X3 X2, X3, X4
1 2 3 4 5 6 7 8 9 10 11
S1 X1 X1 X2 X1
Stages S2 X1 X1 X2 X2 …
S3 X1 X1 X1 X2 X2
As shown in Fig. 10.7.13, forbidden latencies are 2 and 5. bit Ci is ‘1’ if the latency i causes a collision, else it is ‘0’.
Besides, 4 and 7 are also forbidden latencies. To detect For e.g., the reservation tables seen in Fig. 10.7.12, will
have the collision vectors as Cx = (1011010) and
the forbidden latency, you need to check the distance
Cy = (1010). Thus for Cx, there is a permissible latency 1,
between the two marks on the reservation table in a
3 and 6 while forbidden latencies of 7, 5, 4 and 2.
row.
2. State Diagrams :
For e.g. in case of function X, as shown in
Fig. 10.7.12(a), the distance between two marks in S1 is 5 From the collision vector, we can make the state
diagram for the pipeline.
or 2. Average latency of a latency cycle is defined as the
ratio of sum of all latencies to the number of latencies The collision vector Cx, achieved above is called as the
along the cycle. For e.g. (1,8) latency as shown in initial collision vector. When loaded in a register and
Fig. 10.7.13(a), has an average latency of (1+8)/2 = 4.5. shifted right, each bit at the output corresponds to an
increase in latency.
The average latency of a constant cycle, i.e. latency cycle
A ‘1’ at the output indicates collision, while a ‘0’ indicates
that contains only one latency value, is same as the
constant value. For e.g. the average latency of the cycle
3 and 6, as shown in Figs. 10.7.13(b) and (c) are
no collision. A ‘0’ is inserted from the left for every clock
cycle. This can be implemented as said by a right shift
register and OR gates, as shown in
3 and 6 respectively.
Fig. 10.7.15.
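The forbidden latencies and the collision vector can also be computed mechanically from a reservation table : for every row, every distance between two marks is a forbidden latency, and bit Ci of the collision vector is 1 when latency i causes a collision. The C sketch below does this for an assumed 3-stage, 6-column reservation table; the table used here is an illustrative one, not necessarily the X or Y table of Fig. 10.7.12.

#include <stdio.h>

#define STAGES  3
#define COLUMNS 6

int main(void)
{
    /* Illustrative reservation table: table[s][t] = 1 if stage s is
       used in time step t (made-up pattern, for demonstration only). */
    int table[STAGES][COLUMNS] = {
        {1, 0, 0, 0, 0, 1},     /* S1 used at t = 1 and t = 6 */
        {0, 1, 0, 1, 0, 0},     /* S2 used at t = 2 and t = 4 */
        {0, 0, 1, 0, 1, 0},     /* S3 used at t = 3 and t = 5 */
    };

    int forbidden[COLUMNS] = {0};   /* forbidden[i] = 1 if latency i collides */

    /* A latency d is forbidden if some stage is used at two time steps
       that are exactly d apart.                                          */
    for (int s = 0; s < STAGES; s++)
        for (int t1 = 0; t1 < COLUMNS; t1++)
            for (int t2 = t1 + 1; t2 < COLUMNS; t2++)
                if (table[s][t1] && table[s][t2])
                    forbidden[t2 - t1] = 1;

    printf("Forbidden latencies :");
    for (int d = 1; d < COLUMNS; d++)
        if (forbidden[d])
            printf(" %d", d);

    /* Collision vector C = (Cm ... C1), m = largest possible latency. */
    printf("\nCollision vector    : (");
    for (int d = COLUMNS - 1; d >= 1; d--)
        printf("%d", forbidden[d]);
    printf(")\n");
    return 0;
}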
10.7.4.1 Collision Free Scheduling or Job
Sequencing :
would be collision.
As seen in the previous section, we can separate the The state diagrams for the collision vectors Cx and Cy
permissible latencies and the forbidden latencies using are as shown in Fig. 10.7.16.
(b) State diagram for collision vector Cy
Fig. 10.7.16 : State diagrams for collision (a) Collision Vector = (100)
x1 x2 x3 x1 x4 x2 x5 x3 x4
A transition example can be explained as below. x5
For e.g. the three bit shifts with the initial vector of x1 x2 x3 x4 x5 … …
function X, will result in 0001011; this when ORed with
x1 x2 x3 x4 x5 …
the initial collision vector results in 1011011.
The state diagram (Refer Fig. 10.7.16(a)) shows this
transition for the function X. If the number of shifts is
greater than m, the next state is same as the initial
x1
Collision with latency 1
x1 x2 x2 x3 x3
collision vector. For e.g. if the number of shifts is 8 or x1 x2 x3 …
more in Fig. 10.7.16(a), it comes back to the initial x1 x2 x3 …
+
A cycle that travels for more than one time through the
same state is a greedy cycle. some greedy cycles in the
Fig. 10.7.16(a) are (1,8,3,8), (1,8,6,8), (3,6,3,8,6) etc.
4. Minimal Average Latency (MAL) :
1 2 3 4
S1 X X
S2 X
S3 X
Fig. P. 10.7.4(a)
(c) As seen in the state transition diagram, For a 5-stage pipelined (Y) processor, the time required
to execute n-instruction (Tk) = [k + (n – 1)] .
Simple cycles : (2), (4), (1,4), (1,1,4) (2,4) etc.
where k = 5 stages
Greedy cycles : (1,4,2,4) , (1,1,4,2,4) etc.
n = 100 instructions
(d) Optimal constant latency (2) 1 1
= = = 0.05 sec
f 20MHz
Minimum average latency (MAL) = 2
Tk = [5 + (100 – 1)] 0.05 sec
(e) Throughput
Cycle repeats = 5.2 sec
T1 16 sec
Speedup = = = 3.07
Tk 5.2 sec
S1 x1 x2 x1 x3 x2 x4 x3 x5 …
(b) Non-pipelined processor (X) :
S2 x1 x2 x3 x4 …
S3 x1 x2 x3 x4 … Time taken Instructions
16 sec 100
As seen in the above table, one instruction is executed
0 1 2 3 4 5 6 7 8
processors, what is the speedup of
processor Y compared with that of 1
processor X? 2
(b) Calculate the MIPS rate of each
3
processor.
4
Soln. :
(a) Program has 100 instruction : 5
(Since the total number of tasks are 9, there are 8 bits in Soln. :
collision vector i.e. C8 to C1) 1. Forbidden latencies = (2, 4, 6)
2. State transition diagram : Collision vector C = (101010)
(Since the total number of tasks are 7, there are 6 bits in
the collision vector i.e. C6 to C1)
3.
Latency cycles :
Fig. P. 10.7.6
3. Latency cycles :
Fig. P. 10.7.7
(7), (1, 7), (3, 7), (3, 5), (5), (3, 7, 5, 3, 7), (5, 3, 7)
(7), (2, 7), (2, 2, 7), (4, 7), (4, 3), (3, 7), (4, 3, 4, 7), (9), (2, 9),
Simple cycles :
(4, 9) (3, 9)
table. 4 4 20 nsecs
1 1
Clock cycle (··· frequency = = )
0 1 2 3 4 5 6 20 nsecs
Stage
1
S1 = = 25 MIPS
70 nsecs
S2
Ex. 10.7.8 : For a unifunction pipeline, the forbidden set of
S3
latencies is as given below.
1. Determine latencies in Forbidden list F
F = {1, 3, 6} with the largest forbidden
and collision vector C
latency = 6
2. Draw the state transistor diagram
1. Obtain collision vector
3. List all simple cycles and greedy cycles
2. Draw the state diagram
4. Determine minimum average latency
3. State all simple and greedy cycles
(MAL)
4. Obtain MAL
5. For a pipeline clock period = 20 ns.
Soln. :
Determine maximum throughput of the
1. Collision vector (C) = (100101)
pipeline.
1 2 3 4 5
S1
S2
Fig. P. 10.7.8 S3
3. Latencies : (5), (4), (4, 5), (2, 5), (2, 2, 5), (7), (2, 7), Also find the throughput for = 25 nsecs
(4, 7) Soln. :
Simple latencies : (5), (4), (4, 5), (2, 5), (7), (2, 7)
1. Collision vector (C) = (0110)
Greedy latency : (2, 2, 5) 2. State transition diagram : (Refer Fig. Ex. 10.7.10)
4. Optimal latency : (2, 2, 5) 3. Latencies : (4), (1,4)
2+2+5 Simple latencies : (4), (1,4)
Minimal average latency (MAL) = =3
Note : at le 3
S1 1+4
Minimal average latency (MAL) = = 2.5
2
S2
Ex. 10.7.11 : A certain pipeline with the four stages S1, S2,
Soln. : S3 and S4 is characterized by the following
Table P. 10.7.11.
Table P. 10.7.11
Fig. P. 10.7.9 t0 t1 t2 t3 t4 t5 t6
Latency
[expressed as operations/second or operations/cycles]
In a pipelined processor,
Fig. P. 10.7.11
1
3. Latency cycle : (6), (1, 6), (3), (3, 6), (3, 3, 6), Throughput
Latency
(6, 1, 6), (3, 6, 6) Pipelining : To implement pipelining, designers divide a
4.
Simple cycles : (6), (1, 6), (3), (3, 6)
Greedy cycles : (3, 3, 6), (6, 1, 6), (3, 6, 6)
Optimal constant latency : (3)
processor’s data path into sections called stages and
place pipeline latches between each section.
As shown in Fig. 10.8.1, at start of each cycle, the
Minimum average latency (MAL) = 3 pipeline latches read their inputs and copy them to their
outputs.
10.8 Instruction Pipelining and Pipelining
Stages :
and writes the result back into the register file. This
approach is called unpipelined approach.
Pipelining is a technique for overlapping the execution The amount of data path that a signal travels through in
of several instructions to reduce the execution time of a one cycle is called a stage of the pipeline.
set of instructions. A five-stage pipeline is shown in Fig. 10.8.1(b).
Each instruction takes the same amount of time to Stage 1 : Fetch block
execute in a pipelined processor as it would in a Stage 2 : Decode block
non-pipelined processor, but the rate at which
Stage 3, 4, 5 are subsequent blocks in execution process.
Fig. 10.8.2 shows the Instruction flow in a pipelined pipelining technique if the time taken for each
processor. stage is 20 ns.
Cycle Soln. : n = 100 instruction, K = 5, = 20 ns
1 2 3 4 5 6 7 8 9 10 Execution time pipelined = (5 + 100 – 1)
(Instruction) = (5 + 99) 20 ns
Pipeline I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 = 2080 ns.
IF
stages Execution time unpipelined = (K ) n = 5 20 ns 100
DE I1 I2 I3 I4 I5 I6 I7 I8 I9
= 10000 ns.
FO I1 I2 I3 I4 I5 I6 I7 I8
10000
EX I1 I2 I3 I4 I5 I6 I7 Speedup ratio is = = 4.80 times.
2080
WB I1 I2 I3 I4 I5 I6
10.9 Pipeline Hazards :
I1 : executed in 5th cycle
I2 : executed in 6th cycle Pipelining increases processor performance by
I3 : executed in 7th cycle increasing instruction throughput, because several
instructions are overlapped in the pipeline, cycle time
at le Fig. 10.8.2
25 ns. What is the cycle time of a pipelined 1. Structural hazards (Resource conflicts) :
version of the processor with 5 evenly divided These hazards are caused by access to memory by two
pipeline stages, if each pipeline latch has a instructions at the same time. These conflicts can be
latency of 1 ns?
slightly resolved by using separate instruction and data
Soln. : memories.
Cycle time pipelined = (Cycle time unpipelined / Number of stages of pipeline) + Pipeline latch latency
                     = 25 ns / 5 + 1 ns = 6 ns.
Structural hazards occur when the processor's hardware is not capable of executing all the instructions in the pipeline simultaneously.
Structural hazards within a single pipeline are rare on
modern processors because the Instruction Set
To find the speedup of the execution process in a
architecture is designed to support pipelining.
pipelined processor,
2. Data hazards (Data dependency) :
Execution time pipelined = (K + n – 1)
This hazard arises when an instruction depends on the
Execution time unpipelined = (K ) n
result of a previous instruction, but this result is not yet
Where, n = Number of instructions available.
= Time taken for each stage These are divided into four categories :
K = Number of stages in pipeline 1. RAW – Hazard (Read after write Hazard)
Ex. 10.8.2 : If a processor executes 100 instructions in a 2. RAR – Hazard (Read after read Hazard)
pipelined (5 stage) processor and unpipelined 3. WAW – Hazard (Write after write Hazard)
processor. What is the speedup achieved by 4. WAR – Hazard (Write after read Hazard)
RAR Hazard : should fetch from, it consumes some time and also
RAR Hazard occurs when two instructions both read some time is required to flush the pipeline and fetch
instructions from target location. This time wasted is
from the same register. This hazard does not cause a
called as branch penalty.
problem for the processor because reading a register does
not change the register’s value. Therefore, two instructions 10.9.1 Methods to Resolve the Data Hazards
that have RAR Hazard can execute on successive cycles. and Advances in Pipelining :
Example 1 : Instructions having RAR Hazard. The methods used to resolve the data hazards are
ADD r1, r2, r3 discussed in the following sub sections.
SUB r4, r5, r3 Both Instructions read r3, creating RAR
10.9.1.1 Pipeline Stalls :
RAW Hazard :
The hardware inserts a special instruction called (NOP)
This hazard occurs when an instruction reads a register i.e. no operation instruction known as a bubble into the
that was written by a previous instruction. These are also flow of execution stage of pipeline to resolve the RAW
called as data dependencies (or) true dependencies. hazard between two instructions.
ADD
Example 2 : Instructions having RAW - Hazard.
r1, r2, r3
This method is also called as hardware interlocks.
they appear in the program and uses the same pipeline for Example of a RAW dependency exists between two
all instructions, WAR and WAW hazards do not cause any instructions, instead of transferring an ALU result into a
problem in execution process. destination register, the hardware checks the destination
Example 3 : Instruction having WAR Hazard.
Fig. 10.9.1 : Operand forwarding mechanism in pipelining for resolving a data hazard
In this case the ALU stage or the execution stage of the language program, it detects the data dependencies and
pipeline will forward the data to the next instruction as re-orders the instructions.
shown in the Fig. 10.9.1. Fig. 10.9.1 assumes that the If necessary to delay the loading of the conflicting data
system is 6 stage pipelined system. it inserts no-operation instruction (NOP).
As shown in Fig. 10.9.1 the data i.e. the value of register
r1 is passed from the first instruction to the second
10.9.2 Handling of Branch Instructions to
Resolve Control Hazards :
instruction. Actually the value of the register r1 is
The methods used to resolve the control hazards are
updated by the write operand stage of the first
discussed in the following sub sections.
instruction. But, before that the same is required by the
execute stage of second instruction. Hence the value of 10.9.2.1 Pre-Fetch Target Instruction :
this register is passed to the second instruction. One way of handling a conditional branch is to prefetch
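The decision taken by the forwarding (bypass) logic can be expressed very compactly : if the destination register of the instruction in the ALU/execute stage matches a source register of the following instruction, the ALU result is fed to that instruction directly instead of waiting for the write-back stage. The C sketch below shows only this comparison; the instruction structure and register numbering are assumptions made for the illustration.

#include <stdio.h>

struct instr {
    int dest;      /* destination register number */
    int src1;      /* first source register       */
    int src2;      /* second source register      */
};

/* Returns 1 when the result of 'producer' must be forwarded to 'consumer'. */
static int needs_forwarding(const struct instr *producer, const struct instr *consumer)
{
    return consumer->src1 == producer->dest || consumer->src2 == producer->dest;
}

int main(void)
{
    struct instr i1 = { 1, 2, 3 };   /* e.g. ADD r1, r2, r3 */
    struct instr i2 = { 4, 1, 5 };   /* e.g. SUB r4, r1, r5 */

    if (needs_forwarding(&i1, &i2))
        printf("Forward r%d from the ALU stage to the next instruction\n", i1.dest);
    else
        printf("No forwarding needed\n");
    return 0;
}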
10.9.1.3 Dynamic Instruction Scheduling (or) the target instruction in addition to the instruction
Out-Of-Order (OOO) Execution :
to speculate the outcome of a conditional branch
ADD AX,[SI]
instruction before it is executed.
INC SI
The pipeline then begins pre-fetching the instructions
ADD AX,[SI]
stream from the predicted path.
A correct prediction eliminates the wasted time caused Thus you will notice that we have unrolled the loop, and
written the loop for the number of times it was to be
by branch penalties.
10.9.2.6 Loop Unrolling Technique : was to be repeated for only 5 times, but if the loop was
larger and had to be repeated for say 100 or 1000 times,
This is a very superb solution to handle the stalls due to the memory consumed would be very huge.
branching in loops.
In this case a code which has a loop that has to be
10.9.2.7 Software Scheduling or Software
Pipelining :
executed multiple times, will be actually stored multiple
times (or unrolled) so as to remove the need of In case of software pipelining, the iterations of a loop of
branching. the source program are continuously initiated at regular
Let us see how this can be implemented with an intervals, before the earlier iterations complete. Thus
example. taking advantage of the parallelism in data path.
If there is a code for adding an array of 5 numbers, the
It can be said that software scheduling, schedules the
loop can be written as shown in the code below (using
operations within a loop, such that an iteration of the
processor 8086) :
loop can be pipelined to yield optimal performance.
Label Instructions
The sequences of the instructions before steady state
MOV AX,0000H
are called as PROLOG, while the ones after the steady
MOV CX,0005H
state are called as EPILOG.
AGAIN : ADD AX,[SI]
Let us see this with an example. Suppose the source
INC SI code is
DEC CX for(i=0;i<=n-1;i++)
JNZ AGAIN a[i]=a[i]+10;
The 8086 loop above can be unrolled to avoid stalling by simply repeating its body (ADD AX,[SI] ; INC SI) the required number of times instead of branching back with JNZ.

When the C loop is executed by a processor, the processor will do the following in each iteration :

for(i=0;i<=n-1;i++)
{
    Load a[i];
    Add a[i]+10;
    Store a[i];
}

Here you will notice that the three instructions inside the loop (in each iteration) all operate on the same data a[i].
When this is converted to a pipeline, it will look as shown in Fig. 10.9.2. The three instructions, one below the other, are dependent and hence cannot be pipelined; but the instructions that are circled can be pipelined.

(Fig. 10.9.2 shows the Load, Add and Store operations of successive iterations, e.g. Load a[0]; Add a[0]+10 with Load a[1]; Store a[0] with Add a[1]+10 and Load a[2]; and so on up to Store a[4] and Add a[5]+10, overlapped diagonally.)

With software pipelining, the loop body is rewritten so that in the steady state each pass works on three different elements, and an epilog finishes the last iterations :

{
    Store a[i-2];
    Add a[i-1]+10;
    Load a[i];
}
Store a[n-2];
Add a[n-1]+10;

Thus, you will notice that inside the for loop, i.e. in each iteration, each of the three instructions is working on different data; they are therefore not dependent on each other, allowing pipelining of the three instructions without any hazards.

Trace scheduling is another way of handling the hazard due to branching. Let us see how this can be implemented.
In this case the probability of the branch being taken or not taken is found. Based on this, the code is written for the most probable path calculated earlier. This code is called the trace.
The other blocks of code are made for the less probable cases, i.e. if branching is taken. Hence this trace code and
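For completeness, the software-pipelined loop above can be written as the compilable C sketch below. The array size, its contents and the explicit prolog/epilog arrangement are our own assumptions; the +10 operation and the Store a[i-2] / Add a[i-1]+10 / Load a[i] steady-state pattern follow the example.

#include <stdio.h>

#define N 8

int main(void)
{
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int next, sum;

    /* Prolog : start the first two iterations. */
    next = a[0];                     /* Load a[0]     */
    sum  = next + 10;                /* Add a[0]+10   */
    next = a[1];                     /* Load a[1]     */

    /* Steady state : store for i-2, add for i-1, load for i. */
    for (int i = 2; i < N; i++) {
        a[i - 2] = sum;              /* Store a[i-2]  */
        sum      = next + 10;        /* Add a[i-1]+10 */
        next     = a[i];             /* Load a[i]     */
    }

    /* Epilog : finish the last two iterations. */
    a[N - 2] = sum;                  /* Store a[n-2]  */
    sum      = next + 10;            /* Add a[n-1]+10 */
    a[N - 1] = sum;                  /* Store a[n-1]  */

    for (int i = 0; i < N; i++)
        printf("%d ", a[i]);         /* prints 11 12 13 ... 18 */
    printf("\n");
    return 0;
}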
Each instruction has the operands and a predicate. This
This can be divided into four blocks as shown in the
removes the branching instructions and hence the stall
Fig. 10.9.3.
of pipeline.
An example of predicate instruction is given below,
CMOVZ AX, BX, CX.
Although in 25% cases we will need to branch to In this case the data is brought from the memory, well
Block 1, but there would be only one branching and not before it is needed.
multiple branching as required in the previous case. The compiler indicates the data that will be required in
the later parts of the program and the corresponding
Trace :
data is brought and kept in the processor.
Load a[i] into say AL This removes the latency of memory accesses required
Compare AL with 0 for the data to be brought from the memory.
If not equal to zero then As the data required later is speculated and brought in
branch to label over advance it is called as speculative loading of data.
Block 1
Add AL with 10
Over : Increment AL 10.9.2.11 Register Tagging :
Multiply AL with itself
Store the result in a[i] Multiply AL with itself Register tagging is normally done by a unit called as
Store the result in a[i] Reservation Station (RS) in a processor.
This reservation station is used in order to resolve the
data or resource conflicts amongst the multiple
Fig. 10.9.4 instructions entering the processor.
The operands are made to wait in the reservation station The number of speculative instructions in the instruction
until their data dependencies are resolved. window or the reorder buffer. Typically only a limited
A tag is used to identify each reservation station, and number of instructions can be removed each cycle.
the tag unit keeps on monitoring these reservation Misprediction is expensive (11 or more cycles in the
stations. Pentium II).
This tag unit also monitors all the registers used 10.9.3.2 Static Branch Prediction :
currently or the reservation stations. This technique is
called as register tagging. Static Branch Prediction predicts always the same
This mechanism allows to resolve the register conflicts direction for the same branch during the whole program
and hence the resultant data hazards. execution.
The reservation stations can also be used as buffers It comprises hardware-fixed prediction and compiler-
between the various stages of pipeline in the processor. directed prediction.
These stages can work simultaneously once the conflict
Simple hardware-fixed direction mechanisms can be :
is resolved.
(a) Predict always not taken
10.9.3 Branch Prediction :
Branch prediction foretells the outcome of conditional
branch instructions. Excellent branch handling
(b)
(c)
Predict always taken
Whether mis-speculated instructions can simply be removed from the internal buffers, or have to be executed and can only be removed in the retire stage, also affects the cost of misprediction.
During the start-up phase of the program execution, where a static branch prediction might be less effective, the history information is gathered and dynamic branch prediction gets more effective. In general, dynamic branch prediction gives better results than static branch prediction, but at the cost of increased hardware complexity.

10.9.3.5 One-bit Dynamic Branch Predictor :

Fig. 10.9.6

A one-bit predictor correctly predicts a branch at the end of a loop iteration, as long as the loop does not exit. It mispredicts twice for every execution of the loop : once when exiting the loop instead of looping again, and once when executing the first loop iteration, where it predicts exit instead of looping.
Such a double misprediction in nested loops is avoided by keeping more prediction history, i.e. by a two-bit predictor. (A small sketch of a one-bit predictor is given below, after the classification list.)

Hence all the operations, i.e. accessing the data or instructions from memory, accessing the I/O devices and the internal ALU operations, can be done simultaneously. In the 8086 processor, there are two separate units to perform the memory accesses and the ALU operations, named the Bus Interface Unit (BIU) and the Execution Unit (EU).

10.11.1 Flynn’s Classification of Parallel Computing :

A method introduced by Flynn for the classification of parallel processors is the most common. This classification is based on the number of instruction and data streams :
1. Single Instruction Single Data (SISD)
2. Single Instruction Multiple Data (SIMD)
3. Multiple Instruction Single Data (MISD)
4. Multiple Instruction Multiple Data (MIMD)
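A one-bit predictor of the kind described in section 10.9.3.5 can be sketched as below : a single bit remembers the last outcome of the branch and is used as the prediction the next time the branch is met. The loop pattern used for the demonstration (a branch taken four times, then not taken, repeated twice) is an assumption chosen only to show the double misprediction per loop execution.

#include <stdio.h>

int main(void)
{
    int predictor = 0;      /* 1 = predict taken, 0 = predict not taken */
    int mispredictions = 0;

    /* Outcomes of one loop branch: taken 4 times, then the loop exits. */
    int outcomes[] = { 1, 1, 1, 1, 0,    /* first execution of the loop  */
                       1, 1, 1, 1, 0 };  /* second execution of the loop */
    int n = (int)(sizeof outcomes / sizeof outcomes[0]);

    for (int i = 0; i < n; i++) {
        int prediction = predictor;      /* predict the last outcome     */
        if (prediction != outcomes[i])
            mispredictions++;
        predictor = outcomes[i];         /* remember the real outcome    */
    }

    /* Apart from the cold start, it mispredicts on loop exit and again
       on the first iteration of the next execution of the loop.        */
    printf("Mispredictions = %d out of %d branches\n", mispredictions, n);
    return 0;
}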
Fig. 10.9.7
from the processor and decodes it.
2. Single Instruction Multiple Data (SIMD) : This system is not used much, but can be used in cases
where in a data has to undergo many computations to
get the result for e.g. to add two floating point numbers.
Fig. 10.11.3 shows the implementation of such a system.
4. Multiple Instruction Multiple Data (MIMD) :
Examples of this kind of systems are SMPs (Symmetric
Fig. 10.11.2 : SIMD organization
Multiprocessors), clusters and NUMA (Non-Uniform
In this case the same instruction is given to multiple
Memory Access). Fig. 10.11.4 shows the structure of such
processing elements, but different data.
a system.
This kind of system is mainly used when many data
(array of data) have to be operated with same operation.
Vector processors and array processors fall into this
category.
Fig. 10.11.2 shows the structure of a SIMD system
The data stream is single. In this case the data is taken Intel brought its mainstream desktop CPU lineup into
by the first processing element. the Nehalem era today with the launch of the Core i7
This processing element performs an operation on the 860 and 870, and the Core i5 750. Also launched the P55
chipset, which implements a new system architecture
data given to it and forwards the result to the next
new PC system architecture from even AMD's offerings, As the GPU gained in size and importance, the standard
there is now a PCIe interface that enables the GPU to PC system essentially took on a kind of hacked-together
attach directly to the processor socket. non-uniform memory architecture (NUMA) topology,
This latter move was made in anticipation of two things : with two main pools of DRAM (main memory and
(1) The GPU will migrate right into the processor socket at a graphics memory) attached to the two main processors
later point when Intel releases a CPU with an on-die (the CPU and the GPU).
GPU integrated into it, and As the amount of graphics memory increased to the
(2) For a discrete GPU, Intel hopes you'll use Larrabee. point where the GPU became a second system on a
daughtercard, this topology began to get more and
To understand what all of this means, let's look at a few
more unbalanced and inefficient in its use of memory
figures.
and bandwidth.
The P55's new system architecture :
In 2003, AMD made the obvious improvement by
Fig. 10.11.6 is of a standard Core 2 Duo system, and it
represents the general layout of an Intel system up until
Nehalem or an AMD system up until the Opteron.
moving the memory controller hub up to the CPU
socket, so that main memory could attach directly to the
CPU the way that GDDR had been directly attached to
the GPU for some time. You can see the results below,
6. Intel HD graphics.
8. Multi thread/multi-core.
speed, bigger cache and hyper-threading features in i7
(C-8331) Fig. 10.11.8 : Intel's P55 platform
processor make them more suitable than i5 processors
Intel's P55 can be seen as an evolution of the AMD for certain applications like video encoding, data
topology shown previously, with the graphics hub and
crunching, graphic-intensive work, multitasking and
memory hub functionality all moved right onto the
processor die. Because the northbridge is completely
also typically included in the chipset count). 2. i7 mobile processor are quad-core.
The PCH is connected to the processor socket by the 3. i5/i7 processors support hyper-threading.
relatively low-bandwidth (2GB/s) DMI bus that used to 4. i5/i7 processors come with direct media interface
connect the MCH to the ICH. Disk I/O, network traffic, and integrated GPU.
and other types of I/O will have to share this link. This 5. i5/i7 processors come with Ivy bridge.
shouldn't be a problem for single-socket systems, 6. i5/i7 processors come with turbo boost facility.
though.
The basic block diagram of i5/i7 processor is given in
So with the advent of the P55, Intel's core logic has gone from a two-chip to a one-chip implementation,
i7 mobile processors are the next generation 64-bit,
pushing ahead of the comparable AMD platform. In
multi-core mobile processors built on 45-nanometer
theory, this very tight, direct coupling of the GPU +
process technology.
GDDR and CPU + DRAM systems should make for a
It has quad-core.
performance boost vs. both earlier topologies.
It has 32-KB instruction and 32-KB data first-level cache
10.11.2 i5/i7 Mobile Version :
(L1) for each core.
The Intel i5 mobile processor has the following
It has 256-KB shared instruction/data second-level
features :
cache (L2) for each core.
1. Intel hyper-threading technology.
It can have upto 8-MB of shared instruction/data third-
2. Intel turbo boost technology.
level cache (L3), shared among all cores.
3. Number of simultaneous threads.
ns e
io dg
at le
ic w
bl no
Pu K
ch
It has PCI express bus. It is an improved version of older PCI, PCI-X and AGP bus standards.
Intel Core i7-900 series processors support one 16-lane PCI Express port, configurable as two 8-lane PCI Express ports, intended for graphics.
The second-generation Intel Core i3, i5 and i7 microprocessors are the ones we normally see in computers today. A comparison of the same is given in Table 10.11.1.
Table 10.11.1 : Comparison of second-generation Core processors
6. Intel Turbo Boost 2.0 : Not present (Core i3) ; Present (Core i5) ; Present (Core i7) ; Present (Core i7 Extreme)
8. Best Desktop processor : Intel Core i3-2130 (3.4 GHz, 3 MB) ; Intel Core i5-2550K (3.4 GHz, 6 MB) ; Intel Core i7-3930 (3.2 GHz, 12 MB) ; Intel Core i7-3960 (3.3 GHz, 15 MB)
9. Best Mobile (Laptop) processor : Intel Core i3-2370 (2.4 GHz, 3 MB) ; Intel Core i5-2540M (2.6 GHz, 3 MB) ; Intel Core i7-2860 (2.5 GHz, 8 MB) ; Intel Core i7-2960XM (2.7 GHz, 8 MB)
Q. 1 What are the different addressing modes ? Explain any two.
Q. 9 Write a short note on pipeline processing.
Q. 10 Explain six-stage pipelining.
Chapter 11
Memory & Input / Output Systems
Syllabus
Memory Systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory
to processor, Memory read and write cycle, Characteristics of semiconductor memory : SRAM, DRAM
and ROM, Cache memory – Principle of locality, Organization, Mapping functions, Write policies,
Replacement policies, Multilevel caches, Cache coherence.
Input / Output systems : I/O module, Programmed I/O, Interrupt driven I/O, Direct Memory Access
(DMA).
Case study : USB flash drive.
Chapter Contents
11.1 Introduction to Memory and Memory 11.8 Cache Memory : Concept, Architecture
Parameters (L1, L2, L3) and Cache Consistency
11.2 Memory Hierarchy : Classifications of 11.9 Cache Mapping Techniques
Primary and Secondary Memories
11.3 Types of RAM 11.10 Pentium Processor Cache Unit
11.4 ROM (Read Only Memory) 11.11 Input / Output System
11.5 Allocation Policies 11.12 I/O Modules and 8089 IO Processor
11.6 Signals to Connect Memory to Processor 11.13 Types of Data Transfer Techniques :
and Internal Organization of Memory Programmed I/O, Interrupt Driven I/O and DMA
11.7 Memory Chip Size and Numbers
Memory Parameters :
When a memory is considered, there are various characteristics of that memory that are taken into account. The characteristics of memory are based on the following :
Unit of transfer : This refers to the size of the data that is transferred in one clock cycle. It mainly depends on the data bus size. The data, as discussed earlier, may be internal or external, and accordingly will be the data transferred in one clock pulse :
(a) Internal : It is related to the communication of data with the memory that is directly accessible. It is usually governed by the data bus width.
(b) External : This is the data communication with the external removable memory or virtual memory. It is usually a block which is much larger than a word.
The size of the byte has been hardware dependent in many computer architectures, and no single definition existed. The standard of eight bits is a convenient power of two, permitting the values from 0 to 255 in one byte. With ISO/IEC 80000-13 this common meaning was codified in a formal standard. Many types of applications use variables representable in eight bits or a multiple of eight bits.
4. Access method : In sequential access (for example a magnetic tape), all the earlier locations have to be passed over to reach the required one - the first song, the second song, and then the first stanza and second stanza of the third song, and only then the required data. In associative access, each location has a tag associated with it, and to reach the required location the tags are compared with the location to be accessed. There are techniques used to reach the required tagged location at a faster speed.
5. Performance : The performance of the memory depends on its speed of operation or the data transfer rate. The data transfer rate is the rate at which the data is transferred. The speed of operation depends on two things :
(a) Access time : The time between providing the address and getting the valid data from memory is called its access time, i.e. the address-to-data time.
(b) Memory cycle time : The time required for the memory to "recover" before the next access, i.e. the time between two addresses, is called the memory cycle time.
6. Physical type : The physical material from which the memory is made can differ, for example :
(a) Semiconductor : Memory can be made using semiconductor material, i.e. ICs, for example RAM.
(b) Magnetic : Memory can also be made using a magnetic read and write mechanism, for example the magnetic disk.
(c) Optical : Optical memories, i.e. memories that use optical methods to read and write, have become popular these days, for example CD and DVD.
(d) There are some other methods using which data was stored in the early days, like Bubble and Hologram memories.
7. Physical characteristics :

11.2 Memory Hierarchy : Classifications of Primary and Secondary Memories :

The memory hierarchy explains that the nearer the memory is to the processor, the faster is its access ; but the memory also becomes costlier as it gets closer to the processor. The following sequence is from faster to slower, or costlier to cheaper, memory :
1. Registers, i.e. inside the CPU.
2. Internal memory, which includes one or more levels of cache and the main memory. Internal memory is always RAM - SRAM for cache and DRAM for main memory. The differences between SRAM and DRAM will be seen in a later section in this chapter. This is also called the primary memory.
3. External memory like hard disks, CDs, DVDs etc. This is the secondary memory.
Fig. 11.2.1 shows the memory hierarchy based on the closeness to the processor. The registers, as discussed, are the closest to the processor and hence are the fastest, while off-line storage like magnetic tape is the farthest and also the slowest.
The list of memories from closest to the processor to the farthest is given below :
1. Registers
2. L1 Cache
3. L2 Cache
4. Main memory
5. Magnetic Disk
6. Optical
7. Tape
To have a large, fast memory is very costly, and hence the different memories at different levels give the memory hierarchy. How this memory hierarchy gives faster operation, and some other terms like cache etc., will be understood in the subsequent sections.
11.3 Types of RAM :

RAM (Random Access Memory) is called so because any memory location in this IC can be accessed randomly. There are two types of RAM, namely SRAM (Static RAM) and DRAM (Dynamic RAM).
In a DRAM cell the data is stored as charge on a capacitor : to write, the data is given on the data line and is written onto the capacitor.
Fig. 11.3.1 : (a) DRAM cell structure
On the other hand, the SRAM has each cell made of a flip-flop, and thus requires more components as compared to the DRAM cell. Hence it occupies more space on the silicon wafer, and is costlier.
Table 11.3.1 shows the differences between SRAM and DRAM.
Table 11.3.1
Sr. No.  SRAM                                                        DRAM
1.  No refreshing required.                                          Continuous refreshing required (disadvantage).
2.  It is faster in accessing data.                                  It is slower in accessing data.
3.  It takes more space on the chip, as more components are         It takes less space on the chip, as fewer components are
    required per bit.                                                required per bit.
4.  Hence it is also costly.                                         Hence it is cheaper.
5.  Each cell is made of a flip-flop.                                Each cell is made of a transistor and a capacitor.
7.  SRAM is mainly used for cache memory.                            DRAM is mainly used for main memory.

11.4 ROM (Read Only Memory) :

ROM or read-only memory is quite cheap compared to RAM and is mainly used for implementation of the secondary or the virtual memory. Thus the application of ROM is for virtual or secondary memories like hard disks and external storage.
The PROM (Programmable Read Only Memory), also sometimes referred to as OTP (One Time Programmable) memory, can be written onto only once. When manufactured it is blank ; once written, it cannot be re-written. There are diodes that are used to store data, and they are either fused or kept as they are in order to store the data. The internal diagram of the PROM is shown in Fig. 11.4.1.
The AND array is used as the address lines and the OR array as the data lines. The AND array (on the left in Fig. 11.4.1) comes with predefined connections, as shown in Fig. 11.4.1, in binary sequence - in this case from "000" to "111", as there is a three-bit address.
The OR array (on the right-hand side in Fig. 11.4.1) comes with programmable links ; depending on whether these links are fused or kept intact, the data will be available on the OR gate outputs.
Fig. 11.4.2 : Read-Write mechanism
The EPROMs, as discussed earlier, are these days replaced with EEPROMs (Electrically Erasable Programmable Read Only Memory). The EEPROMs are erased by applying an extra supply voltage.

11.4.2 Magnetic Memory :

Magnetic disks are very cheap and are widely used as external storage and as hard disks. When used as hard disks, they are called Winchester disks. Initially, magnetic tapes were used for storage.
The head may be single for both read and write operations, or separate heads may be used. During a read/write operation, the head is stationary while the platter (disk) rotates. The write operation is done by passing current through a coil that produces a magnetic field ; pulses are sent to the head, and thus the magnetic pattern, i.e. NS (North-South) or SN (South-North), is recorded on the surface below.
Fig. 11.4.4 : Data storage format in magnetic memory
A floppy disk is single platter, while a hard disk or Winchester disk is multi-platter, as shown in Fig. 11.4.3(b). In this case one head for each side of the multiple platters is mounted, forming a head stack assembly.
It is called a Winchester hard disk because it was developed by IBM in Winchester (USA). It is a sealed unit with the platters, and the heads fly on a boundary layer of air as the disk spins. Also, there is a very small head-to-disk gap, making it more robust. The Winchester hard disk is cheap and is the fastest external storage.

A. CD-ROM :
In the CD-ROM the data is stored as pits and lands, as shown in Fig. 11.4.6(a). These pits and lands are read by a reflected laser. The CD has a constant packing density, hence a constant linear velocity across a track is required, as against the constant angular velocity in the case of magnetic discs.
Fig. 11.4.6(a) shows that the CD is made up of three layers, namely the polycarbonate plastic, aluminium and a protective material like acrylic. The laser beam incident on the highly reflective substance like aluminium returns back after some amount of time. Based on this time gap, the optical disc reader can determine whether there was a land or a pit : if it is a land the beam takes more time to return, while it takes less time in the case of a pit, as seen in Fig. 11.4.6(a).
Fig. 11.4.6(a) : Construction of CD
The data format on a CD-ROM is shown in Fig. 11.4.6(b). Initially a byte 00H is stored, followed by 10 bytes of FFH and again a 00H ; these are called the 12 synchronization bytes.
Next is the 4-byte ID (identity), giving the time required for this data to be played (in minutes and seconds), the sector in which the data is placed, and the mode. There are three modes :
o Mode 0 indicates a blank data field
B. DVD :
The major difference between a CD and DVD is that a DVD has multiple layers and hence very high capacity.
Another major difference of a DVD with respect to a CD is that the DVD has a denser data storage mechanism, which results in a data storage capacity of around 4.7 GB per layer of the DVD.
There are DVDs available with single layer as well as multiple layers.
Fig. 11.4.7 shows the constructional differences of a CD and DVD.
Fig. 11.4.7(Contd...)
(a) CD-ROM - Capacity 682 MB
As seen in Fig. 11.4.7(b), the double-sided, two-layer DVD has a reflective and a semi-reflective layer on both sides. The laser can be operated at low or high intensity : the low-intensity beam is reflected by the semi-reflective substance, while the high-intensity beam is reflected by the highly reflective substance, thus giving a mechanism to read the data written on both the layers of each side.

11.5 Allocation Policies :

Partitioning refers to the logical division of the memory into subparts so that they can be accessed individually by tasks. Fragmentation generally happens when memory blocks have been allocated and are then freed randomly. This results in splitting of the partitioned memory into small non-contiguous fragments.
There are 3 memory allocation policies :
(1) Best fit : In this case the smallest available fragment that can hold the data is searched for, and the required data is stored in that fragment. The smallest fragment of size equal to or greater than the data should be used to store the data.
(3) Next fit : In this case the immediately next empty block of a size equal to or greater than the size of the data to be stored is searched for sequentially, and the required data is stored there.

Ex. 11.5.1 : Given the memory partitions of size 100 K, 500 K, 200 K, 300 K and 600 K (in order), how would each of the first-fit, best-fit and worst-fit algorithms place the processes of 212 K, 417 K, 112 K and 426 K (in order) ? Which algorithm makes the most efficient use of memory ?
Soln. :
I] First-Fit :
(S9.5) Fig. 11.5.1
Partition number 2 of size 500 K is assigned to P1 (size = 212 K). It is the first partition that can accommodate P1.
Partition number 5 of size 600 K is assigned to P2 (size = 417 K). It is the first empty partition that can accommodate P2.
P3 is assigned to partition 3.
P4 cannot be executed.
Memory utilization = Memory utilized / Total memory = (212 K + 417 K + 112 K) / 1700 K = 741 / 1700 = 0.436

II] Best-Fit :

III] Worst-Fit :
(S9.5) Fig. 11.5.1(b)
The largest free partition, no. 5 of size 600 K, is allocated to P1 (212 K).
P2 (size 417 K) is assigned to partition no. 2. Partition no. 2 is the largest free partition and it can accommodate P2.
P4 cannot be executed, as there is no free partition that can accommodate P4.
Memory utilization = Memory utilized by P1, P2, P3 / Total memory = (212 K + 417 K + 112 K) / 1700 K = 741 / 1700 = 0.436
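The three placement policies above can also be simulated with a short program. The following Python sketch is illustrative only (it is not part of the original solution) and assumes that each fixed partition holds at most one process, as in Ex. 11.5.1 :

def place(partitions, processes, policy):
    # partitions and processes are sizes in KB; partitions are numbered from 1
    used = [False] * len(partitions)
    placement = []
    for p in processes:
        candidates = [i for i, size in enumerate(partitions)
                      if not used[i] and size >= p]
        if not candidates:
            placement.append((p, None))              # process cannot be placed
            continue
        if policy == "first":
            i = candidates[0]                        # first partition that fits
        elif policy == "best":
            i = min(candidates, key=lambda i: partitions[i])   # smallest that fits
        else:                                        # "worst"
            i = max(candidates, key=lambda i: partitions[i])   # largest that fits
        used[i] = True
        placement.append((p, i + 1))
    return placement

partitions = [100, 500, 200, 300, 600]
processes = [212, 417, 112, 426]
for policy in ("first", "best", "worst"):
    print(policy, place(partitions, processes, policy))

Running this shows that, under this model, best fit is the only policy that places all four processes, which is why it makes the most efficient use of memory here.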
Fig. 11.6.1 : Block diagram of a memory device

11.7 Memory Chip Size and Numbers :

Number of address lines required vs. size of memory in bytes (size = 2^n for n address lines) :
Number of address lines    Size of memory in bytes
2                          4
3                          8
4                          16
5                          32
6                          64
7                          128
8                          256
9                          512
...
16                         65536 (64 k)

Table 11.7.1 : EPROM ICs available in the market
IC number    Data capacity    Pins
2716         2k x 8           24
2732         4k x 8           24
2764         8k x 8           28
27128        16k x 8          28
27256        32k x 8          28
27512        64k x 8          28
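The two tables above follow directly from powers of two. The short Python sketch below (illustrative only, not from the text; the function names are assumed) computes the number of address lines needed for a given memory size and the number of chips needed for a required capacity :

import math

def address_lines(size_in_bytes):
    # a memory of 2**n bytes needs n address lines
    return int(math.log2(size_in_bytes))

def chips_required(total_bytes, chip_bytes):
    # total memory required divided by the capacity of one chip
    return total_bytes // chip_bytes

print(address_lines(64 * 1024))              # 16 lines for 64 kB
print(chips_required(8 * 1024, 4 * 1024))    # two 4 kB chips give 8 kB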
Soln. :
Note : Let us assume the processor has 16 address lines.
Step 1 : Total EPROM required = 4 kB
Chip size available = 4 kB (IC 2732)
No. of chips required = 4 kB / 4 kB = 1
Chip 1 : Starting address = 0000H
Chip size = 4 kB → 0FFFH
Ending address = 0FFFH
Step 2 : Total RAM required = 2 kB
Chip size available = 2 kB (IC 6116)
No. of chips required = 1
Chip 1 : Starting address = ending address of EPROM + 1 = 0FFFH + 1 = 1000H
Chip size = 2 kB → 07FFH
Ending address = 17FFH

Table 11.7.2 :
X 1 X X    Not selected
X X 0 X    (Power down)
1 0 1 1    Output disable
Truth table for the 6116 (CS, OE and WE are active low) :
CS  OE  WE    Mode
1   X   X     Not selected
0   0   1     Read

Step 3 : Memory map :
                          A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
EPROM chip 1  SA = 0000H   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 0FFFH   0   0   0   0   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ0, ȳ1
RAM chip 1    SA = 1000H   0   0   0   1   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 17FFH   0   0   0   1   0   1  1  1  1  1  1  1  1  1  1  1    → ȳ2

Step 4 : Decoding : The smaller chip size is the RAM, 2 kB = 2^11. Neglect the lower 11 address lines, i.e. A0 to A10, and consider the remaining address lines A11 to A15 for decoding ; hence a 5 : 32 decoder is required, and these five bits are circled in the memory map. The EPROM has two possibilities, 00000b and 00001b, thus it requires the ȳ0 and ȳ1 outputs of the 5 : 32 decoder. The RAM has 00010b, and hence it requires the ȳ2 output of the decoder.
Memory map :
                          A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
EPROM chip 1  SA = 0000H   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 0FFFH   0   0   0   0   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ0
EPROM chip 2  SA = 1000H   0   0   0   1   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 1FFFH   0   0   0   1   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ1
RAM chip 1    SA = 2000H   0   0   1   0   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 2FFFH   0   0   1   0   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ2
RAM chip 2    SA = 3000H   0   0   1   1   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 3FFFH   0   0   1   1   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ3

Decoding : The smaller chip size = 4 kB = 2^12. Neglect the lower 12 address lines (A0 to A11) and consider the remaining lines A12 to A15 for decoding ; hence a 4 : 16 decoder is required, and the four bits are circled as shown in the memory map. EPROM chip one has 0000b, thus it requires ȳ0 ; while EPROM chip two has 0001b, thus it requires ȳ1, and so on.
Ex. 11.7.3 : Interface 8 kB EPROM and 4 kB RAM to a processor with a 16-bit address bus and an 8-bit data bus.
Soln. :
Step 1 : Total EPROM required = 8 kB
Chip size = 8 kB (assume)
No. of chips = 1
Chip 1 : Starting address = 0000H
Chip size = 8 kB → 1FFFH
Ending address = 1FFFH
Step 2 : Total RAM required = 4 kB
Chip size = 4 kB (assume)
No. of chips = 1
Chip 1 : Starting address = 2000H
Chip size = 4 kB → 0FFFH
Ending address = 2FFFH
Step 3 : Memory map :
                          A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
EPROM chip 1  SA = 0000H   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 1FFFH   0   0   0   1   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ0, ȳ1
RAM chip 1    SA = 2000H   0   0   1   0   0   0  0  0  0  0  0  0  0  0  0  0
              EA = 2FFFH   0   0   1   0   1   1  1  1  1  1  1  1  1  1  1  1    → ȳ2
Fig. P. 11.7.3
Fig. P. 11.7.3(a)
Ex. 11.7.4 : A computer system needs 512 bytes of ROM and 512 bytes of RAM. The ROM chip available is of capacity 512 bytes and the RAM chip available is of 128 bytes. Draw the memory address map for the computer system and also draw the connection structure.
Soln. :
Assume that the processor has 10 address lines to interface 1 kB of memory (512 B + 512 B).
Step 1 : Total EPROM required = 512 B
Chip size = 512 B
Number of chips required = 1
Chip 1 : Starting address = 000H ; chip size = 512 B → 01FFH ; ending address = 01FFH
Step 2 : Total RAM required = 512 B
Chip size available = 128 B
Total number of chips required = 4
Chip 1 : Starting address = previous ending + 1 = 0200H ; chip size = 128 B → 007FH ; ending address = 027FH
Chip 2 : Starting address = previous ending + 1 = 027FH + 1 = 0280H ; chip size = 128 B → 007FH ; ending address = 02FFH
Chip 3 : Starting address = previous ending + 1 = 0300H ; chip size = 128 B → 007FH ; ending address = 037FH
Chip 4 : Starting address = previous ending + 1 = 0380H ; chip size = 128 B → 007FH ; ending address = 03FFH
Step 3 : Memory map :
                     A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
EPROM   SA = 000H     0  0  0  0  0  0  0  0  0  0
        EA = 01FFH    0  1  1  1  1  1  1  1  1  1    → ȳ0 · ȳ1 · ȳ2 · ȳ3
RAM 1   EA = 027FH    1  0  0  1  1  1  1  1  1  1    → ȳ4
RAM 2   EA = 02FFH    1  0  1  1  1  1  1  1  1  1    → ȳ5
RAM 3   EA = 037FH    1  1  0  1  1  1  1  1  1  1    → ȳ6
RAM 4   EA = 03FFH    1  1  1  1  1  1  1  1  1  1    → ȳ7
Fig. P. 11.7.4
Absolute (full) decoding logic :
The EPROM chip size = 512 B while the RAM chip size = 128 B ; thus the smaller chip size = 128 B = 2^7. Therefore neglect the lower 7 address lines, i.e. A0 to A6. Since three address lines remain, we need a 3 : 8 decoder. The remaining lines, i.e. A7 to A9, will be the inputs to the decoder, as shown by the circles in the memory map.
Fig. P. 11.7.4(a)
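The start/end addresses and decoder outputs worked out in these examples can also be generated programmatically. The following Python sketch is illustrative only ; the chip list and the 4 kB decoding granularity correspond to the 8 kB EPROM / 4 kB RAM case of Ex. 11.7.3, and the function name memory_map is an assumed name, not something defined in the text :

def memory_map(chips, low_lines):
    # low_lines = number of neglected low address lines (set by the smaller chip)
    block = 1 << low_lines                    # decoding granularity in bytes
    start = 0x0000
    for name, size in chips:
        end = start + size - 1
        # one decoder output per block covered by the chip
        outputs = [f"y{addr >> low_lines}" for addr in range(start, end + 1, block)]
        print(f"{name}: {start:04X}H - {end:04X}H  uses decoder outputs {outputs}")
        start = end + 1

memory_map([("EPROM", 8 * 1024), ("RAM", 4 * 1024)], low_lines=12)
# EPROM: 0000H - 1FFFH  uses decoder outputs ['y0', 'y1']
# RAM:   2000H - 2FFFH  uses decoder outputs ['y2']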
11.8 Cache Memory : Concept, Architecture (L1, L2, L3) and Cache Consistency :

Before going to the cache of the Pentium processor, we will see some basics of the cache, like its operation, advantages, the principle of locality of reference, cache architectures, write policies etc.

11.8.1 Cache Operation :

Implementation of a cache memory subsystem is an attempt to achieve almost all accesses with zero wait states. Assume the cache memory is empty in the beginning (after reset). The following sequence takes place :
1. The processor performs a memory read cycle to fetch the first instruction from memory.
2. The cache controller uses the address issued by the processor to determine if a copy of the requested information is already in the cache memory. But a cache miss occurs, as the cache memory is empty.
4. This will consume some wait states. A block (line) of data is sent to the cache. No performance gain is achieved, due to the absence of information in the cache.
6. The requested information is given from the DRAM to the processor. The information is also copied into the cache memory by the cache controller, and it updates its directory to track the information stored in the cache memory.

Locality of reference :
The cache gives a performance gain because of the characteristics of programs, which run in relatively small loops in consecutive memory locations. For code with little locality, many accesses will result in cache misses. Fortunately, most programs run in loops, and hence a high percentage of cache hits (85 - 95 %) is experienced. Besides locality of reference, various other factors also influence the hit rate.
(1) Temporal locality :
(a) Since programs have loops, the same instructions are required frequently, i.e. programs tend to use the most recently used information again and again.
(b) If some information in the cache is not used for a long time, then it is less likely to be used again.
(2) Spatial locality :
(a) Programs and the data accessed by the processor mostly reside in consecutive memory locations.

Cache architectures :
1. Look-through cache design
2. Look-aside cache design
Fig. 11.8.4
Look-through cache design :
(a) The performance of systems incorporating a look-through cache is typically higher than that of systems incorporating a look-aside cache.
(b) Data from main memory (DRAM) is not transferred to the processor using the system bus ; hence the system bus is free for other bus masters (like the DMAC) to access the main memory.
(c) This system isolates the processor's local bus from the system bus, hence achieving bus concurrency.
(d) The major advantage is that two bus masters can operate simultaneously : the processor accesses the look-through cache while another bus master, such as the DMA controller, accesses the system bus.
(e) To expansion devices, a look-through cache controller looks like a system processor.
(f) During memory writes, a look-through cache provides zero wait state operation (using posted writes) for write misses.
Advantages :
(a) It reduces the system and memory bus utilization, leaving them available for use by other bus masters.
(b) It allows bus concurrency, where both the processor and another bus master can perform bus cycles at the same time.

Look-aside cache design :
(a) In this case the processor is directly connected to the system bus or memory bus.
(b) When the processor initiates a bus access, the cache controller as well as the main memory detect the bus access address.
(c) The cache controller sits aside and monitors each processor memory request to determine if the cache contains a copy of the requested information.
(d) If it is a cache hit, the cache controller terminates the bus cycle by instructing the memory subsystem to ignore the request. If it is a cache miss, the bus cycle completes in the normal fashion from memory (and wait states are required).
Advantages :
(a) Cache miss cycles complete faster in a look-aside cache, as the bus cycle is already in progress to memory and hence no look-up penalty is incurred.
(b) Simplicity of design, because only one address is to be monitored by the cache controller - from the processor and not from I/O devices.
(c) Lower cost of implementation due to their simplicity.
Disadvantages :
(a) The processor requires system bus utilization for its every access, to access both the cache subsystem and memory.
The most significant s bits specify one memory block to which the cache line corresponds. The MSBs are split into a cache line field of r bits and a tag of s - r (most significant) bits.
Example : Let the cache be of 64 kByte, divided into blocks of 4 bytes ; hence the cache has 16k (2^14) lines of 4 bytes. And let the main memory size be 16 MBytes, which requires 24 address bits (2^24 = 16M).
Tag (s - r) (8 bits) | Line (r) (14 bits) | Word (w) (2 bits)
Hence the 24-bit address is divided into a 2-bit word identifier (4-byte block) and a 22-bit block identifier, i.e. an 8-bit tag and a 14-bit slot or line field (16K lines = 2^14).
Fig. 11.9.1 shows the organization of the direct mapping cache and the method to access the data from the cache implementing the direct mapping technique. In this case, to search a line in the cache memory, the line field selects the particular line, whose tag is then compared with the tag of the address specified by the processor.
The advantages of Direct Mapping are :
1. Simple implementation
2. Inexpensive
The disadvantages of Direct Mapping are :
1. Fixed location for a given block ; hence if a program accesses 2 blocks that map to the same line repeatedly, cache misses are very high.
Fig. 11.9.1
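A short Python sketch (illustrative only, not from the text) of how a 24-bit address is split into the tag, line and word fields for this direct-mapped example (64 kB cache, 4-byte lines, 16 MB main memory). The address value used is arbitrary :

WORD_BITS = 2                              # 4 bytes per line
LINE_BITS = 14                             # 16k lines in the cache
TAG_BITS  = 24 - LINE_BITS - WORD_BITS     # remaining 8 bits

def split(address):
    word = address & ((1 << WORD_BITS) - 1)
    line = (address >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag  = address >> (WORD_BITS + LINE_BITS)
    return tag, line, word

tag, line, word = split(0xABCDEF)
print(f"tag = {tag:02X}H, line = {line}, word = {word}")
# Two addresses with the same line field but different tags compete for the same
# cache line, which is why repeated accesses to such blocks cause misses.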
The organization of Fully Associative Cache mapping is shown in the Fig. 11.9.2.
Fig. 11.9.2
The advantage of Fully Associative Mapping is :
1. If a program accesses 2 blocks (that would map to the same line in the case of direct mapping) repeatedly, cache misses will not occur.

Set Associative Mapping :
In this case the cache is divided into a number of sets, and each set contains a number of lines. A given block maps to any line in a given set (i mod j), where i is the line number of the main memory block to be mapped and j is the total number of sets in the cache memory.
For example, if there are 2 lines per set, it is called 2-way set associative mapping, i.e. a given block can be in one of 2 lines in only one set.
Fig. 11.9.3
Example : Let the cache be of 64 kByte, divided into blocks of 4 bytes ; hence the cache has 16k (2^14) lines of 4 bytes. And let the main memory size be 16 MBytes, which requires 24 address bits (2^24 = 16M).
For this example, for 2-way set associative mapping the address structure is : 2 bits for one of the 4 words, 8K sets of 2 lines each, hence 13 bits to select a set (2^13 = 8K), and the remaining (24 - 13 - 2 =) 9 bits for the tag.
Tag (9 bits) | Set (13 bits) | Word (2 bits)
In this case the set field is used to determine the cache set to look in, and the tag field is compared to see if we have a hit. Fig. 11.9.3 shows an example of a two-way set associative cache organization.
The advantages of Set Associative Mapping are :
1. If a program accesses 2 blocks that map to the same set repeatedly, cache misses will not occur, because they would go into different lines of the set.
2. Not very complex, because only 2, 4 or 8 parallel comparisons are needed.

Ex. 11.9.1 : A block set associative cache consists of 64 blocks divided in 4-block sets. The main memory contains 4096 blocks, each of 128 words of 16-bit length.
(1) How many bits are there in the main memory address ?
(2) How many bits are there in the cache memory address (tag, set and word fields) ?
Soln. :
(1) Main memory size = 4096 blocks x 128 words = 2^12 x 2^7 = 2^19. Thus 19 main memory address lines are required.
(2) The cache memory has 64 blocks divided in 4-block sets, thus there are 16 sets. Hence 16 = 2^4 ; 4 address lines are needed for the set field. Each block has 128 words ; hence 128 = 2^7, so 7 address lines are needed for the word field.
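The field widths of Ex. 11.9.1 can be checked with a few lines of Python. This sketch is illustrative only and assumes the usual reading of "64 blocks divided in 4-block sets", i.e. a 4-way set associative cache with 16 sets :

import math

cache_blocks, ways = 64, 4
memory_blocks, words_per_block = 4096, 128

word_bits = int(math.log2(words_per_block))                       # 7
set_count = cache_blocks // ways                                  # 16 sets
set_bits  = int(math.log2(set_count))                             # 4
address_bits = int(math.log2(memory_blocks * words_per_block))    # 19
tag_bits = address_bits - set_bits - word_bits                    # 8

print(address_bits, tag_bits, set_bits, word_bits)                # 19 8 4 7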
3. When a line in the cache is updated, the corresponding memory line must be updated to reflect the change made in the cache ; else another bus master may get stale data if it reads from these lines.
4. Three write policies are used to prevent this type of consistency problem : Write-through, Buffered (or posted) write-through, and Write-back.

A. Write-Through Cache Designs :
1. Write-through designs update memory each time a memory write is performed, although the need for such an action may not actually arise.
2. Each write is thus performed in memory as well, hence ensuring that consistency is maintained between cache and memory.
3. Very simple and effective implementation.
4. But poor performance, due to the slow main memory write operation.
5. Also, it doesn't allow bus concurrency. With this policy, another bus master is not permitted to use the bus until the write-through is completed, thereby ensuring that the bus master will receive the latest information from memory. The write-through operations use either the system bus or the memory bus.

B. Buffered or Posted Write-Through Designs :
1. It has the advantage of providing zero wait state write operation for cache hits as well as cache misses.
2. When a write occurs, a buffered write-through cache tricks the processor into thinking that the information was written to memory in zero wait states. In fact, the write to main memory has not been performed yet.
3. The look-through cache controller stores the entire write operation in a buffer, and writes to the main memory later. Hence the processor need not perform the slow write operation with wait states.
4. The design of such a cache controller is complicated to implement, because it must make decisions on when to write 'modified' lines back to memory to ensure consistency.

C. Write-Back Cache Designs :
1. Write-back designs improve performance by updating a line in main memory only when necessary, thereby keeping the system bus free for use by other processors and bus masters, and hence ensuring bus concurrency.
2. The memory is updated only when :
(a) Another bus master initiates a read operation from a memory line that contains stale data.
(b) Another bus master initiates a write operation to a memory line that contains stale data.
(c) The cache line that contains modified information is about to be overwritten in order to store a line newly acquired from memory, i.e. during line replacement.
3. The cache controller marks the cache lines as 'modified' in the cache directory when the processor updates them. Hence, when the line is read by another master or written into memory, the cache subsystem checks whether it is marked as 'modified' in the cache.
11.9.5 Replacement Algorithms :

A replacement algorithm is required to replace a line from the cache memory with the new line, as discussed earlier. There are various replacement policies available. The widely used ones are LRU, FIFO, LFU and random, as discussed below :
1. Least Recently Used (LRU) : In this case the line which has not been used for the longest time is replaced first.
2. First In First Out (FIFO) : In this case the line which was brought into the cache first is replaced first. Thus the line which has stayed the longest in the cache is replaced.
3. Least Frequently Used (LFU) : In this case the line which is used the least number of times is replaced first.
4. Random : In this case any line is replaced at random.

Ex. 11.9.2 : Assume that memory consists of three frames, and during execution of a program the following pages are referenced in the sequence : 2 3 2 1 5 2 4 5 3 2 5
Ex. 11.9.3 : Find out the page faults for the following string using the LRU and FIFO methods : 6 0 12 0 30 4 2 30 32 1 20 15 (consider page frame size = 3).
Soln. :
6 0 12 0 30 4 2 30 32 1 20 15
FIFO 6 6 6 6 30 30 30 30 32 32 32 15
0 0 0 0 4 4 4 4 1 1 1
12 12 12 12 2 2 2 2 20 20
F F F F F F F
LRU 6 6 6 6 30 30 30 30 30 30 20 20
0 0 0 0 0 2 2 2 1 1 1
12 12 12 4 4 4 32 32 32 15
F F F F F F F
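Page-fault counts such as those in Ex. 11.9.3 can be verified with a small simulation. The Python sketch below is illustrative only ; it counts all misses, including the compulsory misses while the frames are first being filled :

from collections import deque

def fifo_faults(pages, frames):
    memory, queue, faults = set(), deque(), 0
    for p in pages:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.remove(queue.popleft())    # evict the oldest page
            memory.add(p)
            queue.append(p)
    return faults

def lru_faults(pages, frames):
    memory, faults = [], 0                         # list ordered from LRU to MRU
    for p in pages:
        if p in memory:
            memory.remove(p)                       # refresh its position
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)                      # evict the least recently used page
        memory.append(p)
    return faults

refs = [6, 0, 12, 0, 30, 4, 2, 30, 32, 1, 20, 15]
print(fifo_faults(refs, 3), lru_faults(refs, 3))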
Ex. 11.9.4 : Consider a paging system in which M1 has a capacity of three frames. The page address stream formed by
executing a program is 2 3 2 1 5 2 4 5 3 2 5 2
Soln. :
Time           1  2  3  4  5  6  7  8  9 10 11 12
Address space  2  3  2  1  5  2  4  5  3  2  5  2

FIFO   2 2 2 2 5 5 5 5 3 3 3 3
         3 3 3 3 2 2 2 2 2 5 5
             1 1 1 4 4 4 4 4 2

LRU    2 2 2 2 2 2 2 2 3 3 3 3
         3 3 3 5 5 5 5 5 5 5 5
             1 1 1 4 4 4 2 2 2

OPT    2 2 2 2 2 2 4 4 4 2 2 2
         3 3 3 3 3 3 3 3 3 3 3
             1 5 5 5 5 5 5 5 5
Ex. 11.9.5 : Find out the page faults for the following string using the LRU method. Consider page frame size = 3 :
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Soln. :
Pages accessed    Frames
7                 7  -  -
0                 7  0  -
1                 7  0  1
2                 2  0  1    F
0                 2  0  1
3                 2  0  3    F
0                 2  0  3
4                 4  0  3    F
2                 4  0  2    F
3                 4  3  2    F
0                 0  3  2    F
3                 0  3  2
2                 0  3  2
1                 1  3  2    F
2                 1  3  2
0                 1  0  2    F
1                 1  0  2
7                 1  0  7    F
0                 1  0  7
1                 1  0  7

Consider the reference string 1 3 2 4 2 1 5 1 3 2 6 7 5 4 3 2 4 2 3 1 4 with a frame size of 3 :
1. FIFO :
1 3 2 4 2 1 5 1 3 2 6 7 5 4 3 2 4 2 3 1 4
1 1 1 4 4 4 4 4 3 3 3 7 7 7 3 3 3 3 3 3 4
  3 3 3 3 1 1 1 1 2 2 2 5 5 5 2 2 2 2 2 2
    2 2 2 2 5 5 5 5 6 6 6 4 4 4 4 4 4 1 1
M M M M H M M H M M M M M M M M H H H M M
% Hit = (5 / 21) x 100 = 23.81 %        % Miss = (16 / 21) x 100 = 76.19 %
2. LRU :
1 3 2 4 2 1 5 1 3 2 6 7 5 4 3 2 4 2 3 1 4
1 1 1 4 4 4 5 5 5 2 2 2 5 5 5 2 2 2 2 2 4
  3 3 3 3 1 1 1 1 1 6 6 6 4 4 4 4 4 4 1 1
    2 2 2 2 2 2 3 3 3 7 7 7 3 3 3 3 3 3 3
M M M M H M M H M M M M M M M M H H H M M
% Hit = (5 / 21) x 100 = 23.81 %        % Miss = (16 / 21) x 100 = 76.19 %

Ex. 11.9.7 : Find out the page hits and misses for the following string using the FIFO, LRU and OPTIMAL page replacement policies, considering a frame size of three :
2, 3, 3, 1, 5, 2, 4, 5, 3, 2, 5, 2.
Soln. :
1. FIFO :
2 3 3 1 5 2 4 5 3 2 5 2
2 2 2 2 5 5 5 5 3 3 3 3
  3 3 3 3 2 2 2 2 2 5 5
      1 1 1 4 4 4 4 4 2
M M H M M M M H M H M M
% Hit = (3 / 12) x 100 = 25 %        % Miss = 75 %
2. LRU :
2 3 3 1 5 2 4 5 3 2 5 2
2 2 2 2 5 5 5 5 5 5 5 5
  3 3 3 3 2 2 2 3 3 3 3
      1 1 1 4 4 4 2 2 2
M M H M M M M H M M H H
% Hit = (4 / 12) x 100 = 33 %        % Miss = 67 %
3. OPTIMAL :
2 3 3 1 5 2 4 5 3 2 5 2
2 2 2 2 2 2 4 4 4 4 4 4
  3 3 3 3 3 3 3 3 2 2 2
      1 5 5 5 5 5 5 5 5
M M H M M H M H H M H H
% Hit = (6 / 12) x 100 = 50 %        % Miss = 50 %

Ex. 11.9.8 : Calculate the number of page hits and faults using the FIFO, LRU and OPTIMAL page replacement algorithms for the following page frame sequence : 2, 3, 1, 2, 4, 3, 2, 5, 3, 6, 7, 9, 3, 7 (frame size = 3). May 17, 10 Marks.
Soln. :
1. FIFO :
2 3 1 2 4 3 2 5 3 6 7 9 3 7
2 2 2 2 4 4 4 4 3 3 3 9 9 9
  3 3 3 3 3 2 2 2 6 6 6 3 3
    1 1 1 1 1 5 5 5 7 7 7 7
M M M H M H M M M M M M M H
% Hit = (3 / 14) x 100 = 21.43 %        % Miss = 78.57 %
2. LRU :
2 3 1 2 4 3 2 5 3 6 7 9 3 7
2 2 2 2 2 2 2 2 2 6 6 6 3 3
  3 3 3 4 4 4 5 5 5 7 7 7 7
    1 1 1 3 3 3 3 3 3 9 9 9
M M M H M M H M H M M M M H
% Hit = (4 / 14) x 100 = 28.57 %        % Miss = 71.43 %
3. OPTIMAL :
2 3 1 2 4 3 2 5 3 6 7 9 3 7
2 2 2 2 2 2 2 5 5 5 7 7 7 7
  3 3 3 3 3 3 3 3 3 3 3 3 3
    1 1 4 4 4 4 4 6 6 9 9 9
M M M H M H H M H M M M H H
% Hit = (6 / 14) x 100 = 42.86 %        % Miss = 57.14 %
11.9.6 Cost and Performance Measurement of Two Level Memory Hierarchy :

Any two-level memory has to be analysed for its performance as per the following set of characteristics. The different pairs of two-level memories can be cache memory and main memory, main memory and virtual memory, internal and external cache memory, etc. Let us see the various parameters to be considered during the performance analysis.
Average cost per bit : Cs = (C1 S1 + C2 S2) / (S1 + S2)
where C1 and C2 are the costs per bit of memory 1 (the faster memory) and memory 2 (the slower memory) respectively, and S1 and S2 are their sizes.
Average access time : tA = H tA1 + (1 - H) tA2
where H is the hit ratio (the fraction of accesses satisfied by memory 1), tA1 and tA2 are the access times of memory 1 and memory 2, and r = tA2 / tA1 is the speed ratio.
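A small numeric illustration of the two formulas above, in Python. The values of H, tA1, tA2, C1, C2, S1 and S2 used here are assumed purely for illustration :

H   = 0.95           # hit ratio in the faster memory (M1)
tA1 = 10e-9          # access time of M1 (cache), 10 ns
tA2 = 100e-9         # access time of M2 (main memory), 100 ns
C1, C2 = 0.01, 0.0001    # cost per bit of M1 and M2 (arbitrary units)
S1, S2 = 64e3, 16e6      # sizes of M1 and M2 in bits

tA = H * tA1 + (1 - H) * tA2                 # average access time
Cs = (C1 * S1 + C2 * S2) / (S1 + S2)         # average cost per bit
r  = tA2 / tA1                               # speed ratio
print(tA, Cs, r)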
11.9.7 Cache Consistency (Also Known as Cache Coherency) :
1. In order for the cache subsystems to work properly, the CPU and the other bus masters must get the most updated copy of the requested information.
2. There are several cases wherein the data stored in the cache or in main memory may be altered whereas the duplicate copy remains unchanged.
Causes of cache consistency problems :
1. When the copy of a line in the cache no longer matches the contents of the line stored in memory, there is a loss of cache consistency. It can be either due to the cache line being updated while the memory line is not, or the memory line being updated while the cache line is not.
2. In each of these instances the stale data must be updated. It can be a result of a cache write hit, and hence the cache's write policy has to handle this problem for the first case.
When another device in the system uses the buses, it must become the bus master. In the case of a look-through cache design, the cache controller must then monitor (snoop) the cycles that this bus master runs on the system bus.

A. Writes to memory (with write-through cache) :
1. When the bus masters write to memory, they update locations that may also be cached by the cache controller.
2. In these cases, memory is updated and the line in the cache becomes stale. Hence the cache controller must monitor the memory writes to avoid this coherency problem. When such a write is detected, the cache line is invalidated, because it will contain stale data after the write to memory completes. The cache controller therefore has to monitor the system bus to find out what the other bus master is doing on the system bus, so that if another master is updating a line of the main memory, the cache can invalidate this line. This monitoring of the system bus is called snooping.

B. Reads from memory (with write-back cache) :
1. With a write-back cache, the latest copy of a modified line may exist in the cache but not in memory.
2. To detect this coherency problem, write-back caches must also snoop reads from memory.
3. The system can be designed to back off the bus master when such a snoop read hit to a modified line occurs.
6. The cache controller then seizes the bus and performs a memory write to update this stale line in the main memory. In the cache directory, the cache line is now invalidated, because the bus master will update the memory line immediately after the line is flushed. The cache then removes the back-off signal, permitting the bus master to reinitiate the memory write operation. When the bus master completes the write to memory, the memory line will contain the most updated data.

11.10 Pentium Processor Cache Unit :
1. The Pentium implements separate data and code caches, referred to as a split cache.
2. For each of these caches, the line size = 32 bytes and the data bus is 8 bytes (64 bits) wide.
5. Since it uses a write-back policy, the Pentium processor's L1 data cache introduces additional complexity into the cache consistency logic. The data cache can use a write-through policy or a write-back policy on a line-by-line basis.
6. The non-cacheable address logic (NCAL) determines whether the address is cacheable or not.
7. If it is determined to be non-cacheable, the NCAL notifies the L2 cache and the processor that the addressed location cannot be safely cached. Hence, the bus cycle is not converted into a cache line fill, and a single-transfer bus cycle is run to fetch the requested information directly from memory.
8. If the NCAL determines the address is cacheable, it indicates the same to the L2 cache controller and the processor, and causes the read cycle to be converted into a cache line fill for both the L2 and L1 caches.
9. Since the access is from slow external memory, wait states will be inserted in the transfer. The L2 cache copies the first 64 bits into its cache line-fill buffer, while simultaneously forwarding it to the processor and indicating that valid data is present on the processor's local data bus.
10. The information requested is contained in the first quadword, and hence the information is immediately forwarded to the processor.

Processor :
1. The data cache can use either a write-through or a write-back policy.
2. When using the write-back policy with an L2 cache, special care has to be taken in the interaction between the Pentium processor cache and the external cache to ensure cache coherency.

Write Misses :
1. The L1 data cache controller checks the target address of the memory write to determine if a copy of the target memory line exists in it.
2. In case of a write miss, the Pentium processor initiates a single-transfer bus cycle to the target location. The L2 cache controller checks its directory to determine if a copy of the target line is resident in its cache.
3. If an L2 cache hit occurs, then the write is performed in L2 according to the write-back or write-through policy.
4. If an L2 cache miss occurs, the action of the L2 cache depends on whether or not it supports allocate-on-write.
5. If the L2 cache does not have allocate-on-write capability, the L2 cache will pass the data to main memory. When the main memory completes the write, it activates ready to the L2 cache and the processor to notify them that the cycle has completed.
6. If the L2 cache supports allocate-on-write, it passes the data to the main memory as described above, but will then perform an L2 cache line fill from the same line.

Write hits and Write-once policy :
Assume that a new line has just been read from memory and the line of data is brought into the L1 cache. Since the line passes through the L2 cache, a copy of the line also exists in the L2 write-back cache and in memory.
Now suppose one of the execution units writes to the same line. The L1 data cache controller will check its directory and find that the target line is in the data cache. This L1 cache line will be modified. The L1 data cache line now has modified data, and its directory entry is updated to reflect the 'modified' state of the cache line.
The problem is that the L2 cache was not notified by the L1 data cache that the line had been modified, and hence another bus master could get stale data from memory.
The solution to the above problem is the use of the write-once policy. This policy says to use write-through for the first time and write-back thereafter.
1. When a line is initially placed in the L1 data cache, it is marked as 'shared', and write-through to the L2 cache is used on the first write hit to the cache line in L1.
2. Thus, the first time one of the execution units writes to the line, the line will be updated and the write operation written through to the L2 cache. Hence, the L2 cache will also be updated and its directory entry updated to indicate that the line has been 'modified'. Now that the L2 cache directory indicates that this line has been 'modified', it will snoop correctly and will not permit any bus master to read the stale data from the corresponding line in memory. After using write-through, the L1 data cache changes the state of the line from 'shared' to the 'exclusive' state.
3. Now, when an execution unit again writes to this line, the L1 data cache finds the target line in the data cache and in the exclusive state. The L1 line is modified and its directory entry is updated to the 'modified' state, indicating that the line has been modified. All the subsequent writes to this line are performed using the write-back policy.

11.10.3 Memory Reads Initiated by Another Bus Master :
1. If a snoop read hit occurs to a line that has been modified, the L2 cache causes the bus master to back off and simultaneously passes the address to the Pentium processor's L1 cache so that it can also snoop the address.
2. If the L1 cache controller detects a snoop read hit to a modified line, that line too is written back to main memory before the bus master's read is allowed to complete.

11.10.4 The MESI Model :
Based on the discussion in the previous sections, we can show the state transition diagram of a line in the cache as in Fig. 11.10.1. Besides the basic Pentium read/write and another bus master's read/write operations, there are some more cases shown in the MESI model :
1. An internal snoop hit is to be considered when the data required by the processor is present in the code cache, or vice versa. In such cases the modified line is written back to the main memory and then invalidated, while a non-modified line is simply invalidated.
2. The FLUSH# signal indicates to the L1 cache to clear the entire cache. WBINVD indicates to clear a particular line in the L1 cache. In case of a FLUSH# signal or WBINVD too, the modified line is first written back to the main memory and then invalidated, while a non-modified line is simply invalidated.
Fig. 11.10.1 : State diagram of MESI transition states as in Pentium’s Data cache
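The MESI transitions described above can be summarised as a small lookup table. The Python sketch below is a highly simplified illustration (it ignores write-backs, snoop responses and the bus signals involved), and the event names are invented for readability :

TRANSITIONS = {
    # (current state, event)                  : next state
    ("I", "processor read miss, line filled") : "E",   # assuming no other cache holds the line
    ("E", "processor write hit")              : "M",
    ("S", "processor write hit")              : "E",   # write-once: write-through first
    ("E", "snoop read hit by other master")   : "S",
    ("M", "snoop read hit by other master")   : "S",   # modified line written back first
    ("M", "snoop write hit by other master")  : "I",
}

state = "I"
for event in ("processor read miss, line filled",
              "processor write hit",
              "snoop read hit by other master"):
    state = TRANSITIONS[(state, event)]
    print(event, "->", state)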
11.11 Input / Output System :

The general model of I/O module interfacing with the system bus is shown in Fig. 11.11.1. The internal block diagram of the I/O module is shown in Fig. 11.11.2.
There are two possible ways of transferring the data :
(1) Parallel data transfer
(2) Serial data transfer
In serial data transfer the 8 bits of data are converted into serial form using a shift register (parallel-in serial-out mode) ; shift registers are connected, one at each side. These serial bits are transferred on a single line using serial I/O data transfer. To transfer 8 bits of data, 8 clock pulses are required. On the other side, exactly the opposite process is done : the serial 8 bits are accepted and converted into parallel form to get back the 8 bits of data.
Now let us compare the two specified methods of data transfer.
Sr. No.   Parallel                                         Serial
4.        This cannot be used for distant communication.   This can be used for distant communication.
5.        More parallel hardware is required.              Less parallel hardware is required.
In these two methods, the cost of connecting two distant points is the main factor. So, though parallel data transfer is faster, it is preferred for small distances only ; for long distances, serial data transfer is preferred.

The communication between two points can be of the following types :
1. Simplex : Simplex is one-way transmission. The connection exists such that data transfer takes place only in one direction ; there is no possibility of data transfer in the other direction.
2. Duplex : Duplex is two-way transmission. It is further divided into 2 groups :
(a) Half duplex : It is a connection between two terminals such that data may travel in both directions, but transmission is activated in one direction at a time. This means that the line has to turn around after communication is complete in one direction.
(b) Full duplex : It is a connection between two terminals such that data may travel in both directions simultaneously, so the line does not have to turn around.
Some devices have handshaking signals, etc. Hence the I/O module communicates according to the protocol required by the I/O devices.

11.12.1 I/O Module :
1. Need of an I/O module : each I/O device operates at a different speed, has a different data format and a different protocol. Also, most of the I/O devices are slower than the processor. Hence, an I/O module is used to interface the I/O device to the processor.
Mode and control unit : it indicates the mode of operation for the I/O module, and controls the transfer of data between the I/O module and the I/O device, as well as between the I/O module and the CPU.

11.13 Types of Data Transfer Techniques : Programmed I/O, Interrupt Driven I/O and DMA :

There is yet another method of classifying the interfacing of I/O devices, based on how and when the data is transferred between the processor and the I/O devices. There are three types under this method of classification, namely programmed I/O, interrupt driven I/O and DMA (Direct Memory Access).
Definition of polling : In polling, the processor repeatedly reads the status of the I/O module to check whether the device is ready for the next data transfer.
IC 8255 is generally used as an I/O module for the programmed I/O method of interfacing. A common programming task is the transfer of a block of words between an input/output device and memory. Fig. 11.13.1 gives a flowchart for transferring a block of data using programmed I/O.
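A minimal sketch of programmed I/O in Python (illustrative only). The device object with ready() and read_word() methods is hypothetical ; in a real system these would be reads of the I/O module's status and data registers :

def programmed_io_read_block(device, count):
    buffer = []
    for _ in range(count):
        while not device.ready():          # busy-wait (poll) on the status register
            pass
        buffer.append(device.read_word())  # read one word from the data register
    return buffer

class FakeDevice:
    """Stand-in for an I/O module: always ready, returns increasing words."""
    def __init__(self):
        self.n = 0
    def ready(self):
        return True
    def read_word(self):
        self.n += 1
        return self.n

print(programmed_io_read_block(FakeDevice(), 4))   # [1, 2, 3, 4]

Because the CPU itself executes the polling loop, it remains busy for the whole transfer ; this is the limitation that interrupt driven I/O and DMA remove.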
In interrupt driven I/O, the I/O module interrupts the CPU when it is ready with the data ; on getting the interrupt, the CPU requests the data from the I/O module. When the interrupt from the input/output device occurs, the CPU suspends execution of the current program, reads the data from the port and then resumes execution of the suspended program. On an interrupt, the CPU performs the following operations in sequence : it saves the context, i.e. the contents of the registers, on the stack ; it services the interrupt ; and finally it restores the register context from the stack.

11.13.3 DMA :

DMA stands for Direct Memory Access. The I/O module can directly access (read or write) the memory using this method.
The various registers of the DMA controller are discussed below : the data count register is used to keep track of the number of bytes to be transferred, and this counter is decremented after every transfer. The data register is used in a special case, i.e. when the transfer of a block is to be done from one memory location to another memory location.
Interrupt driven and programmed I/O require active operation of the CPU ; hence the transfer rate is limited and the CPU is also kept busy doing the transfer operation. DMA is the solution to these problems. The DMA controller takes over the control of the bus from the CPU and performs the transfer directly between the I/O device and memory.
In Fig. 11.13.3, you will notice that there are various registers like the data count, data register and address register. Also, you will note in Fig. 11.13.3 that the read and write signals are bidirectional.
The DMA controller is initially programmed by the CPU with the count of bytes to be transferred, the address of the memory block for the data to be transferred, etc. During this programming of the DMAC (DMA controller), the read and write lines work as inputs for the DMAC. Afterwards, they are used to tell the memory that the DMAC wants to read or write from the memory, according to whether the operation is a data transfer from memory to I/O or from I/O to memory.
The actual transfer of the data between the memory and the I/O device is then carried out by the DMA controller. The four major methods used are discussed below :
Fig. 11.13.4 : DMA transfer modes
1. Single transfer mode : In single transfer mode, the device is programmed to make one byte transfer only after getting control of the system bus. After transferring one byte, the control of the bus is returned back to the CPU. The word count is decremented and the address decremented or incremented following each transfer. The disadvantage of this method is that the I/O device has to wait for a long time after every transfer for the extra request-grant signals.
In the remaining burst-type modes, the DMAC keeps on transferring data and need not wait for the extra request-grant signal time after every byte, as in the single transfer method. Also, the CPU does not have to wait for a long time in case the I/O device is slower, because if the I/O device is slower the transfer terminates.
4. Hidden transfer mode : In hidden transfer mode, the DMA controller takes charge of the system bus and transfers data when the processor does not need the system bus. The processor does not even realize that this transfer is taking place. The processor does not need the system bus when it is busy with internal operations.
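The DMA registers described above can be modelled with a tiny class. The following Python sketch is illustrative only ; the class and register names are invented and do not correspond to any specific DMA controller chip :

class DMAController:
    def __init__(self, address, count):
        self.address_register = address    # memory address of the block
        self.count_register = count        # bytes still to be transferred

    def single_transfer(self, memory, data):
        """Transfer one byte, then (in single transfer mode) return the bus to the CPU."""
        memory[self.address_register] = data
        self.address_register += 1         # address incremented after each transfer
        self.count_register -= 1           # data count decremented after each transfer
        return self.count_register == 0    # True when the whole block has been transferred

memory = {}
dma = DMAController(address=0x2000, count=3)
for byte in (0x11, 0x22, 0x33):
    done = dma.single_transfer(memory, byte)
print(memory, done)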
Q. 10 What are the conditions for occurrence of the consistency problem ?
Q. 11 Write a short note on the Pentium processor cache unit.
Q. 12 Write a short note on write hits and the write-once policy.
Q. 13 With a neat state diagram, explain the MESI model.
Q. 14 Differentiate between parallel and serial interface.
Q. 15 What are the different types of communication system ? Explain.
Q. 16 With a neat block diagram, explain the I/O module.
Q. 17 Write a short note on direct memory access.
Q. 18 What are the different transfer modes in DMA ? Explain.
Q. 19 Explain various types of ROM : magnetic as well as optical.
Q. 20 Interface 8 kB EPROM and 4 kB RAM to a processor with a 16-bit address and 8-bit data bus.