
Logic Design & Computer Organization
(Code : 214442)

Semester III – Information Technology
(Savitribai Phule Pune University)

Strictly as per the New Choice Based Credit System Syllabus (2019 Course),
Savitribai Phule Pune University, w.e.f. academic year 2020-2021

J. S. Katre
M.E. (Electronics and Telecommunication)
Formerly, Assistant Professor,
Department of Electronics Engineering,
Vishwakarma Institute of Technology (V.I.T.), Pune,
Maharashtra, India.

Harish G. Narula
Formerly Assistant Professor (Senior),
Department of Computer Engineering,
D. J. Sanghvi College of Engineering, Mumbai,
Maharashtra, India.

Prof. Nilesh N. Thorat
PhD (Pursuing), M.Tech in CS, B.E. in CE
Assistant Professor in IT Department,
JSPM's BSIOTR, Wagholi, Pune,
Maharashtra, India.

(Book Code : PO130A)


Logic Design & Computer Organization (Code : 214442)
(Semester III, Information Technology, Savitribai Phule Pune University)
J. S. Katre, Harish G. Narula, Nilesh N. Thorat

Copyright © by Authors. All rights reserved. No part of this publication may be reproduced, copied, or stored in a retrieval system, distributed or transmitted in any form or by any means, including photocopy, recording, or other electronic or mechanical methods, without the prior written permission of the publisher.

This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior written consent, in any form of binding or cover other than that in which it is published, and without a similar condition, including this condition, being imposed on the subsequent purchaser, and without limiting the rights under copyright reserved above.

First Printed in India : January 2002
First Edition : August 2020 (TechKnowledge Publications)

This edition is for sale in India, Bangladesh, Bhutan, Maldives, Nepal, Pakistan, Sri Lanka and designated countries in South-East Asia. Sale and purchase of this book outside of these countries is unauthorized by the publisher.

ISBN : 978-93-89889-45-1

Published by :
TechKnowledge Publications
Head Office : B/5, First Floor, Maniratna Complex, Taware Colony, Aranyeshwar Corner, Pune - 411 009, Maharashtra State, India.
Ph : 91-20-24221234, 91-20-24225678.
Email : [email protected]
Website : www.techknowledgebooks.com

[214442] (FID : PO130) (Book Code : PO130A)


We dedicate this Publication soulfully and wholeheartedly,
in loving memory of our beloved founder director,
Late Shri. Pradeepji Lalchandji Lunawat,
who will always be an inspiration, a positive force and strong support
behind us.

“My work is my prayer to God”


- Late Shri. Pradeepji L. Lunawat

Soulful Tribute and Gratitude for all Your
Sacrifices, Hard Work and 40 Years of Strong Vision…


Syllabus…
Logic Design & Computer Organization : Sem. III, Information Technology (SPPU)

Teaching Scheme : Theory (TH) : 03 Hrs./Week
Credit Scheme : 03
Examination Scheme : Mid-Semester : 30 Marks; End-Semester : 70 Marks

Prerequisite Courses, if any : Basics of electronics engineering.
Companion Course, if any :

Course Objectives :
1. To make undergraduates aware of different levels of abstraction of computer systems from the hardware perspective.
2. To make undergraduates understand the functions and characteristics of various components of a computer, in particular the processor and memory.

Course Outcomes : On completion of the course, students will be able to :
CO1 : Perform basic binary arithmetic and simplify logic expressions.
CO2 : Grasp the operations of logic ICs and implement combinational logic functions using ICs.
CO3 : Comprehend the operations of basic memory cell types and implement sequential logic functions using ICs.
CO4 : Elucidate the functions and organization of various blocks of the CPU.
CO5 : Understand CPU instruction characteristics and enhancement features of the CPU.
CO6 : Describe an assortment of memory types (with their characteristics) used in computer systems and the basic principles of interfacing input and output devices.

Course Contents

Unit 1 : Introduction to Digital Electronics
Digital logic families : Digital IC characteristics; TTL : Standard TTL characteristics, Operation of TTL NAND gate; CMOS : Standard CMOS characteristics, Operation of CMOS NAND gate; Comparison of TTL and CMOS.
Signed binary number representation and arithmetic : Sign magnitude, 1's complement and 2's complement representation, Unsigned binary arithmetic (addition, subtraction, multiplication and division), Subtraction using 2's complement; IEEE Standard 754 floating point number representations.
Codes : Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions.
Logic minimization : Representation of logic functions : logic statement, truth table, SOP form, POS form; Simplification of logical functions using K-Maps up to 4 variables.
Case Study : 1) CMOS 4000 series ICs, 2) Practical applications of various codes in computers, 3) Four basic arithmetic operations using floating point numbers in a calculator. (Refer Chapters 1, 2, 3 and 4)

Unit 2 : Combinational Logic Design
Design using SSI chips : Code converters, Half adder, Full adder, Half subtractor, Full subtractor, n-bit binary adder.
Introduction to MSI chips : Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (IC 74238), Encoder (IC 74147), Binary adder (IC 7483).
Design using MSI chips : BCD adder and subtractor using IC 7483, Implementation of logic functions using IC 74153 and IC 74138.
Case Study : Use of combinational logic design in a 7-segment display interface. (Refer Chapter 5)

Unit 3 : Sequential Logic Design
Introduction to sequential circuits : Difference between combinational circuits and sequential circuits; Memory elements : latch and flip-flop.
Flip-flops : Logic diagram, truth table and excitation table of SR, JK, D and T flip-flops; Conversion from one flip-flop to another; Study of flip-flops with regard to asynchronous and synchronous operation, Preset and Clear, Master-slave configuration; Study of 7474 and 7476 flip-flop ICs.
Applications of flip-flops : Counters - asynchronous, synchronous and modulo-n counters; Study of 7490 modulus-n counter ICs and their applications to implement mod counters; Registers - shift register types (SISO, SIPO, PISO and PIPO) and applications.
Case Study : Use of sequential logic design in a simple traffic light controller. (Refer Chapters 6, 7 and 8)

Unit 4 : Computer Organization and Processor
Computer organization and computer architecture; Organization, functions and types of computer units : CPU (typical organization, functions, types), Memory (types and their uses in a computer), I/O (types and functions) and system bus (address, data and control lines, typical control lines, multiple-bus hierarchies); Von Neumann and Harvard architectures; Instruction cycle.
Processor : Single-bus organization of the CPU; ALU (ALU signals, functions and types); Registers (types and functions of user-visible, control and status registers such as general purpose registers, address registers, data registers, flags, PC, MAR, MBR, IR); Control unit (control signals and typical organization of hardwired and microprogrammed CU); Micro-operations (fetch, indirect, execute, interrupt) and control signals for these micro-operations.
Case Study : 8086 processor, PCI bus. (Refer Chapter 9)

Unit 5 : Processor Instructions and Processor Enhancements
Instruction : Elements of a machine instruction; Instruction representation (opcode and mnemonics, assembly language elements); Instruction format and 0-1-2-3 address formats; Types of operands; Addressing modes; Instruction types based on operations (functions and examples of each); Key characteristics of RISC and CISC; Interrupt : its purpose, types, classes and interrupt handling (ISR, multiple interrupts), exceptions; Instruction pipelining (operation and speed-up).
Multiprocessor systems : Taxonomy of parallel processor architectures, two types of MIMD - clusters and SMP (organization and benefits), and multicore processors (various alternatives and advantages of multicore); Typical features of the multicore Intel Core i7.
Case Study : 8086 assembly language programming. (Refer Chapter 10)

Unit 6 : Memory & Input / Output Systems
Memory systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory to the processor, Memory read and write cycles, Characteristics of semiconductor memory : SRAM, DRAM and ROM; Cache memory : principle of locality, organization, mapping functions, write policies, replacement policies, multilevel caches, cache coherence.
Input / Output systems : I/O module, Programmed I/O, Interrupt-driven I/O, Direct Memory Access (DMA).
Case Study : USB flash drive. (Refer Chapter 11)

Case studies may be assigned as self-study to students and are to be excluded from theory examinations.


Table of Contents

Unit 1

Chapter 1 : Digital Logic Families (1-1 to 1-22)

Syllabus : Digital IC characteristics; TTL : Standard TTL characteristics, Operation of TTL NAND gate; CMOS : Standard CMOS characteristics, Operation of CMOS NAND gate; Comparison of TTL & CMOS.
Case Study : CMOS 4000 series ICs.

1.1 Logic Families ... 1-2
1.1.1 Classification Based on Circuit Complexity ... 1-2
1.2 Classification of Logic Families ... 1-2
1.2.1 Classification Based on Devices Used ... 1-2
1.3 Characteristics of Digital ICs ... 1-3
1.3.1 Voltage and Current Parameters ... 1-3
1.3.2 Fan-in and Fan-out ... 1-4
1.3.3 Noise Margin ... 1-5
1.3.4 Propagation Delay (Speed of Operation) ... 1-5
1.3.5 Power Dissipation ... 1-6
1.3.6 Operating Temperature ... 1-6
1.3.7 Figure of Merit (Speed Power Product (SPP)) ... 1-7
1.3.8 Invalid Voltage Levels ... 1-7
1.3.9 Current Sourcing and Current Sinking ... 1-7
1.3.10 Power Supply Requirements ... 1-7
1.4 Transistor-Transistor Logic (TTL) ... 1-7
1.4.1 The Multiple Emitter Transistor ... 1-7
1.4.2 Two Input TTL NAND Gate (Totempole Output) ... 1-8
1.4.3 Totem-pole (Active Pull up) Output Stage ... 1-10
1.4.4 Unconnected Inputs ... 1-11
1.4.5 Clamping Diodes ... 1-11
1.4.6 5400 Series ... 1-11
1.4.7 Three Input TTL NAND Gate ... 1-11
1.5 Open Collector Outputs (TTL) ... 1-12
1.5.1 Disadvantages of Open Collector Output ... 1-13
1.5.2 Advantage of Open Collector Output ... 1-13
1.5.3 Wired ANDing ... 1-13
1.5.4 Comparison of Totem-pole and Open Collector Outputs ... 1-14
1.6 Standard TTL Characteristics ... 1-15
1.6.1 Advantages of TTL ... 1-16
1.6.2 Disadvantages of TTL ... 1-16
1.7 MOS - Logic Family ... 1-16
1.8 CMOS Logic ... 1-16
1.8.1 CMOS NAND Gate ... 1-16
1.8.2 CMOS Series ... 1-18
1.9 Standard CMOS Characteristics ... 1-18
1.9.1 Power Supply Voltage ... 1-18
1.9.2 Logic Voltage Levels ... 1-18
1.9.3 Noise Margins ... 1-18
1.9.4 Power Dissipation ... 1-19
1.9.5 Fan Out ... 1-19
1.9.6 Switching Speed ... 1-19
1.9.7 Unconnected Inputs ... 1-19
1.9.8 Advantages of CMOS ... 1-20
1.9.9 Disadvantages of CMOS ... 1-20
1.10 Comparison of CMOS and TTL ... 1-20
1.11 Case Study : CMOS 4000 series ICs ... 1-21
Review Questions ... 1-21

Unit 1

Chapter 2 : Number Systems and Codes (2-1 to 2-28)

Syllabus : Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions.
Case study : Practical applications of various codes in computers.

2.1 Introduction ... 2-2
2.2 System or Circuit ... 2-2
2.2.1 Digital Systems ... 2-2
2.3 Binary Logic and Logic Levels ... 2-2
2.3.1 Positive Logic ... 2-2
2.3.2 Negative Logic ... 2-2
2.4 Number Systems ... 2-3


2.4.1 Important Definitions Related to All Numbering Systems ... 2-3
2.4.2 Various Numbering Systems ... 2-3
2.5 The Decimal Number System ... 2-4
2.5.1 Characteristics of a Decimal System ... 2-4
2.6 The Binary Number System ... 2-4
2.6.1 Binary Number Formats ... 2-5
2.7 Octal Number System ... 2-5
2.8 Hexadecimal Number System ... 2-5
2.9 Conversion of Number Systems ... 2-6
2.10 Conversions Related to Decimal System ... 2-6
2.10.1 Conversion from any Radix r to Decimal ... 2-6
2.10.2 Conversion from Decimal to Other Systems ... 2-7
2.10.2.1 Successive Division for Integer Part Conversion ... 2-7
2.10.2.2 Successive Multiplication for Fractional Part Conversion ... 2-9
2.10.2.3 Conversion of Mixed Decimal Number to Any Other Radix ... 2-10
2.11 Conversion from Binary to Other Systems ... 2-11
2.11.1 Conversion from Binary to Decimal ... 2-11
2.11.2 Binary to Octal Conversion ... 2-11
2.11.3 Binary to Hex Conversion ... 2-12
2.12 Conversion from Other Systems to Binary System ... 2-12
2.12.1 Conversion from Decimal to Binary ... 2-12
2.12.2 Octal to Binary Conversion ... 2-12
2.12.3 Hex to Binary Conversion ... 2-13
2.13 Conversion from Octal to Other Systems ... 2-14
2.13.1 Octal to Hex Conversion ... 2-14
2.13.2 Conversion from Other Systems to Octal ... 2-14
2.14 Conversions Related to Hexadecimal System ... 2-16
2.14.1 Other Systems to Hex ... 2-16
2.14.2 Hex to Other Systems ... 2-18
2.15 Concept of Coding ... 2-21
2.16 Classification of Codes ... 2-21
2.16.1 Weighted Binary Codes ... 2-21
2.16.2 Non Weighted Codes ... 2-21
2.16.3 Alphanumeric Codes ... 2-21
2.17 Binary Coded Decimal (BCD) Code ... 2-22
2.17.1 Comparison with Binary ... 2-22
2.17.2 Advantages of BCD Codes ... 2-23
2.17.3 Disadvantages ... 2-23
2.18 Non-weighted Codes ... 2-23
2.18.1 Excess-3 Code ... 2-23
2.19 Gray Code ... 2-24
2.19.1 Application of Gray Code ... 2-25
2.19.2 Advantages of Gray Code ... 2-25
2.19.3 Gray-to-Binary Conversion ... 2-25
2.19.4 Binary to Gray Conversion ... 2-26
2.20 Code Conversions ... 2-26
2.20.1 Binary to BCD Conversion ... 2-26
2.20.2 BCD to Binary Conversion ... 2-26
2.20.3 BCD to Excess-3 ... 2-26
2.20.4 Excess-3 to BCD Conversion ... 2-27
Review Questions ... 2-27

Unit 1

Chapter 3 : Binary Arithmetic (3-1 to 3-28)

Syllabus : Signed binary number representation and arithmetic : Sign magnitude, 1's complement and 2's complement representation, Unsigned binary arithmetic (Addition, subtraction, multiplication and division), Subtraction using 2's complement; IEEE standard 754 floating point number representations.
Case study : Four basic arithmetic operations using floating point numbers in a calculator.

3.1 Introduction ... 3-2
3.2 Unsigned Binary Numbers ... 3-2


3.2.1 Important Features of Unsigned Numbers ... 3-2
3.2.2 Unsigned Binary Arithmetic ... 3-2
3.2.3 Binary Addition ... 3-2
3.2.4 Sum and Carry ... 3-2
3.2.5 Binary Subtraction ... 3-3
3.2.6 Subtraction and Borrow ... 3-3
3.2.7 Binary Multiplication ... 3-4
3.2.8 Binary Division ... 3-4
3.3 Sign-Magnitude Numbers ... 3-4
3.3.1 Range of Sign-Magnitude Numbers ... 3-5
3.4 Complements ... 3-5
3.4.1 1's Complement ... 3-5
3.4.2 Representation of Signed Numbers using 1's Complement ... 3-6
3.4.3 2's Complement ... 3-6
3.4.4 Representation of Signed Numbers using 2's Complement ... 3-6
3.4.5 Signed Complement Numbers ... 3-7
3.4.6 Addition of Signed Magnitude Numbers ... 3-9
3.5 2's Complement Arithmetic ... 3-10
3.5.1 Subtraction of Unsigned Binary using 2's Complement ... 3-10
3.5.2 Subtraction of Signed Binary Numbers ... 3-13
3.6 Floating Point Representation ... 3-13
3.6.1 Parts of Floating Point Numbers ... 3-13
3.6.2 Binary Floating Point Numbers ... 3-14
3.6.3 Single Precision Floating Point Binary Numbers ... 3-14
3.7 IEEE-754 Standard for Representing Floating Point Numbers ... 3-16
3.8 Introduction to Boolean Algebra ... 3-19
3.8.1 Basic Logical Operations (Logic Variables) ... 3-19
3.8.2 NOT Operator (Inversion) ... 3-19
3.8.3 AND Operator ... 3-19
3.8.4 OR Operator ... 3-19
3.8.5 Logic Gates ... 3-20
3.8.6 Gates, Symbols and Boolean Expression ... 3-20
3.8.7 Postulates ... 3-20
3.9 Definition of Boolean Algebra ... 3-21
3.9.1 Boolean Postulates and Laws ... 3-21
3.10 Two Valued Boolean Algebra ... 3-22
3.11 Basic Theorems and Properties of Boolean Algebra ... 3-23
3.11.1 Duality ... 3-23
3.11.2 Basic Theorems ... 3-23
3.11.3 De-Morgan's Theorems ... 3-25
3.11.4 Operator Precedence ... 3-25
3.12 Boolean Expression and Boolean Function ... 3-26
3.12.1 Truth Table Formation ... 3-26
3.12.2 Examples on Reducing the Boolean Expression ... 3-26
3.12.3 Complement of a Function ... 3-27
Review Questions ... 3-28

Unit 1

Chapter 4 : Logic Minimization (4-1 to 4-28)

Syllabus : Representation of logic function : Logic statement, Truth-table, SOP form, POS form; Simplification of logical functions using K-Maps upto 4 variables.

4.1 System or Circuit ... 4-2
4.1.1 Digital Systems ... 4-2
4.1.2 Types of Digital Systems ... 4-2
4.1.3 Combinational Circuit Design ... 4-3
4.2 Standard Representations for Logical Functions ... 4-3
4.2.1 Sum-of-Products (SOP) Form ... 4-4
4.2.2 Product of the Sums Form (POS) ... 4-4
4.2.3 Standard or Canonical SOP and POS Forms ... 4-4


4.2.4 Conversion of a Logic Expression to Standard SOP or POS Form ... 4-5
4.3 Concepts of Minterm and Maxterm ... 4-6
4.3.1 Representation of Logical Expressions using Minterms and Maxterms ... 4-7
4.3.2 Writing SOP and POS Forms for a Given Truth Table ... 4-7
4.3.3 To Write Standard SOP Expression for a Given Truth Table ... 4-7
4.3.4 To Write a Standard POS Expression for a Given Truth Table ... 4-8
4.3.5 Conversion from SOP to POS and Vice Versa ... 4-8
4.4 Methods to Simplify the Boolean Functions ... 4-9
4.4.1 Algebraic Simplification ... 4-9
4.5 Karnaugh-Map Simplification (The Map Method) ... 4-10
4.5.1 K-map Structure ... 4-10
4.5.2 K-map Boxes and Associated Product Terms ... 4-11
4.5.3 Alternative Way to Label the K-map ... 4-12
4.5.4 Truth Table to K-map ... 4-12
4.5.5 Representation of Standard SOP Form on K-map ... 4-13
4.6 Simplification of Boolean Expressions using K-map ... 4-13
4.6.1 How does Simplification Take Place ? ... 4-14
4.6.2 Way of Grouping (Pairs, Quads and Octets) ... 4-14
4.6.3 Grouping Two Adjacent Ones (Pairs) ... 4-14
4.6.4 Grouping Four Adjacent Ones (Quad) ... 4-15
4.6.5 Grouping Eight Adjacent Ones (Octet) ... 4-17
4.6.6 Summary of Rules Followed for K-Map Simplification ... 4-17
4.7 Minimization of SOP Expressions (K-Map Simplification) ... 4-17
4.7.1 Elimination of a Redundant Group ... 4-20
4.7.2 Minimization of Logic Functions not Specified in Standard SOP Form ... 4-20
4.7.3 Don't Care Conditions ... 4-22
4.8 Product of Sum (POS) Simplification ... 4-23
4.8.1 K-map Representation of POS Form ... 4-23
4.8.2 Representation of Standard POS Form on K-map ... 4-24
4.8.3 Simplification of Standard POS Form using K-map ... 4-25
Review Questions ... 4-28

Unit 2

Chapter 5 : Combinational Logic Design (5-1 to 5-58)

Syllabus : Design using SSI chips : Code converters, Half-adder, Full adder, Half subtractor, Full subtractor, n bit binary adder.
Introduction to MSI chips : Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (74238), Encoder (IC 74147), Binary adder (IC 7483).
Design using MSI chips : BCD adder & subtractor using IC 7483, Implementation of logic functions using IC 74153 & 74138.
Case study : Use of combinational logic design in 7 segment display interface.

5.1 Introduction to Combinational Circuits ... 5-2
5.1.1 Analysis of a Combinational Circuit ... 5-2
5.1.2 Design of Combinational Logic using SSI Chips ... 5-3
5.2 Design of Combinational Logic using SSI Chips ... 5-4
5.2.1 Code Converters ... 5-4
5.2.1.1 BCD to Excess 3 Converter ... 5-4
5.2.2 BCD to Gray Code Converter ... 5-6
5.2.3 Binary to Gray Code Converter ... 5-7
5.2.4 Gray to BCD Converter ... 5-8
5.2.5 Excess 3 to BCD Converter ... 5-10
5.3 Binary Adders and Subtractors ... 5-10
5.3.1 Types of Binary Adders ... 5-11


5.3.2 Half Adder ... 5-11
5.3.3 Full Adder ... 5-12
5.3.4 Full Adder using Half Adder ... 5-13
5.3.5 Applications of Full Adder ... 5-13
5.3.6 Binary Subtractors ... 5-13
5.3.7 Half Subtractor ... 5-13
5.3.8 Full Subtractor ... 5-14
5.3.9 Full Subtractor using Half Subtractors ... 5-15
5.4 The n-Bit Parallel Adder ... 5-16
5.4.1 A Four Bit Parallel Adder Using Full Adders ... 5-16
5.4.2 Propagation Delay in Parallel Adder ... 5-16
5.4.3 Look Ahead Carry Adder ... 5-17
5.4.4 Four Bit Fast Adder with Look-Ahead Carry ... 5-18
5.4.5 MSI Binary Adder IC 74LS83 / 74LS283 ... 5-19
5.4.6 Four Bit Binary Adder using IC 7483 ... 5-19
5.4.7 Cascading of Adders ... 5-19
5.5 n-bit Parallel Subtractor ... 5-20
5.5.1 4 Bit Parallel Subtractor using IC 7483 ... 5-20
5.5.2 4-Bit Binary Parallel Adder / Subtractor Using IC 7483 ... 5-20
5.6 BCD Addition ... 5-21
5.6.1 BCD Adder using MSI IC 7483 ... 5-22
5.7 BCD Subtractor using MSI IC 7483 ... 5-24
5.7.1 BCD Subtraction using 9's Complement ... 5-24
5.7.2 4-Bit BCD Subtractor using 9's Complement Method ... 5-24
5.7.3 BCD Subtraction using 10's Complement ... 5-25
5.7.4 4-bit BCD Subtraction using 10's Complement Method ... 5-26
5.8 Magnitude Comparators ... 5-27
5.8.1 A 2-Bit Comparator ... 5-27
5.9 Multiplexer (Data Selector) ... 5-29
5.9.1 Necessity of Multiplexers ... 5-29
5.9.2 Advantages of Multiplexers ... 5-29
5.10 Types of Multiplexers ... 5-30
5.10.1 2 : 1 Multiplexer ... 5-30
5.10.2 A 4 : 1 Multiplexer ... 5-30
5.10.3 8 : 1 Multiplexer ... 5-31
5.10.4 Applications of a Multiplexer ... 5-31
5.11 Study of Different Multiplexer ICs ... 5-32
5.11.1 54LS153 / DM54LS153 / DM74LS153 (Dual 4 : 1 Multiplexer) ... 5-32
5.12 Multiplexer Tree / Cascading of Multiplexers ... 5-33
5.13 Use of Multiplexers in Combinational Logic Design ... 5-35
5.13.1 Implementation of a Logical Expression in the Standard SOP Form ... 5-35
5.13.2 Use of 4 : 1 MUX to Realize a 4 Variable Function ... 5-36
5.13.3 Use of 8 : 1 MUX to Realize a 4 Variable Function ... 5-37
5.13.4 Implementation of a Logical Expression in the Non-standard SOP Form ... 5-38
5.13.5 Implementing a Standard POS Expression using Multiplexer ... 5-38
5.13.6 Implementation of Boolean SOP Expression with Don't Care Conditions ... 5-39
5.14 Demultiplexers ... 5-41
5.14.1 Demultiplexer Principle ... 5-41
5.15 Types of Demultiplexers ... 5-42
5.15.1 1 : 2 Demultiplexer ... 5-42
5.15.2 1 : 4 Demultiplexer ... 5-42
5.15.3 1 : 8 Demultiplexer ... 5-43
5.15.4 IC 74138 as 1 : 8 DE-MUX ... 5-43
5.16 Demultiplexer Tree ... 5-44
5.16.1 Use of DEMUX in Combinational Logic Design ... 5-44


5.17 Encoders ... 5-47
5.17.1 Types of Encoders ... 5-47
5.18 Priority Encoder ... 5-47
5.18.1 Priority Encoders in the IC Form ... 5-48
5.18.2 Decimal to BCD Encoder ... 5-48
5.18.3 Decimal to BCD Encoder MSI IC 74147 ... 5-48
5.19 Decoder ... 5-49
5.19.1 2 to 4 Line Decoder ... 5-49
5.19.2 Demultiplexer as Decoder ... 5-49
5.19.3 3 to 8 Line Decoder ... 5-49
5.19.4 1 : 8 DEMUX as 3 : 8 Decoder ... 5-50
5.19.5 IC 74138 / IC 74238 as 3 : 8 Decoder ... 5-50
5.19.6 Combinational Logic Design Using Decoders ... 5-51
5.19.7 Advantage of Decoder Realization ... 5-54
5.20 Case Study : Combinational Logic Design of BCD to 7 Segment Display Controller ... 5-54
5.20.1 Seven Segment LED Display ... 5-54
5.20.2 Types of Seven Segment Displays ... 5-54
5.20.3 Common Anode Display ... 5-54
5.20.4 Common Cathode Display ... 5-54
5.20.5 Use of a Decoder for Driving the Seven Segment Display ... 5-55
5.20.6 BCD to Seven Segment Display Driver (Common Anode Display) ... 5-55
Review Questions ... 5-57

Unit 3

Chapter 6 : Flip Flops (6-1 to 6-38)

Syllabus : Introduction to sequential circuits : Difference between combinational circuits and sequential circuits; Memory element - latch & Flip-Flop.
Flip-Flops : Logic diagram, Truth table & excitation table of SR, JK, D, T flip flops; Conversion from one FF to another, Study of flip flops with regard to asynchronous and synchronous, Preset & clear, Master slave configuration; Study of 7474, 7476 flip flop ICs.
Case study : Use of sequential logic design in a simple traffic light controller.

6.1 Introduction ... 6-2
6.1.1 Clock Signal ... 6-2
6.1.2 Comparison of Combinational and Sequential Circuits ... 6-2
6.1.3 1-Bit Memory Cell (Basic Bistable Element) ... 6-3
6.1.4 Latch ... 6-3
6.1.5 Symbol and Truth Table of S-R Latch ... 6-4
6.1.6 Characteristic Equation ... 6-4
6.1.7 NAND Latch [S-R Latch using NAND Gates] ... 6-5
6.2 Triggering Methods ... 6-6
6.2.1 Concept of Level Triggering ... 6-6
6.2.2 Types of Level Triggered Flip-flops ... 6-6
6.2.3 Concept of Edge Triggering ... 6-7
6.2.4 Types of Edge Triggered Flip Flops ... 6-7
6.3 Gated Latches (Level Triggered SR Flip Flop) ... 6-7
6.3.1 Types of Level Triggered (Clocked) Flip Flops ... 6-7
6.4 The Gated S-R Latch (Level Triggered S-R Flip Flop) ... 6-7
6.4.1 Positive Level Triggered SR Flip-flop ... 6-7
6.4.2 Negative Level Triggered SR Flip Flop ... 6-8
6.5 The Gated D Latch (Clocked D Flip Flop) ... 6-9
6.6 Gated JK Latch (Level Triggered JK Flip Flop) ... 6-10
6.6.1 Race Around Condition in JK Latch ... 6-10
6.6.2 Difference between Latch and Flip-flop ... 6-11
6.7 Edge Triggered Flip Flops ... 6-11
6.7.1 Positive Edge Triggered S-R Flip Flop ... 6-11
6.7.2 Negative Edge Triggered S-R Flip Flop ... 6-13
6.8 Edge Triggered D Flip Flop ... 6-13
6.8.1 Positive Edge Triggered D Flip Flop ... 6-13
6.8.2 Negative Edge Triggered D Flip Flop ... 6-14


6.8.3 Applications of D Flip-flop ... 6-14
6.9 Edge Triggered J-K Flip Flop ... 6-14
6.9.1 Positive Edge Triggered JK Flip Flop ... 6-14
6.9.2 Characteristic Equation of JK Flip Flop ... 6-16
6.9.3 How does an Edge Triggered JK FF Avoid Race Around Condition ? ... 6-16
6.9.4 Negative Edge Triggered JK Flip-Flop ... 6-16
6.10 Toggle Flip Flop (T Flip Flop) ... 6-17
6.10.1 Positive Edge Triggered T-FF ... 6-17
6.10.2 Negative Edge Triggered T Flip Flop ... 6-18
6.10.3 Application of T F/F ... 6-18
6.11 Master Slave (MS) JK Flip Flop ... 6-18
6.12 Preset and Clear Inputs ... 6-20
6.12.1 S-R Flip-Flop with Preset and Clear Inputs ... 6-20
6.12.2 Synchronous Preset and Clear Inputs ... 6-21
6.12.3 JK Flip Flop with Preset and Clear Inputs ... 6-21
6.12.4 Applications of JK Flip Flop ... 6-21
6.13 Various Representations of Flip Flops ... 6-21
6.13.1 Characteristic Equations ... 6-22
6.14 Excitation Table of Flip-Flop ... 6-22
6.14.1 Excitation Table of SR Flip Flops ... 6-22
6.14.2 Excitation Table of D Flip Flop ... 6-23
6.14.3 Excitation Table of JK Flip Flop ... 6-23
6.14.4 Excitation Table of T Flip Flop ... 6-23
6.15 Conversion of Flip Flops ... 6-23
6.15.1 Conversion from S-R Flip Flop to D Flip Flop ... 6-24
6.15.2 Conversion of JK FF to T FF ... 6-24
6.15.3 SR Flip Flop to T Flip Flop ... 6-25
6.15.4 SR Flip Flop to JK Flip Flop ... 6-25
6.15.5 Conversion of D Flip Flop to T Flip Flop ... 6-26
6.15.6 T Flip Flop to D Flip Flop Conversion ... 6-26
6.15.7 JK Flip Flop to D Flip Flop Conversion ... 6-27
6.15.8 JK Flip Flop to SR Flip Flop Conversion ... 6-27
6.15.9 D FF to SR FF Conversion ... 6-28
6.15.10 T FF to SR FF Conversion ... 6-28
6.15.11 Conversion from D FF to JK FF ... 6-28
6.16 Applications of Flip Flops ... 6-29
6.17 Study of Flip-Flop ICs ... 6-29
6.17.1 SN74LS74A Dual D-Type Positive Edge-Triggered Flip-Flop, Low Power Schottky ... 6-29
6.17.2 SN74LS76A Dual JK Flip-Flop with Set and Clear, Low Power Schottky ... 6-30
6.18 Analysis of Clocked Sequential Circuits ... 6-31
6.18.1 State Table ... 6-31
6.18.2 State Diagram ... 6-31
6.18.3 State Equation ... 6-32
6.19 Design of Clocked Synchronous State Machine using State Diagram ... 6-34
6.20 Case Study : Use of Sequential Logic Design in a Simple Traffic Light Controller ... 6-35
Review Questions ... 6-37

Unit 3

Chapter 7 : Counters (7-1 to 7-34)

Syllabus : Application of flip-flops : Counters - Asynchronous, Synchronous and modulo n counters, Study of 7490 modulus n counter ICs & their applications to implement mod counters.

7.1 Introduction ... 7-2
7.1.1 Types of Counters ... 7-2
7.1.2 Classification of Counters ... 7-2
7.2 Asynchronous / Ripple Up Counters ... 7-2


7.2.1 Two Bit Asynchronous Up Counter using JK Flip-Flops ... 7-4
7.2.2 3 Bit Asynchronous Up Counter ... 7-4
7.2.3 4 Bit Asynchronous Up Counter ... 7-5
7.2.4 State Diagram of a Counter ... 7-6
7.3 Asynchronous Down Counters ... 7-6
7.3.1 3-Bit Asynchronous Down Counter ... 7-6
7.4 UP / DOWN Counters ... 7-7
7.4.1 UP/DOWN Ripple Counters ... 7-7
7.5 Modulus of the Counter (MOD-N Counter) ... 7-8
7.5.1 Design of Asynchronous MOD Counters ... 7-8
7.5.2 Frequency Division Taking Place in Asynchronous Counters ... 7-10
7.5.3 Disadvantages of Ripple Counters ... 7-10
7.6 Ripple Counter IC 7490 (Decade Counter) ... 7-10
7.6.1 The Internal Diagram of IC 7490 ... 7-11
7.6.2 Other Applications of IC 7490 ... 7-12
7.6.3 Symmetrical Bi-quinary Divide by Ten Counter ... 7-12
7.7 Problems Faced by Ripple Counters ... 7-19
7.8 Synchronous Counters ... 7-19
7.8.1 2-Bit Synchronous Up Counter ... 7-19
7.8.2 3-Bit Synchronous Binary Up Counter ... 7-20
7.8.3 Design of the 3 Bit Synchronous Counter ... 7-21
7.8.4 Four Bit Synchronous Up Counter ... 7-23
7.9 Modulo-N Synchronous Counters ... 7-24
7.9.1 Synchronous Decade Counter ... 7-24
7.10 UP / DOWN Synchronous Counter ... 7-27
7.10.1 3-bit Up/Down Synchronous Counter ... 7-28
7.10.2 Advantages of Synchronous Counter ... 7-28
7.10.3 Comparison of Synchronous and Asynchronous Counters ... 7-29
7.11 Lock Out Condition ... 7-29
7.11.1 Bushless Circuit ... 7-29
7.12 Bush Diagram ... 7-31
7.13 Applications of Counters ... 7-33
Review Questions ... 7-33

Unit 3

Chapter 8 : Registers (8-1 to 8-16)

Syllabus : Shift register types (SISO, SIPO, PISO & PIPO) & applications.

8.1 Introduction ... 8-2
8.2 Data Formats ... 8-2
8.3 Classification of Registers ... 8-2
8.4 Buffer Registers ... 8-2
8.5 Shift Registers ... 8-3
8.5.1 Serial Input Serial Output (Shift Left Mode) ... 8-4
8.5.2 Serial In Serial Out (Shift Right Mode) ... 8-5
8.5.3 Applications of Serial Operation ... 8-6
8.6 Serial In Parallel Out (SIPO) ... 8-7
8.7 Parallel In Serial Out Mode (PISO) ... 8-7
8.8 Parallel In Parallel Out (PIPO) ... 8-8
8.9 Bidirectional Shift Register ... 8-9
8.9.1 A 3-bit Bidirectional Register using the JK Flip Flops ... 8-9
8.10 Applications of Shift Registers ... 8-10
8.10.1 Serial to Parallel Converter ... 8-10
8.10.2 Parallel to Serial Converter ... 8-10
8.10.3 Ring Counter ... 8-10
8.10.4 Johnson's Counter (Twisted / Switch Tail Ring Counter) ... 8-12
8.10.5 Sequence Generator ... 8-16
8.11 Sequence Detector ... 8-16


8.11.1 Pseudo Random Binary Sequence (PRBS) Generator ... 8-16
Review Questions ... 8-16

Unit 4

Chapter 9 : Computer Organization & Processor (9-1 to 9-18)

Syllabus : Computer organization and computer architecture, Organization, Functions and types of computer units - CPU (Typical organization, Functions, Types), Memory (Types and their uses in computer), IO (Types and functions) and system bus (Address, data and control, Typical control lines, Multiple-Bus Hierarchies); Von Neumann and Harvard architecture; Instruction cycle.
Processor : Single bus organization of CPU; ALU (ALU signals, functions and types); Register (Types and functions of user visible, Control and status registers such as general purpose, Address registers, Data registers, Flags, PC, MAR, MBR, IR) & control unit (Control signals and typical organization of hard wired and microprogrammed CU), Micro Operations (Fetch, Indirect, Execute, Interrupt) and control signals for these micro operations.
Case study : 8086 processor, PCI bus.

9.1 Introduction ... 9-2
9.2 Basic Organization of Computer and Block Level Description of Functional Units ... 9-2
9.2.1 Structural Components of a Computer ... 9-2
9.2.2 Functional View of a Computer ... 9-3
9.3 Von Neumann and Harvard Architecture ... 9-3
9.3.1 Von Neumann Architecture ... 9-3
9.3.2 Harvard Architecture ... 9-6
9.4 Basic Instruction Cycle ... 9-6
9.4.1 Interrupt Cycle ... 9-6
9.5 CPU Architecture and Register Organization ... 9-8
9.5.1 Interrupt Control ... 9-9
9.5.2 Timing and Control Unit ... 9-9
9.6 Instruction, Micro-instructions and Micro-operations : Interpretation and Sequencing ... 9-9
9.6.1 Fetch Cycle ... 9-10
9.6.2 Execute Cycle ... 9-10
9.6.3 Interrupt Cycle ... 9-11
9.6.4 Examples of Microprograms ... 9-12
9.6.5 Applications of Microprogramming ... 9-14
9.7 Control Unit : Hardwired Control Unit Design Methods ... 9-15
9.8 Control Unit : Soft Wired (Microprogrammed) Control Unit Design Methods ... 9-16
9.8.1 Wilkie's Microprogrammed Control Unit ... 9-17
9.8.2 Comparison between Hardwired and Micro-programmed Control ... 9-18
Review Questions ... 9-18

Unit 5

Chapter 10 : Processor Instructions & Processor Enhancements (10-1 to 10-40)

Syllabus : Instruction : Elements of machine instruction; Instruction representation (Opcode and mnemonics, Assembly language elements); Instruction format and 0-1-2-3 address formats, Types of operands, Addressing modes; Instruction types based on operations (Functions and examples of each); Key characteristics of RISC and CISC; Interrupt : Its purpose, Types, Classes and interrupt handling (ISR, Multiple interrupts), Exceptions; Instruction pipelining (Operation and speed up).
Multiprocessor systems : Taxonomy of Parallel Processor Architectures, Two types of MIMD - clusters and SMP (Organization and benefits) and multicore processor (Various alternatives and advantages of multicores), Typical features of multicore Intel Core i7.
Case study : 8086 Assembly language programming.

10.1 Instruction Encoding Format ... 10-2
10.2 Instruction Format and 0-1-2-3 Address Formats ... 10-5
10.2.1 Instruction Formats ... 10-5
10.2.2 Instruction Word Format - Number of Addresses ... 10-5
10.3 Addressing Modes ... 10-6


10.3.1 Examples on Addressing Modes ... 10-8
10.4 Instruction Set of 8085 ... 10-9
10.5 Reduced Instruction Set Computer Principles ... 10-11
10.5.1 RISC Versus CISC ... 10-11
10.5.2 RISC Properties ... 10-11
10.5.3 Register Window ... 10-12
10.5.4 Miscellaneous Features or Advantages of RISC Systems ... 10-12
10.5.5 RISC Shortcomings ... 10-14
10.5.6 ON-Chip Register File Versus Cache Evaluation ... 10-14
10.6 Polling and Interrupts ... 10-14
10.7 Pipeline Processing ... 10-15
10.7.1 Non-Pipelined System Versus Two Stage Pipelining ... 10-15
10.7.2 Basic Pipelined Datapath and Control for a Six Stage CPU Instruction Pipeline ... 10-16
10.7.3 Linear Pipeline Processors ... 10-17
10.7.3.1 Asynchronous and Synchronous Linear Pipelining ... 10-17
10.7.3.2 Clocking and Timing Control ... 10-18
10.7.3.3 Speedup, Efficiency and Throughput ... 10-19
10.7.4 Non Linear Pipeline Processors ... 10-20
10.7.4.1 Collision Free Scheduling or Job Sequencing ... 10-22
10.8 Instruction Pipelining and Pipelining Stages ... 10-27
10.9 Pipeline Hazards ... 10-28
10.9.1 Methods to Resolve the Data Hazards and Advances in Pipelining ... 10-29
10.9.1.1 Pipeline Stalls ... 10-29
10.9.1.2 Operand Forwarding (or) Bypassing ... 10-29
10.9.1.3 Dynamic Instruction Scheduling (or) Out-Of-Order (OOO) Execution ... 10-30
10.9.2 Handling of Branch Instructions to Resolve Control Hazards ... 10-30
10.9.2.1 Pre-Fetch Target Instruction ... 10-30
10.9.2.2 Branch Target Buffer (BTB) ... 10-30
10.9.2.3 Loop Buffer ... 10-30
10.9.2.4 Branch Prediction ... 10-31
10.9.2.5 Pipeline Stall (Delayed Branch) ... 10-31
10.9.2.6 Loop Unrolling Technique ... 10-31
10.9.2.7 Software Scheduling or Software Pipelining ... 10-31
10.9.2.8 Trace Scheduling ... 10-32
10.9.2.9 Predicated Execution ... 10-33
10.9.2.10 Speculative Loading ... 10-33
10.9.2.11 Register Tagging ... 10-33
10.9.3 Branch Prediction ... 10-34
10.9.3.1 Misprediction Penalty ... 10-34
10.9.3.2 Static Branch Prediction ... 10-34
10.9.3.3 Branch-Target Buffer or Branch-Target Address Cache ... 10-34
10.9.3.4 Dynamic Branch Prediction ... 10-34
10.9.3.5 One-bit Dynamic Branch Predictor ... 10-35
10.9.3.6 Two-bit Prediction ... 10-35
10.10 Multiprocessor Systems and Multicore Processor (Intel Core i7 Processor) ... 10-35
10.10.1 Overlapping the CPU and Memory or I/O Operations ... 10-35
10.11 Flynn's Classifications ... 10-35
10.11.1 Flynn's Classification of Parallel Computing ... 10-35
10.11.2 i5/i7 Mobile Version ... 10-38
Review Questions ... 10-40

Unit 6

Chapter 11 : Memory & Input / Output Systems (11-1 to 11-38)

Syllabus : Memory Systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory to processor, Memory read and write cycle, Characteristics of semiconductor memory : SRAM, DRAM and ROM, Cache memory - Principle of locality, Organization, Mapping functions, Write policies, Replacement policies, Multilevel caches, Cache coherence.
Input / Output systems : I/O module, Programmed I/O, Interrupt driven I/O, Direct Memory Access (DMA).
Case study : USB flash drive.


11.1 Introduction to Memory and Memory Parameters ... 11-2
11.1.1 Bytes and Bits ... 11-3
11.2 Memory Hierarchy : Classifications of Primary and Secondary Memories ... 11-3
11.3 Types of RAM ... 11-4
11.4 ROM (Read Only Memory) ... 11-5
11.4.1 Types of ROM ... 11-5
11.4.2 Magnetic Memory ... 11-6
11.4.3 Optical Memory ... 11-7
11.5 Allocation Policies ... 11-9
11.6 Signals to Connect Memory to Processor and Internal Organization of Memory ... 11-10
11.7 Memory Chip Size and Numbers ... 11-11
11.8 Cache Memory Concept, Architecture (L1, L2, L3) and Cache Consistency ... 11-18
11.8.1 Cache Operation ... 11-18
11.8.2 Principles of Locality of Reference ... 11-18
11.8.3 Cache Performance ... 11-19
11.8.4 Cache Architectures ... 11-19
11.9 Cache Mapping Techniques ... 11-20
11.9.1 Direct Mapping Technique ... 11-20
11.9.2 Fully Associative Mapping ... 11-21
11.9.3 Set Associative Mapping ... 11-22
11.9.4 Write Policy ... 11-24
11.9.5 Replacement Algorithms ... 11-25
11.9.6 Cost and Performance Measurement of Two Level Memory Hierarchy ... 11-28
11.9.7 Cache Consistency (Also Known as Cache Coherency) ... 11-28
11.9.8 Bus Master / Cache Interaction for Cache Coherency ... 11-28
11.10 Pentium Processor Cache Unit ... 11-29
11.10.1 Memory Reads Initiated by the Pentium Processor ... 11-29
11.10.2 Memory Writes Initiated by the Pentium Processor ... 11-30
11.10.3 Memory Reads Initiated by Another Bus Master ... 11-31
11.10.4 The MESI Model ... 11-31
11.11 Input / Output System ... 11-32
11.11.1 Parallel Versus Serial Interface ... 11-32
11.11.2 Types of Communication Systems ... 11-33
11.12 I/O Modules and 8089 IO Processor ... 11-33
11.12.1 I/O Module ... 11-34
11.13 Types of Data Transfer Techniques : Programmed I/O, Interrupt Driven I/O and DMA ... 11-34
11.13.1 Programmed I/O ... 11-34
11.13.2 Interrupt Driven I/O ... 11-35
11.13.3 DMA ... 11-36
11.13.4 DMA Transfer Modes ... 11-36
Review Questions ... 11-37


Unit 1

Chapter 1

Digital Logic Families

Syllabus
Digital IC characteristics; TTL : Standard TTL characteristics, Operation of TTL NAND gate; CMOS : Standard CMOS characteristics, Operation of CMOS NAND gate; Comparison of TTL & CMOS.
Case study : CMOS 4000 series ICs.

Chapter Contents
1.1 Logic Families
1.2 Classification of Logic Families
1.3 Characteristics of Digital ICs
1.4 Transistor-Transistor Logic (TTL)
1.5 Open Collector Outputs (TTL)
1.6 Standard TTL Characteristics
1.7 MOS - Logic Family
1.8 CMOS Logic
1.9 Standard CMOS Characteristics
1.10 Comparison of CMOS and TTL
1.11 Case Study : CMOS 4000 series ICs


1.1 Logic Families :
SPPU : Dec. 08, May 10, Dec. 19.
University Questions.
Q. 1 What is logic family ? (Dec. 08, May 10, Dec. 19, 2 Marks)

Definition :
• Logic families are defined as the type of logic circuit used in an IC. Various digital ICs available in the market belong to various types. Each type is known as a logic family.
• Various digital ICs available in the market belong to various types. These types are known as “families”.
• Based on the components and devices internally used, the digital IC families are named as RTL (Resistor Transistor Logic), TTL (Transistor Transistor Logic), DTL (Diode Transistor Logic), CMOS etc.

Features of logic families :
• Table 1.1.1 gives the comparison of some of the outstanding features of these logic families.

Table 1.1.1 : Comparison of important features of logic families
Sr. No. | Characteristics          | TTL              | CMOS              | ECL
1.      | Power input              | Moderate to high | Low               | Moderate
2.      | Frequency limit          | High             | Moderate          | Very high
3.      | Circuit density          | Moderate to high | High to very high | Moderate
4.      | Circuit types per family | High             | High              | Moderate

• Thus the TTL family has great versatility. It has high speed as well (delay less than 1 nS for some TTL subfamilies).
• The CMOS family is popular because of its low input power and very high circuit density, i.e. more circuits can be placed in the same volume of IC.
• ECL is used for very high speed digital circuits.
• But it needs more input power, and fewer types of logic circuits are available in ECL than in TTL and CMOS.

1.1.1 Classification Based on Circuit Complexity :
• Logic circuits of different types are available as integrated circuits.
• Depending on the level of complexity, the integrated circuits are classified into four categories as follows :
1. SSI : Small Scale Integration.
2. MSI : Medium Scale Integration.
3. LSI : Large Scale Integration.
4. VLSI : Very Large Scale Integration.
• The number of components (diodes, transistors, gates etc.) in SSI is the lowest and that in VLSI is the highest.
1. SSI < 10 components.
2. MSI < 100 components.
3. LSI > 100 components.
4. VLSI > 1000 components.

1.2 Classification of Logic Families :

1.2.1 Classification Based on Devices Used :
SPPU : Dec. 08, May 10.
University Questions.
Q. 1 Give the classification of logic families. (Dec. 08, May 10, 4 Marks)

• The two basic techniques for manufacturing ICs are :
1. Bipolar technology.
2. Unipolar devices - Metal Oxide Semiconductor (MOS) technology.
• The classification of logic families is shown in Fig. 1.2.1.

(C-205) Fig. 1.2.1 : Classification of logic families


Bipolar families :
• The bipolar families of logic circuits use bipolar transistors fabricated on the chip.
• That means all the gates belonging to the bipolar family use transistorised circuits.
• In the bipolar category there are three basic families called Diode Transistor Logic (DTL), Transistor Transistor Logic (TTL) and Emitter Coupled Logic (ECL).
• DTL uses diodes and transistors; TTL uses transistors as the main elements.
• TTL has become the most popular family in SSI (Small Scale Integration) and MSI (Medium Scale Integration) chips, while ECL is the fastest logic family, which is used for high speed applications.
• In the “bipolar saturated” logic families, the bipolar transistor is used as the main device.
• It is used as a switch and operated in the saturation or cutoff regions.
• TTL is an example of saturated bipolar logic.
• In unsaturated bipolar logic, the bipolar transistors are not driven into hard saturation.
• This increases the speed of operation. So unsaturated bipolar ICs such as Schottky TTL and ECL (Emitter Coupled Logic) are much faster as compared to TTL.
• All these ICs are fabricated on silicon chips using different fabrication technologies.

Unipolar families :
• The MOS family uses MOS Field Effect Transistors (MOSFETs) fabricated on the chip. So all the gates belonging to the MOS family use MOSFET based circuits.
• In the MOS category there are three logic families, namely the PMOS (p-channel MOSFETs) family, the NMOS (n-channel MOSFETs) family and the CMOS (Complementary MOSFETs) family.
• PMOS is the oldest and slowest type. NMOS is used in the LSI (Large Scale Integration) field, i.e. for microprocessors and memories.
• CMOS, which uses a push-pull arrangement of n-channel and p-channel MOSFETs, is extensively used where low power consumption is needed, such as in pocket calculators.

1.3 Characteristics of Digital ICs :
• Even though there are various logic families, the general characteristics, their definitions, and the terminologies used for all of them have been standardized.
• So let us discuss some of the most important general characteristics that are applicable to all the digital ICs first, and then discuss them for some particular families.
• The important characteristics of all the digital IC families are as follows :
1. Voltage and current parameters.
2. Fan-in and fan-out.
3. Noise margin.
4. Propagation delay (speed).
5. Power dissipation.
6. Operating temperature.
7. Invalid voltage levels.
8. Figure of merit (SPP).

1.3.1 Voltage and Current Parameters :
SPPU : Dec. 08, May 17.
University Questions.
Q. 1 Explain any four characteristics of digital ICs. (Dec. 08, 2 Marks)
Q. 2 Explain any three characteristics of digital ICs. (May 17, 6 Marks)

Voltage parameters (Threshold levels) :
• Ideally the input voltage levels of 0 V and + 5 V (for TTL) are called the logic 0 and logic 1 levels respectively.
• However, practically we won't always observe or obtain voltage levels matching these values exactly.
• Therefore it is necessary to define the worst case input voltages.

1. VIL (max) - Worst case low level input voltage :
• This is the maximum value of input voltage which is to be considered as a logic 0 level.
• If the input voltage is higher than VIL (max), then it will not be treated as a low (0) input level.

2. VIH (min) - Worst case high level input voltage :
• This is the minimum value of the input voltage which is to be considered as a logic 1 level.
• If the input voltage is lower than VIH (min), then it will not be treated as a High (1) input.


3. VOH (min) - Worst case high level output voltage :
• This is the minimum value of the output voltage which will be considered as a logic HIGH (1) level.
• If the output voltage is lower than this level then it won't be treated as a HIGH (1) output.

4. VOL (max) - Worst case low level output voltage :
• This is the maximum value of the output voltage which will be considered as a logic LOW (0) level.
• If the output voltage is higher than this value then it won't be treated as a LOW (0) output. All the voltage parameters are shown in Fig. 1.3.1.

(C-7584) Fig. 1.3.1 : Voltage parameters

• The voltage parameters can be shown on a digital circuit consisting of gates, as shown in Fig. 1.3.2.
• Note that the NAND and NOT gates shown can be of TTL, ECL, CMOS or any other type.

(C-207) Fig. 1.3.2 : Voltage parameters on a logic circuit

Current parameters :
1. IIL - Low level input current :
• It is the current that flows into the input terminals when a low level input voltage in the specified range is applied.
2. IIH - High level input current :
• It is the current that flows into the input terminals when a high level input voltage in the specified range is applied.
3. IOL - Low level output current :
• This is the current that flows out of the output when the output voltage happens to be in the specified low (0) voltage range and a specified load is applied.
4. IOH - High level output current :
• This is the current flowing from the output when the output voltage happens to be in the specified HIGH (1) voltage range and a specified load is applied.
• If the output current flows into the output terminal then it is called a sinking current, and if the output current flows out of the output terminal then it is called a sourcing current.
• The current parameters are displayed on the logic circuit shown in Fig. 1.3.3.

(C-208) Fig. 1.3.3(a), (b) : Current parameters

• Note that the actual current directions can be opposite to those shown in Fig. 1.3.3, depending on the logic family.
• Note that current flowing into a node or device is considered positive and current flowing out of a node or device is considered negative.

1.3.2 Fan-in and Fan-out :
SPPU : May 17, May 18, Dec. 19.
University Questions.
Q. 1 Explain any three characteristics of digital ICs. (May 17, 6 Marks)
Q. 2 Define following terms related to logic families : 1. Fan-in 2. Fan-out. (May 18, 4 Marks)
Q. 3 What is Logic Family ? Explain the terms : 1. Fan out 2. Propagation Delay. (Dec. 19, 6 Marks)

Fan-in :
• Fan-in is defined as the number of inputs a gate has. For example, a two input gate will have a fan-in equal to 2.
Fan-out :
• Fan-out is defined as the maximum number of inputs of the same IC family that a gate can drive without falling outside the specified output voltage limits. (A simple current-based estimate of fan-out is sketched below.)
• A higher fan out indicates a higher current supplying capacity of a gate.
• For example, a fan out of 5 indicates that the gate can drive (supply current to) at the most 5 inputs of the same IC family.
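The fan-out figure quoted for a family ultimately comes from the current parameters of section 1.3.1 : in the HIGH state the driving output must source IIH to every load input, and in the LOW state it must sink IIL from every load input. The sketch below estimates fan-out from these currents; the relation fan-out = min(IOH / IIH, IOL / IIL) is the usual rule of thumb rather than a formula stated in this text, and the current values used are assumed, 74LS-like example figures only.

```python
# Hypothetical illustration: estimating fan-out from data-sheet current limits.
# All currents are in microamperes and are assumed example values,
# not figures quoted in this chapter.

def fan_out(i_oh_ua, i_ih_ua, i_ol_ua, i_il_ua):
    """Worst-case number of like inputs that one output can drive."""
    high_state = i_oh_ua // i_ih_ua   # loads it can source in the HIGH state
    low_state = i_ol_ua // i_il_ua    # loads it can sink in the LOW state
    return min(high_state, low_state)

# IOH = 400 uA, IIH = 20 uA, IOL = 8000 uA (8 mA), IIL = 400 uA (0.4 mA)
print(fan_out(400, 20, 8000, 400))   # -> 20
```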


1.3.3 Noise Margin :
SPPU : Dec. 08, Dec. 15, May 17, May 18.
University Questions.
Q. 1 Explain noise margin. (Dec. 08, 2 Marks)
Q. 2 Explain the following TTL characteristics : 1. Noise Immunity 2. High level input voltage (VNH) 3. Figure of Merit (Dec. 15, 6 Marks)
Q. 3 Explain any three characteristics of digital ICs. (May 17, 6 Marks)
Q. 4 Define following terms related to logic families : Noise margin. (May 18, 2 Marks)

• To understand the meaning of the term “Noise Margin” or “Noise Immunity”, refer to the input and output voltage profiles shown in Fig. 1.3.4.

(C-210) Fig. 1.3.4 : (a) Input profile (b) Output profile

• Noise is an unwanted electrical disturbance which may induce some voltage in the connecting wires used between two gates or from a gate output to a load.

Definition :
• Noise immunity is defined as the ability of a logic circuit to tolerate noise without causing the output to change undesirably.
• A quantitative measure of the noise immunity of a logic family is known as noise margin.
• In order to avoid the effects of noise voltage, the designers adjust the voltage levels VOH (min) and VIH (min) to different levels with some difference between them, as shown in Fig. 1.3.4.
• The difference between VOH (min) and VIH (min) is known as the high level noise margin VNH.
• Similarly, the difference between VIL (max) and VOL (max) is called the low level noise margin VNL.
• High level noise margin, VNH = VOH (min) – VIH (min)
• Low level noise margin, VNL = VIL (max) – VOL (max)
• When a high logic output is driving a logic circuit input, any negative noise spike greater than VNH can force the voltage to drop into the invalid range.
• Similarly, when a low logic output is driving a logic circuit input, any positive noise spike greater than VNL can force the voltage to go into the invalid range.

1.3.4 Propagation Delay (Speed of Operation) :
SPPU : Dec. 08, May 18, Dec. 19.
University Questions.
Q. 1 Explain propagation delay. (Dec. 08, 2 Marks)
Q. 2 Define following terms related to logic families : Propagation delay. (May 18, 2 Marks)
Q. 3 What is Logic Family ? Explain the terms : 1. Fan out 2. Propagation Delay. (Dec. 19, 6 Marks)

Definition :
• The output of a logic gate does not change its state instantaneously when the state of its input is changed.
• There is a time delay between these two time instants, which is called the propagation delay.
• Thus propagation delay is defined as the time delay between the instant of application of an input pulse and the instant of occurrence of the corresponding output pulse. This is shown in Fig. 1.3.5.


(C-211) Fig. 1.3.5 : (a) Propagation delays for an inverter (b) Propagation delays for an AND gate

• From Fig. 1.3.5 it is observed that there are two propagation delays.
1. tPHL : It is the propagation delay measured when the output is making a transition from the HIGH (1) to the LOW (0) state.
2. tPLH : This is the propagation delay measured when the output makes a transition from the LOW (0) to the HIGH (1) state.
• The values of tPHL and tPLH are not always equal. If they are not equal, then the one which is higher is considered as the propagation delay time of the gate.
• The propagation delays are measured between the points corresponding to the 50% levels, as shown in Fig. 1.3.5.
• Ideally the propagation delay should be zero and practically it should be as short as possible.
• The values of propagation delay are used to measure how fast a logic circuit is.
• For example, a logic circuit with a propagation time of 5 nS will be a faster logic circuit than one with a 10 nS propagation time, under the specified load conditions.

1.3.5 Power Dissipation :
SPPU : Dec. 08, May 18.
University Questions.
Q. 1 Explain the power dissipation. (Dec. 08, 2 Marks)
Q. 2 Define following term related to logic families : Power dissipation. (May 18, 2 Marks)

Definition :
• The power dissipated in a logic IC due to the applied voltage and the currents flowing through it is known as its power dissipation.
• This power is dissipated in the IC in the form of heat.
• The power drawn by an IC from the power supply is given by,
P = VCC × ICC
where ICC is the current drawn from the power supply.
• This power is in milliwatts. Care should be taken to reduce the power dissipation taking place in the logic IC in order to protect the IC against damage due to excessive temperature, to reduce the loading on power supplies etc.
• Another important point about power dissipation is that the product of power dissipation and propagation time is roughly constant for a logic family.
• Therefore, if we reduce the power dissipation, the propagation delay will increase in order to keep their product constant (a short numerical illustration is given after section 1.3.6).
• Usually there is only one power supply terminal on any IC. It is denoted by VCC for the TTL ICs and VDD for the CMOS ICs.

1.3.6 Operating Temperature :
• The temperature range acceptable for consumer and industrial applications is 0 to 70 °C and that for military applications is – 55 °C to 125 °C.
• The performance of gates will be within the specified limits only if the temperature is within these ranges.
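As a quick numerical illustration of sections 1.3.4 and 1.3.5, the sketch below computes the power drawn from the supply using P = VCC × ICC and picks the effective propagation delay as the larger of tPHL and tPLH. The supply current and delay figures used are assumed example values, not data-sheet numbers from this text.

```python
# Assumed example values, for illustration only.
VCC = 5.0          # supply voltage in volts
ICC = 2e-3         # supply current in amperes (2 mA, assumed)

P = VCC * ICC      # power dissipation, P = VCC x ICC
print(f"Power dissipation = {P * 1e3:.1f} mW")       # -> 10.0 mW

t_PHL = 7e-9       # HIGH-to-LOW delay in seconds (assumed)
t_PLH = 11e-9      # LOW-to-HIGH delay in seconds (assumed)

t_pd = max(t_PHL, t_PLH)   # the larger value is taken as the gate delay
print(f"Propagation delay = {t_pd * 1e9:.0f} ns")     # -> 11 ns
```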


1.3.7 Figure of Merit (Speed Power Product (SPP)) :
SPPU : May 18.
University Questions.
Q. 1 Define following terms related to logic families : Figure of merit. (May 18, 2 Marks)

Definition :
• The figure of merit of a logic family is the product of power dissipation and propagation delay.
• It is called the speed power product. The speed is specified in seconds and the power is specified in watts.
• Figure of merit = Propagation delay time × Power dissipation.
• Ideally the value of the figure of merit is zero and practically it should be as low as possible.
• Figure of merit is always a compromise between speed and power dissipation.
• That means if we try to reduce the propagation delay then the power dissipation will increase, and vice-versa.

1.3.8 Invalid Voltage Levels :
• The operation of a logic circuit will be proper if and only if its input voltage levels are kept outside the invalid voltage range.
• That means the input voltage should be either lower than VIL (max) or higher than VIH (min).
• An invalid input voltage will produce an unpredictable output response. Therefore it should be avoided.
• When the output is overloaded, there is a possibility of the output voltage going into the invalid range.

1.3.9 Current Sourcing and Current Sinking :
Current sourcing :
• The current sourcing action is illustrated in Fig. 1.3.6(a). Gate-1 acts as a source and Gate-2 acts as a load.
• The output of Gate-1 is high. It supplies a current IIH to the input of Gate-2. This is called the current sourcing action.

(C-1000) Fig. 1.3.6

Current sinking :
• The current sinking action is demonstrated in Fig. 1.3.6(b).
• Gate-2, which is the load gate, acts as a load.
• As soon as the output of Gate-1 goes low, the current starts flowing into the output terminal of Gate-1, as shown in Fig. 1.3.6(b).
• Thus when Gate-1 is accepting the current through its output terminal, it is said that current sinking is taking place.

1.3.10 Power Supply Requirements :
• The supply voltage(s) and the amount of power required by an IC are important characteristics required to choose the proper power supply.

1.4 Transistor-Transistor Logic (TTL) :
• The long form of TTL is Transistor-Transistor Logic. The digital ICs in the TTL family use only transistors as their basic building block.
• TTL ICs were first developed in 1965 and they were known as “standard TTL”.
• This version of TTL circuits is not practically used now due to the availability of advanced versions.
• The standard TTL has been improved to a great extent over the years.
• TTL devices are still used as “glue” logic which connects more complex devices in digital systems.
• The bipolar TTL family as a whole is now becoming obsolete, but we can study it in order to understand basic important concepts about logic families in general.

1.4.1 The Multiple Emitter Transistor :
• The TTL circuits that we are going to discuss use a very special type of transistor.


• The normal transistor has only three terminals, namely collector, base and emitter.
• But this special transistor has more than one emitter, as shown in Fig. 1.4.1(a), and its equivalent circuit is shown in Fig. 1.4.1(b). The number of emitters is equal to the number of inputs of the gate.

(C-1011) Fig. 1.4.1 : (a) Multiple emitter transistor (b) Equivalent circuit

• The multiple emitter input transistor can have up to eight emitters, for an eight input NAND gate.
• In the equivalent circuit of Fig. 1.4.1(b), diodes D1 and D2 represent the two base to emitter junctions whereas D3 represents the collector to base junction.
• In the forthcoming discussions, we are going to replace the multiple emitter transistor by its equivalent circuit shown in Fig. 1.4.1(b).
• This transistor can be turned ON by forward biasing either (or both) of the diodes D1 and D2.
• This transistor will be in the OFF state if and only if both the base-emitter junctions (D1 and D2) are reverse biased.

Standard TTL :
• The 7400 TTL series is known as the standard TTL series. The TTL gates that we are going to discuss belong to this family.

1.4.2 Two Input TTL-NAND Gate (Totempole Output) :
SPPU : Dec. 08, Dec. 09, May 10, May 12, Dec. 12, Dec. 13.
University Questions.
Q. 1 Draw three input standard TTL NAND gate circuit and explain its operation. (Dec. 08, 8 Marks)
Q. 2 Draw 2-input standard TTL NAND gate circuit and explain operation of transistor (ON/OFF) with suitable conditions and truth table. (Dec. 09, May 10, 10 Marks)
Q. 3 Explain the working of two input TTL NAND gate with active pull up. Consider various input, output states for explanation. (May 12, Dec. 12, 8 Marks)
Q. 4 Draw and explain 2 input TTL NAND gate with totem pole output. (Dec. 13, 6 Marks)

Circuit diagram :
• A two input TTL-NAND gate is shown in Fig. 1.4.2.
• A and B are the two inputs while Y is the output terminal of this NAND gate.

(C-1012) Fig. 1.4.2 : Two input TTL NAND gate

Operation :
• In order to understand the operation of this circuit, let us replace transistor Q1 by its equivalent circuit as shown in Fig. 1.4.3.

(C-1012) Fig. 1.4.3 : Transistor Q1 is replaced by its equivalent

1. A and B are the input terminals. The input voltages A and B can be either LOW (zero Volt ideally) or HIGH (+ VCC ideally).
2. A and B both LOW (A = B = 0) :
• If A and B both are connected to ground, then both the B-E junctions of transistor Q1 are forward biased.
• Hence diodes D1 and D2 in Fig. 1.4.3 will conduct to force the voltage at point C in Fig. 1.4.3 to 0.7 V.


• This voltage is insufficient to forward bias the base-emitter junction of Q2 due to the presence of D3. Hence Q2 will remain OFF.
• Therefore its collector voltage VX rises to VCC.
• As transistor Q3 is operating in the emitter follower mode, output Y will be pulled up to a high voltage.
• ∴ Y = 1 (HIGH) …For A = B = 0 (LOW)
• The equivalent circuit for this input condition is shown in Fig. 1.4.4(a).

(C-1013) Fig. 1.4.4(a) : Equivalent circuit for A = B = 0

3. Either A or B LOW (A = 0 or B = 0) :
• If any one input (A or B) is connected to ground with the other terminal left open or connected to + VCC, then the corresponding diode (D1 or D2) will conduct.
• This will pull down the voltage at “C” to 0.7 V (Fig. 1.4.3).
• This voltage is insufficient to turn ON D3 and Q2. So Q2 remains OFF.
• So the collector voltage VX of Q2 will be equal to VCC. This voltage acts as the base voltage for Q3.
• As Q3 acts as an emitter follower, output Y will be pulled to VCC.
• ∴ Y = 1 (HIGH) …when any one input is LOW.
• The equivalent circuit for this mode is shown in Fig. 1.4.4(b).

(C-1013) Fig. 1.4.4(b) : Equivalent circuit for A = 1, B = 0

4. A and B both HIGH (A = B = 1) :
• If A and B both are connected to + VCC, then both the diodes D1 and D2 will be reverse biased and do not conduct.
• Therefore the voltage at point “C”, i.e. at the anode of D3, increases to a sufficiently high value.
• Therefore diode D3 is forward biased and base current is supplied to transistor Q2 via R1 and D3, as shown in Fig. 1.4.4(c).

(C-1014) Fig. 1.4.4(c) : Equivalent circuit for A = B = 1

• As Q2 conducts, the voltage at X will drop down and Q3 will be OFF, whereas the voltage at Z (across R3) will increase to a sufficient level to turn ON Q4.
• As Q4 goes into saturation, the output voltage Y will be pulled down to a low voltage.
• ∴ Y = 0 …For A = B = 1
• The equivalent circuit for this mode of operation is shown in Fig. 1.4.4(c).
• This discussion reveals that the circuit operates as a NAND gate.

Truth Table :
• The Truth Table of the two input standard NAND gate is as follows.

(C-7538) Table 1.4.1 : Truth Table of a 2-input NAND gate
A | B | Y
0 | 0 | 1
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
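The input/output behaviour summarised in Table 1.4.1 can be modelled in a few lines of code. This is only a behavioural sketch of the NAND function (it ignores voltages, delays and loading), and the helper name ttl_nand_2 is ours, not a standard routine.

```python
def ttl_nand_2(a: int, b: int) -> int:
    """Behavioural model of a 2-input NAND gate (Table 1.4.1)."""
    return 0 if (a == 1 and b == 1) else 1

# Reproduce Table 1.4.1
for a in (0, 1):
    for b in (0, 1):
        print(a, b, ttl_nand_2(a, b))
# Output rows: 0 0 1 / 0 1 1 / 1 0 1 / 1 1 0
```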


1.4.3 Totem-pole (Active Pull up) Output Stage :
SPPU : May 05, Dec. 07.
University Questions.
Q. 1 Give advantages and disadvantages of totem-pole output-stage arrangement. (May 05, 8 Marks)
Q. 2 Is the totem-pole output suitable for wired-OR logic ? Justify your answer. (Dec. 07, 5 Marks)

• The arrangement of Q3 and Q4 on the output side of a TTL NAND gate is called the totem-pole arrangement.
• It is possible in TTL gates to speed up the charging of the output capacitance without a corresponding increase in the power dissipation with the help of the totem-pole output stage.
• The totem-pole output is also known as active pull-up.

Advantages of totem-pole output stage :
• The advantages of using the totem-pole output stage are as follows :
1. With Q3 in the circuit, the current flowing through R3 will be equal to zero when the output Y = 0, that means when Q4 is ON, as shown in Fig. 1.4.5(a). This is important because it reduces the power dissipation taking place in the circuit.

(C-1015) Fig. 1.4.5(a) : No power dissipation in R3

2. Another advantage of the totem-pole arrangement is when the output Y is HIGH. Here Q3 is ON and acting in the emitter follower mode. It will therefore have a very low output impedance (typically 10 Ω). Therefore the output time constant will be very short for charging up any capacitive load on the output, as shown in Fig. 1.4.5(b).

(C-1015) Fig. 1.4.5(b) : Low output resistance

Disadvantages of Totem-pole output :
1. Q4 in the totem-pole output turns OFF more slowly than Q3 turns ON.
2. So before Q4 is completely turned OFF, Q3 will come into conduction. So for a very short duration of a few nanoseconds, both the transistors will be simultaneously ON.
3. This is called cross conduction and it will draw a relatively large current (30 to 40 mA) from the 5 V supply.


Function of diode D :
• Diode D is added in the circuit in order to keep Q3 OFF when Q4 is already conducting.
• It is important to avoid simultaneous conduction of Q3 and Q4, because it will lead to cross conduction and will increase the power dissipation.
• Thus this diode is used for successfully avoiding the cross conduction.

Function of R3 :
• During the cross conduction, if R3 is not used then there will be no current limiting element in series with Q3 and Q4, and a heavy current will be drawn from the source which can damage the IC.
• This can be avoided by limiting the current by inserting resistor R3 in series with Q3.

1.4.4 Unconnected Inputs :
• If any input of a TTL gate is left open, disconnected or floating, then the corresponding base emitter junction of the input transistor Q1 is not forward biased, as shown in Fig. 1.4.6(a).
• Therefore an open or floating input is equivalent to a logical 1 applied to that input.
• Hence in TTL ICs all the unconnected inputs are treated as logical 1s.
• However, the unused inputs should either be connected to some used input(s) or returned to VCC through a suitable resistor.

1.4.5 Clamping Diodes :
• The TTL inputs should not be subjected to negative voltages. Under normal operating conditions this gets managed easily.
• But if fast voltage transitions are applied at the input, then there is a possibility of ringing (sinusoidal oscillations with positive and negative half cycles). Due to ringing, the inputs will be subjected to negative voltages.
• To suppress this ringing, clamping diodes are generally connected externally in all the TTL circuits as shown in Fig. 1.4.6(a).

(C-1019) Fig. 1.4.6 : (a) Clamping diodes (b) Effect of clamping diodes

• These are fast recovery diodes. They are forward biased during the negative half cycles of the ringing sinusoidal waveform. Hence the negative input voltage will be restricted (clamped) to approximately – 0.7 Volt, as shown in Fig. 1.4.6(b).

1.4.6 5400 Series :
• The temperature range for the devices in the 7400 series is from 0 to 70 °C, over a supply voltage range of 4.75 to 5.25 V.
• So the 7400 series is used for commercial applications.
• But the 5400 TTL series is developed specially for military applications.
• The devices of the 5400 series operate over the temperature range of – 55 to 125 °C and over the supply voltage range of 4.5 to 5.5 Volts.

1.4.7 Three Input TTL NAND Gate :
SPPU : Dec. 10, May 11.
University Questions.
Q. 1 Draw and explain the design of 3 input TTL NAND gate circuit. Also explain various input, output states and corresponding transistor (ON/OFF) states. (Dec. 10, May 11, 12 Marks)


Circuit diagram :
• The circuit diagram of a three input NAND gate is as shown in Fig. 1.4.7.
• Note that the multiple emitter transistor has three emitter terminals which act as the inputs A, B and C to the NAND gate.

(C-1685) Fig. 1.4.7 : Three input TTL NAND gate : (a) Circuit diagram (b) Equivalent circuit of the multiple emitter transistor Q1

Operation :
• The principle of operation of this circuit is exactly the same as that of the two input NAND gate discussed in section 1.4.2.
• If at least one of the inputs is low (0), then at least one of the diodes D1, D2, D3 in Fig. 1.4.7(b) will be conducting. Hence the voltage at point C will be clamped to 0.7 V.
• Hence Q2 will be off. So Q3 will conduct and the output Y = 1 (HIGH).
• ∴ Y = 1 if at least one input is 0.
• If A = B = C = 1 then D1, D2, D3 will be off. So Q2 will be turned on. So Q3 will be turned off and Q4 will be turned on. So the output Y = 0 (LOW).
• ∴ Y = 0 if all the inputs are 1.

Truth Table :
• Table 1.4.2 shows the truth table and the status of the various transistors in the circuit.

(C-6351) Table 1.4.2 : Truth table of the 3-input NAND gate
A | B | C | Q2  | Q3  | Q4  | Y
0 | x | x | OFF | ON  | OFF | 1
x | 0 | x | OFF | ON  | OFF | 1
x | x | 0 | OFF | ON  | OFF | 1
1 | 1 | 1 | ON  | OFF | ON  | 0
(x = either 0 or 1)

1.5 Open Collector Outputs (TTL) :
SPPU : May 08, Dec. 11, May 15.
University Questions.
Q. 1 What is the advantage of open collector output ? Justify your answer with suitable circuit diagram. (May 08, 8 Marks)
Q. 2 Explain the working of two input TTL NAND gate with open collector output. Consider various input, output states for explanation. (Dec. 11, 8 Marks)
Q. 3 What do you mean by open collector output ? Explain with suitable circuit diagram. What is the advantage of this output ? (May 15, 6 Marks)

Circuit diagram :
• We have seen that the gates having totem-pole output cannot be wired ANDed.
• Such a connection becomes possible if another type of output stage, called the open collector output, is used.
• The circuit diagram of a 2-input NAND gate is shown in Fig. 1.5.1.
• You will realize that this is the same TTL NAND gate which we have discussed earlier, but with R3 and Q3 removed.

(C-1027) Fig. 1.5.1 : Open collector 2 input NAND gate


• The collector point of Q4 is brought out as the output, as shown in Fig. 1.5.1, therefore it is called an open collector output.
• For proper operation it is necessary to connect an external resistance R3 between VCC and the open collector output as shown in Fig. 1.5.1.
• This resistance is called the pull up resistance.

Operation :
1. With A = B = 0 :
• With A = B = 0, both the BE junctions of Q1 are forward biased. So Q2 remains OFF.
• Hence no current flows through R4. So VZ ≈ 0 V.
• Therefore Q4 is OFF and its collector voltage is equal to VCC. So Y = 1 when A = B = 0.
2. With A = 0, B = 1 OR A = 1, B = 0 :
• One of the BE junctions is forward biased (the one corresponding to the 0 input).
• So Q2 is OFF and Q4 also is OFF. So its collector voltage is equal to VCC.
• Therefore output Y = 1 when any one input is low.
3. With A = B = 1 :
• When both the inputs are high, transistor Q1 is turned OFF.
• So Q2 will be turned ON. Sufficient voltage is developed across R4. Base current is applied to Q4 and Q4 goes into saturation.
• So the output voltage is equal to VCE (sat) of Q4, which is very small. Thus Y = 0 when A = B = 1. The equivalent circuit for this mode is shown in Fig. 1.5.2.

(C-1028) Fig. 1.5.2 : Equivalent circuit of open collector 2-input NAND gate with A = B = 1

1.5.1 Disadvantages of Open Collector Output :
1. The value of the pull up resistance is high (a few kΩ). Therefore if the load capacitance is large then the RC time constant (R3C) becomes large. This slows down the switching speed of Q4. Therefore the gates having an open collector output will be slow.
2. The second disadvantage is increased power dissipation. When Q4 is ON, a large current flows through the pull up resistor R3. Hence the power dissipation is increased. This problem is eliminated if we use the totem-pole output arrangement.

1.5.2 Advantage of Open Collector Output :
SPPU : May 15.
University Questions.
Q. 1 What do you mean by open collector output ? Explain with suitable circuit diagram. What is the advantage of this output ? (May 15, 6 Marks)

• The main advantage of the open collector output is that wired ANDing becomes possible.

1.5.3 Wired ANDing :
• Wired ANDing means connecting the outputs of gates together to obtain the AND function.
• It is possible to connect the outputs of two or more gates together as shown in Fig. 1.5.3(a).
• The wired ANDing is represented schematically as shown in Fig. 1.5.3(b).

(C-1029) Fig. 1.5.3(a) : Wired ANDing of two open collector TTL gates


(C-1029) Fig. 1.5.3(b) : Symbol of wired ANDing

• Q4A and Q4B are the output transistors of Gate-A and Gate-B respectively. A common pull up resistance is used for all the output transistors.
• The transistors Q4A and Q4B are operated as switches. They operate in either the saturation or cut off regions.
• Therefore the wired ANDing equivalent circuit is as shown in Fig. 1.5.4(a).

(C-1030) Fig. 1.5.4(a) : Equivalent circuit of wired ANDing

• The equivalent circuit of Fig. 1.5.4(a) indicates that the wired AND output will be “0” if any one or all of the output transistor(s) are conducting.
• The output will be high (1) if and only if all the transistors are in their OFF state.
• This is the reason why such a connection is called a wired AND connection.
• Figs. 1.5.4(b), (c) and (d) show that it is possible to wire AND the outputs of any gate such as NAND, NOR or inverter.
• The corresponding output Boolean equations are given along with the diagrams.

(C-1031) Fig. 1.5.4 : (b) Wired ANDing of NAND gate outputs (c) Wired ANDing of inverter outputs (d) Wired ANDing of the NOR outputs
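A wired-AND node goes LOW as soon as any one open-collector output transistor pulls it down, and stays HIGH only when every output transistor is off. The sketch below captures this behaviour; it is a logical model only (the pull-up resistor and current levels are ignored), and the function names are ours.

```python
def open_collector_nand(a: int, b: int) -> int:
    """Logical output of one open-collector 2-input NAND gate."""
    return 0 if (a and b) else 1

def wired_and(*outputs: int) -> int:
    """Tie several open-collector outputs to one pull-up resistor:
    the common node is LOW if any gate pulls it low, HIGH otherwise."""
    return 1 if all(outputs) else 0

# Y = (A.B)' . (C.D)'  -- two NAND outputs wire-ANDed together
y = wired_and(open_collector_nand(1, 1), open_collector_nand(1, 0))
print(y)   # -> 0, because the first gate pulls the common node low
```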


1.5.4 Comparison of Totem-pole and Open Collector Outputs :
SPPU : Dec. 05, May 07, Dec. 13.
University Questions.
Q. 1 With the help of circuit diagram explain the difference between totempole output and open collector output. (Dec. 05, May 07, 10 Marks)
Q. 2 Compare totempole and open collector output configurations in TTL. (Dec. 13, 6 Marks)

• Table 1.5.1 gives the comparison of the totem-pole and open collector outputs of TTL.

Table 1.5.1 : Comparison of totem-pole and open collector outputs
Sr. No. | Parameter | Totem-pole | Open collector
1. | Circuit components on the output side | Q3 (pull up transistor), D and Q4 (pull down transistor) are used. | Only the pull down transistor Q4 is used.
2. | Wired ANDing | Cannot be done | Can easily be done
3. | External pull up resistor | Not required | Required to be connected.
4. | Power dissipation | Low, due to the presence of pull up transistor Q3. | High, due to the current flowing through R3.
5. | Speed | Operating speed is high. | Operating speed is low.

1.6 Standard TTL Characteristics :
SPPU : May 11, Dec. 11, May 12, Dec. 14, Dec. 15, Dec. 17.
University Questions.
Q. 1 Explain various characteristics for TTL logic families. (May 11, Dec. 14, 4 Marks)
Q. 2 Define the following terms related to logic families. Mention typical values for standard TTL family : 1. Propagation delay 2. Fan-out 3. VIL, VIH 4. Noise margin (Dec. 11, May 12, 8 Marks)
Q. 3 Explain the following TTL characteristics : 1. Noise Immunity 2. High level input voltage (VNH) 3. Figure of Merit (Dec. 15, 6 Marks)
Q. 4 Explain standard TTL characteristics in brief. (Dec. 17, 6 Marks)

• Texas Instruments first introduced two TTL series, namely the 54 series and the 74 series.

Supply voltage and temperature ranges :
• Table 1.6.1 gives the supply voltage and temperature ranges for the two TTL families.

Table 1.6.1
TTL series | Supply voltage range | Temperature range
74 series  | 4.75 V to 5.25 V     | 0 °C to 70 °C
54 series  | 4.5 V to 5.5 V       | – 55 °C to 125 °C

Voltage levels :
• Table 1.6.2 shows the input and output logic voltage levels for the TTL 74 series.

Table 1.6.2 : Voltage levels for TTL 74 series
Voltages | Minimum | Typical | Maximum
VOL      | –       | 0.2 V   | 0.4 V
VOH      | 2.4 V   | 3.4 V   | –
VIL      | –       | –       | 0.8 V
VIH      | 2.0 V   | –       | –

Noise margin :
• We have already defined the noise margins as,
High level noise margin, VNH = VOH (min) – VIH (min)
and Low level noise margin, VNL = VIL (max) – VOL (max)
• Substituting the values from Table 1.6.2, the noise margins for the TTL logic family can be calculated.
∴ VNH = 2.4 – 2 = 0.4 V
VNL = 0.8 – 0.4 = 0.4 V
• Thus the noise margin for the TTL family is 0.4 V. This means that as long as the induced noise voltage is less than 0.4 V, the operation of the TTL ICs will be unaffected.
• The noise margins have been shown in section 1.3.3.

Power dissipation :
• The average power dissipation for the standard TTL 74 series is approximately 10 mW.
• It is dependent on parameters such as tolerances, signal level etc.

Propagation delay :
• The propagation delay of a standard TTL gate is approximately 10 nanoseconds (1 nanosecond = 10⁻⁹ seconds).

Fan out :
• A standard TTL gate is capable of driving at the most 10 other TTL gates simultaneously, maintaining its output voltage within the specified limits. Hence the fan out of a TTL gate is 10.
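The noise-margin arithmetic above, and the figure of merit (speed power product) that appears in Table 1.6.3 below, can both be reproduced with a short script. Only values already quoted in this section are used (VOH(min) = 2.4 V, VIH(min) = 2.0 V, VIL(max) = 0.8 V, VOL(max) = 0.4 V, tpd = 10 nS, PD = 10 mW).

```python
# Standard TTL (74 series) values quoted in section 1.6
V_OH_min, V_IH_min = 2.4, 2.0     # volts
V_IL_max, V_OL_max = 0.8, 0.4     # volts

V_NH = V_OH_min - V_IH_min        # high level noise margin
V_NL = V_IL_max - V_OL_max        # low level noise margin
print(V_NH, V_NL)                 # -> 0.4 0.4  (volts)

t_pd = 10e-9                      # propagation delay, 10 ns
P_D  = 10e-3                      # power dissipation, 10 mW
spp  = t_pd * P_D                 # speed power product (figure of merit)
print(spp * 1e12, "pJ")           # -> 100.0 pJ
```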


• All the important characteristics of the TTL logic family are listed in Table 1.6.3.

Table 1.6.3 : Important parameters of TTL logic family
Sr. No. | Parameter | Values
1. | Supply voltage | 74 series : 4.75 to 5.25 V ; 54 series : 4.5 to 5.5 V
2. | Temperature range | 74 series : 0 to 70 °C ; 54 series : – 55 to 125 °C
3. | Voltage levels | VIL (max) = 0.8 V, VOL (max) = 0.4 V, VIH (min) = 2 V, VOH (min) = 2.4 V
4. | Noise margin | 0.4 V
5. | Power dissipation | 10 mW
6. | Propagation delay | 10 nanosec.
7. | Fan out | 10
8. | Figure of merit | 100 pJ

1.6.1 Advantages of TTL :
1. TTL circuits are fast.
2. Low propagation delay.
3. Power dissipation is not dependent on frequency.
4. Compatible with all the other families.
5. Latch ups do not take place.
6. These are not susceptible to damage due to static charges.
7. Higher current sourcing and sinking capabilities.

1.6.2 Disadvantages of TTL :
1. Large power dissipation.
2. Fan out is lower than that of CMOS.
3. Less component density.
4. Can operate only on a + 5 V power supply.
5. Poor noise immunity.

1.7 MOS - Logic Family :
• The MOS logic family uses the MOSFET as the basic device, the way TTL uses the BJT.
• The MOSFETs are of two types, namely the depletion MOSFETs and the enhancement type MOSFETs.
• In logic circuits, the enhancement MOSFETs are used. We use the E-MOSFET as a switch.

1.8 CMOS Logic :
Definition :
• CMOS stands for complementary MOSFET. It is obtained by using a p-channel MOSFET and an n-channel MOSFET simultaneously.
• The p and n channel MOSFETs are connected in series, with their drains connected together and the output taken from the common drain point.
• Input is applied at the common gate terminal formed by connecting the two gates together.
• The 74C00 CMOS series is a group of CMOS circuits which are pin-for-pin and function-for-function compatible with the TTL 7400 devices.
• For example, 74C32 is a quad 2-input OR gate in the CMOS family whereas 7432 is a quad 2-input OR gate in the 7400 TTL family.

1.8.1 CMOS NAND Gate :
SPPU : Dec. 11, Dec. 12.
University Questions.
Q. 1 Draw the structure of two input CMOS NAND gate. Explain its working. (Dec. 11, 4 Marks)
Q. 2 Draw and explain the working of a 2-input CMOS NAND gate. (Dec. 12, 8 Marks)

Circuit diagram :
• The two input CMOS NAND gate is shown in Fig. 1.8.1(a) and its equivalent circuit, obtained by replacing each MOSFET by a switch, is shown in Fig. 1.8.1(b).

(a) 2-input CMOS NAND gate
Fig. 1.8.1 (Contd...)


(b) Equivalent circuit

(C-1058) Fig. 1.8.1

• Q1 and Q2 are p-channel MOSFETs. They are connected in parallel with each other.
• Q3 and Q4 are n-channel MOSFETs. They are connected in series with each other.
• Input A is connected to the gates of Q1 and Q3. So A controls the status of MOSFETs Q1 and Q3.
• Input B is connected to the gates of Q2 and Q4. So B controls the status of MOSFETs Q2 and Q4.

Operation :
1. A = B = 0 :
• With A = 0 and B = 0, both the P-MOSFETs i.e. Q1 and Q2 will be ON. But both the N-MOSFETs i.e. Q3 and Q4 will be OFF.
• As seen from the equivalent circuit of Fig. 1.8.2(a), the output Y = + VDD (logic 1).
• So Y = 1 if A = B = 0.

(C-1059) Fig. 1.8.2(a) : Equivalent circuit for A = B = 0

2. With A = 0 and B = 1 :
• With A = 0 and B = 1, Q1 will continue to be ON and Q3 continues to be OFF. But Q2 will now turn OFF and Q4 will be turned ON.
• The equivalent circuit of this mode is shown in Fig. 1.8.2(b), which shows that the output Y = + VDD, i.e. logic 1. So Y = 1 if A = 0 and B = 1.

(C-1059) Fig. 1.8.2(b) : Equivalent circuit for A = 0, B = 1

3. With A = 1 and B = 0 :
• With A = 1, Q1 will be turned OFF and Q3 will turn ON.
• And with B = 0, Q2 will be turned ON and Q4 will be turned OFF.
• As seen from the equivalent circuit of Fig. 1.8.2(c), the output Y = + VDD (logic 1).

(C-1059) Fig. 1.8.2(c) : Equivalent circuit for A = 1, B = 0

• So Y = 1 if A = 1 and B = 0.

4. With A = 1, B = 1 :
• With A = B = 1, both the P-MOSFETs i.e. Q1 and Q2 will be OFF and both the N-MOSFETs i.e. Q3 and Q4 will be ON.
• The equivalent circuit of this mode is shown in Fig. 1.8.2(d).

(C-1060) Fig. 1.8.2(d) : Equivalent circuit for A = 1, B = 1

• It shows that the output Y = 0 (LOW). So Y = 0 if A = B = 1.

Truth Table :
• Table 1.8.1 shows the truth table of the two input NAND gate.

Table 1.8.1 : Truth Table of a CMOS NAND gate
Inputs (A  B) | Transistors (Q1  Q2  Q3  Q4) | Output Y
0  0          | ON   ON   OFF  OFF           | 1
0  1          | ON   OFF  OFF  ON            | 1
1  0          | OFF  ON   ON   OFF           | 1
1  1          | OFF  OFF  ON   ON            | 0
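The switch-level reasoning used above (parallel p-channel devices to + VDD, series n-channel devices to ground) can be captured directly in code. The sketch below is a logical model only; the transistor names follow Fig. 1.8.1 and the function name is ours.

```python
def cmos_nand_2(a: int, b: int) -> int:
    """Switch-level model of the 2-input CMOS NAND gate of Fig. 1.8.1."""
    # p-channel devices conduct when their gate input is 0
    q1_on = (a == 0)          # Q1 (PMOS, parallel branch to +VDD)
    q2_on = (b == 0)          # Q2 (PMOS, parallel branch to +VDD)
    # n-channel devices conduct when their gate input is 1
    q3_on = (a == 1)          # Q3 (NMOS, series branch to ground)
    q4_on = (b == 1)          # Q4 (NMOS, series branch to ground)

    pull_up = q1_on or q2_on      # any ON PMOS connects Y to +VDD
    pull_down = q3_on and q4_on   # both NMOS must be ON to connect Y to ground
    return 1 if pull_up and not pull_down else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand_2(a, b))   # reproduces Table 1.8.1
```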


1.8.2 CMOS Series :
• The popular CMOS series are the 4000/14000 series, 74C series, 74 HC/HCT (High speed CMOS) and 74 AC/ACT (Advanced CMOS).

1.9 Standard CMOS Characteristics :
• Some of the important CMOS characteristics are as follows :

1.9.1 Power Supply Voltage :
• The 4000/14000 series and 74C series CMOS ICs are capable of operating over a wide range of power supply voltage (typically 3 V to 15 V).
• That means the CMOS ICs from these series are extremely versatile. They can operate on 3 V batteries as well as 5 V TTL compatible power supplies.
• These ICs can operate on higher power supply voltages when a higher noise margin is required.
• The other CMOS families such as 74 HC/HCT, 74 AC/ACT and 74 AHC/AHCT operate over a voltage range of 2 to 6 V.

1.9.2 Logic Voltage Levels :
• The logic voltage levels for different CMOS series have different values. Table 1.9.1 lists the various logic voltage levels for different CMOS series.

Table 1.9.1 : Logic voltage levels (in volts) with VDD = VCC = + 5 V
Parameter | 4000B | 74HC | 74HCT | 74AC | 74ACT | 74AHC | 74AHCT | 74  | 74LS | 74AS | 74ALS
VIH(min)  | 3.5   | 3.5  | 2.0   | 3.5  | 2.0   | 3.85  | 2.0    | 2.0 | 2.0  | 2.0  | 2.0
VIL(max)  | 1.5   | 1.0  | 0.8   | 1.5  | 0.8   | 1.65  | 0.8    | 0.8 | 0.8  | 0.8  | 0.8
VOH(min)  | 4.95  | 4.9  | 4.9   | 4.9  | 4.9   | 4.4   | 3.15   | 2.4 | 2.7  | 2.7  | 2.7
VOL(max)  | 0.05  | 0.1  | 0.1   | 0.1  | 0.1   | 0.44  | 0.1    | 0.4 | 0.5  | 0.5  | 0.4
VNH       | 1.45  | 1.4  | 2.9   | 1.4  | 2.9   | 0.55  | 1.15   | 0.4 | 0.7  | 0.7  | 0.7
VNL       | 1.45  | 0.9  | 0.7   | 1.4  | 0.7   | 1.21  | 0.7    | 0.4 | 0.3  | 0.3  | 0.4
(The first seven columns are CMOS series; the last four are TTL series.)

Important points from Table 1.9.1 :
1. All the voltage levels shown in Table 1.9.1 correspond to a supply voltage of + 5 Volts.
2. Note that for CMOS, VOH ≈ 5 V and VOL ≈ 0 V.

1.9.3 Noise Margins :
SPPU : May 10.
University Questions.
Q. 1 Describe what happens to the following CMOS characteristic as VDD is increased : Noise margin. In which applications is CMOS ideally suited ? (May 10, 3 Marks)

• Table 1.9.1 contains the high level and low level noise margins VNH and VNL for each CMOS and TTL series.
• These are calculated as :
VNH = VOH(min) – VIH(min),
VNL = VIL(max) – VOL(max).
• It can be observed that the CMOS devices have higher noise margins as compared to those of TTL. So CMOS ICs should be preferred to TTL for operation in a noisy environment.
• The noise margins in CMOS will increase further if the supply voltages are increased further.


1.9.4 Power Dissipation :
SPPU : May 06.
University Questions.
Q. 1 Define power dissipation and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

• The power dissipation of CMOS devices is extremely low when they are in the static (stable) state.
• Typically the dc power dissipation is 2.5 nW per gate when VDD = 5 V and 10 nW for VDD = 10 V. This is very small as compared to TTL gates (PD = 10 mW).
• Hence CMOS devices are preferred for battery operated systems.
• But the power dissipation is low only under dc operating conditions and at low frequencies.
• The power dissipation increases as the circuit switching frequency is increased.
• The relation between power dissipation and frequency is demonstrated in Table 1.9.2.

Table 1.9.2 : Relation between PD and frequency
Frequency            | 0 (dc) | 100 kHz | 1 MHz
Power dissipation PD | 10 nW  | 0.1 mW  | 1 mW

1.9.5 Fan Out :
SPPU : May 06.
University Questions.
Q. 1 Define fan out and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

• The input resistance of CMOS devices is very high (10¹² Ω). So their input current is very small, almost zero.
• Therefore one CMOS gate can drive a large number of other CMOS gates. Hence the fan out of CMOS devices will be large as compared to the fan out of TTL.
• Typically the fan out is restricted to 50 for operation below 1 MHz.
• Why is the fan out restricted to only 50 ? This is because the input capacitance of each CMOS gate is about 5 pF. These capacitances act as a load on the driving gate, as shown in Fig. 1.9.1.
• The charging current for these capacitors has to be supplied by the driving gate. This current should not be too large. This limits the fan-out to 50.

(C-1061) Fig. 1.9.1 : Fan out and switching speed are dependent on the input capacitance of load gates

1.9.6 Switching Speed :
SPPU : May 06.
University Questions.
Q. 1 Define speed of operation and give typical values of this parameter with respect to CMOS logic family. (May 06, 2 Marks)

• The output resistance of CMOS is low in both the states (0 or 1) of the output.
• So even though it has to drive large capacitive loads, the switching speed can still be faster than that of NMOS or PMOS devices.
• The NAND gate of the 4000 series has the following values of propagation delay :
Average tpd = 50 nS ….. at VDD = 5 V
Average tpd = 25 nS ….. at VDD = 10 V
• The average time delays for various CMOS ICs are given in Table 1.9.3.

(C-6352) Table 1.9.3 : Switching speeds for various CMOS ICs at VDD = 5 V
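The frequency dependence shown in Table 1.9.2 comes from charging and discharging the load capacitance on every switching cycle. The usual dynamic-power estimate, P ≈ C × VDD² × f, is not derived in this text, so the sketch below should be read as a hedged illustration with an assumed load capacitance; it nevertheless reproduces the linear growth of PD with frequency seen in Table 1.9.2.

```python
# Assumed values for illustration: 40 pF effective switched capacitance, VDD = 5 V.
# The small static dissipation (about 10 nW) is separate and not included here.
C_load = 40e-12          # farads (assumed, not from the text)
V_DD = 5.0               # volts

def dynamic_power(freq_hz: float) -> float:
    """Approximate CMOS dynamic power, P = C * VDD**2 * f."""
    return C_load * V_DD ** 2 * freq_hz

for f in (0.0, 100e3, 1e6):
    print(f"{f:>10.0f} Hz -> {dynamic_power(f) * 1e3:.3f} mW")
# 0 Hz -> 0.000 mW, 100 kHz -> 0.100 mW, 1 MHz -> 1.000 mW (compare Table 1.9.2)
```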


1.9.7 Unconnected Inputs :
• Note that the CMOS inputs should never be left floating.
• All the CMOS inputs should be either connected to 0 V (ground) or VDD, or to other inputs.
• This is necessary to avoid permanent damage of the CMOS ICs.
• Sometimes there are some unused gates on a chip. The inputs of such gates also should be connected to ground or + VDD.
• The CMOS ICs are damaged due to voltages induced at the floating inputs by noise or static charges.
• Such voltages can bias the P-MOS or N-MOS into their conduction state, which may cause overheating and damage.

1.9.8 Advantages of CMOS :
SPPU : Dec. 07.
University Questions.
Q. 1 What are the advantages of CMOS devices over TTL devices ? Explain in short. (Dec. 07, 4 Marks)
1. Low power dissipation.
2. High fan out (typically 50).
3. High noise margin for higher values of VDD.
4. Capable of working over a wide range of supply voltage.
5. Switching speeds comparable to those of TTL.
6. High packaging density (more devices can be accommodated in the same space), since MOS devices need less space.

1.9.9 Disadvantages of CMOS :
1. Propagation delays per gate are longer than those of TTL (25 to 100 nS).
2. Slower than TTL.
3. CMOS ICs can get damaged due to static charge.
4. Latch ups can take place which can damage the device.
5. Need protection circuitry.

1.10 Comparison of CMOS and TTL :
SPPU : Dec. 11, Dec. 12, Dec. 14, Dec. 18.
University Questions.
Q. 1 List difference between CMOS and TTL. (Dec. 11, 4 Marks)
Q. 2 Compare TTL and CMOS logic family. (Dec. 12, Dec. 18, 6 Marks)
Q. 3 Compare TTL and CMOS logic family. Draw CMOS NOR gate. (Dec. 14, 6 Marks)

Table 1.10.1 : Comparison of CMOS and TTL
Sr. No. | Parameter | CMOS | TTL
1. | Device used | N-channel MOSFET and P-channel MOSFET | Bipolar junction transistor
2. | VIH (min) | 3.5 V (VDD = 5 V) | 2 V
3. | VIL (max) | 1.5 V | 0.8 V
4. | VOH (min) | 4.95 V | 2.7 V
5. | VOL (max) | 0.05 V | 0.4 V
6. | High level noise margin | VNH = 1.45 V | 0.4 V
7. | Low level noise margin | VNL = 1.45 V | 0.4 V
8. | Noise immunity | Better than TTL | Less than CMOS
9. | Propagation delay | 105 nS (Metal gate CMOS) | 10 nS (Standard TTL)
10. | Switching speed | Less than TTL. | Faster than CMOS
11. | Power dissipation | PD = 0.1 mW. Hence used for battery backup applications. | 10 mW
12. | Speed power product | 10.5 pJ | 100 pJ
13. | Dependence of PD on frequency | PD increases with increase in frequency. | PD does not depend on frequency.
14. | Fan out | Typically 50. | 10
15. | Unconnected inputs | Unused inputs should be returned to GND or VDD. They should never be left floating. | Inputs can remain floating. The floating inputs are treated as logic 1s.


16. | Component density | More than TTL, since MOSFETs need smaller space while fabricating an IC. | Less than CMOS, since a BJT needs more space.
17. | Operating areas | MOSFETs are operated as switches, i.e. in the ohmic region or cut off region. | Transistors are operated in the saturation or cut off regions.
18. | Power supply voltage | Flexible, from 3 V to 15 V. | Fixed, equal to 5 V.

1.11 Case Study : CMOS 4000 series ICs :

CMOS Series :
• The popular CMOS series are the 4000/14000 series, 74C series, 74 HC/HCT (High speed CMOS) and 74 AC/ACT (Advanced CMOS).
• The following is a list of CMOS 4000-series digital logic integrated circuits.

Logic gates :
1. One input logic gates :
Quad Buffer/Inverter = 4041 (4x CMOS drive)
Quad Buffer = 40109 (dual power-rails for voltage-level translation)
Hex Buffer = 4504 (dual power-rails for voltage-level translation)
Hex Buffer = 4050 (4x 74LS drive)
Hex Inverter = 4049 (4x 74LS drive)
Hex Inverter = 4069
Hex Inverter = 40106 (Schmitt trigger inputs)

2. Two to eight input logic gates :
Configuration  | AND  | NAND | OR   | NOR  | XOR  | XNOR
Quad 2-Input   | 4081 | 4011 | 4071 | 4001 | 4070 | 4077
Triple 3-Input | 4073 | 4023 | 4075 | 4025 |      |
Dual 4-Input   | 4082 | 4012 | 4072 | 4002 |      |
Single 8-Input | 4068 | 4068 | 4078 | 4078 |      |

Quad 2-Input NAND = 4093 (Schmitt trigger inputs)
Dual 2-Input NAND = 40107 (open drain outputs)

Other circuits :
4008 | 4-bit binary full adder
4006 | Shift Registers
4017 | Decade Counter
4024 | Binary Counter
4026 | 7-Segment Decoder
4027 | Dual M-S JK Flip-Flop
4028 | 10:1 Multiplexer
4046 | PLL
4047 | Monostable/Astable Multivibrator

Review Questions

Q. 1 Which are the different logic families ? Write their characteristics.
Q. 2 Explain the use of multi-emitter inputs.
Q. 3 Define the following terms regarding a logic family : 1. Noise margin 2. Propagation delay
Q. 4 Compare the performance of TTL and CMOS logic.
Q. 5 Explain the features of complementary symmetry logic (CMOS).
Q. 6 Mention the advantages and disadvantages of TTL and CMOS IC families.
Q. 7 Define any four characteristics of logic gates.
Q. 8 Classify the ICs according to their scale of integration.
Q. 9 Explain what is meant by TTL ?


Q. 10 Draw the circuit diagram of two input TTL NAND gate and explain its function.
Q. 11 Explain briefly the operation of CMOS NAND gate.
Q. 12 Give the important characteristics of CMOS logic family and explain their importance.
Q. 13 State specifications of standard TTL family.


Unit 1

Chapter 2

Number Systems and Codes

Syllabus
Binary, BCD, Octal, Hexadecimal, Excess-3, Gray code and their conversions.
Case study : Practical applications of various codes in computers.

Chapter Contents
2.1 Introduction
2.2 System or Circuit
2.3 Binary Logic and Logic Levels
2.4 Number Systems
2.5 The Decimal Number System
2.6 The Binary Number System
2.7 Octal Number System
2.8 Hexadecimal Number System
2.9 Conversion of Number Systems
2.10 Conversions Related to Decimal System
2.11 Conversion from Binary to Other Systems
2.12 Conversion from Other Systems to Binary System
2.13 Conversion from Octal to Other Systems
2.14 Conversions Related to Hexadecimal System
2.15 Concept of Coding
2.16 Classification of Codes
2.17 Binary Coded Decimal (BCD) Code
2.18 Non-weighted Codes
2.19 Gray Code
2.20 Code Conversions


2.1 Introduction :
• In the modern world of electronics, the term “digital” is generally associated with a computer.
• This is because the term “digital” is derived from the way computers perform operations, by counting digits.
• For many years, the main application of digital electronics was only in computer systems.
• But today, digital electronics is used in many other applications such as : TV, Radar, Military systems, Medical equipment, Communication systems, Industrial process control and Consumer electronics.

Signals :
• We can define a “signal” as a physical quantity which contains some information and which is a function of one or more independent variables.
• The signals can be of two types :
1. Analog signals 2. Digital signals.

Digital Signals :
Definition :
• A digital signal is defined as a signal which has only a finite number of distinct values.
• Digital signals are not continuous signals. They are discrete signals.
Binary signal :
• If a digital signal has only two distinct values, i.e. 0 and 1, then it is called a binary signal.
Octal signal :
• A digital signal having eight distinct values is called an octal signal.
Hexadecimal signal :
• A digital signal having sixteen distinct values is called a hexadecimal signal.

2.2 System or Circuit :
Definition :
• A system or circuit is defined as the physical device, group of devices or algorithm which performs the required operations on the signal applied at its input.
• Systems or circuits can be of two types :
1. Analog circuits 2. Digital circuits

2.2.1 Digital Systems :
Definition :
• We define a digital system as a system (circuit) which processes or works on digital signals.
• The input signal to a digital system is digital and its output signal is also digital.

(B-1706) Fig. 2.2.1 : Digital circuit

Examples of digital systems :
• Examples of digital circuits are adders, subtractors, registers, flip-flops, counters, microprocessors, digital calculators, computers etc.

2.3 Binary Logic and Logic Levels :
• A logic statement is defined as a statement which is true if some condition is satisfied and false if that condition is not satisfied.
• For example, a bulb turns ON if we close the switch, otherwise it is OFF.

2.3.1 Positive Logic :
• A “LOW” voltage level represents the “logic 0” state and a comparatively “HIGH” voltage level represents the “logic 1” state, as shown in Fig. 2.3.1(a).

(B-439) Fig. 2.3.1(a) : Positive logic

• For example, 0 Volt represents a logic 0 state and + 5 V represents logic 1. This is called “positive logic”.
Positive Logic :
Logic 0 (LOW) = 0 V, Logic 1 (HIGH) = + 5 V.

2.3.2 Negative Logic :
• A “LOW” voltage level represents the “logic 1” state and a “HIGH” voltage level represents the “logic 0” state, as shown in Fig. 2.3.1(b).
• For example, 0 Volts represents a “logic 1” state and + 5 V represents a “logic 0” state. This is called “negative logic”.
Negative Logic :
Logic 0 (LOW) = + 5 V, Logic 1 (HIGH) = 0 V.

(B-439) Fig. 2.3.1(b) : Negative logic

Note : In this chapter, we are going to consider only the positive logic. Also we will assume the logic 0 level corresponds to 0 Volts and the logic 1 level corresponds to + 5 V.
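The positive- and negative-logic conventions of sections 2.3.1 and 2.3.2 only differ in how the same two voltage levels are named. The tiny helper below makes that mapping explicit; the function name and the 2.5 V threshold used to separate the two levels are our own illustrative assumptions.

```python
def logic_value(voltage: float, positive_logic: bool = True) -> int:
    """Map a voltage (nominally 0 V or +5 V) to a logic value.

    Positive logic: LOW voltage -> 0, HIGH voltage -> 1.
    Negative logic: LOW voltage -> 1, HIGH voltage -> 0.
    """
    is_high = voltage > 2.5          # assumed mid-point threshold, for illustration
    if positive_logic:
        return 1 if is_high else 0
    return 0 if is_high else 1

print(logic_value(5.0), logic_value(0.0))        # positive logic: 1 0
print(logic_value(5.0, positive_logic=False))    # negative logic: 0
```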


2.4 Number Systems :
Definition :
• A number system defines a set of values used to represent a quantity.
• We talk about the number of people attending class, the number of modules taken per student, and we also use numbers to represent the grades obtained by students in tests.
• The study of number systems is not just limited to computers.
• We apply numbers every day, and knowing how numbers work will give us an insight into how a computer manipulates and stores numbers.

2.4.1 Important Definitions Related to All Numbering Systems :
• All the numbering systems have a few common elements as follows :

1. Radix or Base :
• The number of values that a digit (one character) can have is equal to the base of the system. It is also called the Radix of the system.
For example, for a decimal system the base is “10” because every digit can have 10 distinct values (0, 1, 2, ….., 9).
• The largest value of a digit is always one less than the base.
For example, the largest digit in a decimal system is 9 (one less than the base 10).

2. Weight :
• Each place (position or column number) represents a different multiple of the base or radix. These multiples are also called weighted values or weights.
• That means the numbers have positional importance. For example, consider the decimal number (349.25)10 shown in Fig. 2.4.1.

(C-5) Fig. 2.4.1 : Numbers have positional importance

• Hence we can implement a common rule for all the numbering systems as follows.
• For a general number, we have to multiply each digit by some power of the base (B) or radix, as shown in Fig. 2.4.2.

(C-6) Fig. 2.4.2

3. Column numbers :
• The column number is the number assigned to the digits placed in relation with the decimal point.
• Column numbers to the left of the decimal point start with 0 and go up (0, 1, 2, …..) as shown in Fig. 2.4.2.
• Column numbers to the right of the decimal point start from – 1 and become more and more negative (– 1, – 2, – 3, – 4, ….) as shown in Fig. 2.4.2.

2.4.2 Various Numbering Systems :
• The various numbering systems used in practice and their bases are as shown in Table 2.4.1.

Table 2.4.1 : Various number systems and their bases
Name of number system | Base
Binary                | 2
Octal                 | 8
Decimal               | 10
Duodecimal            | 12
Hexadecimal           | 16

2.5 The Decimal Number System :

Definition :
 The number system which has a base of 10 is called as
the decimal numbering system.
 We are all familiar with counting and mathematics that
uses this system.
 Looking at its make up will help us to understand other
numbering systems.

2.5.1 Characteristics of a Decimal System :

Some of the important characteristics of a decimal
system are :
1. It uses the base of 10.
2. The largest value of a digit is 9.
3. Each place (column number) represents a different
multiple of 10. These multiples are also called as
weighted values. The weighted values of each position
are as shown in Fig. 2.5.1.

(C-8) Fig. 2.5.1 : Positions and corresponding weighted values
for a decimal system

Most Significant Digit (MSD) :
 The leftmost digit having the highest weight is called as
the most significant digit of a number.
Least Significant Digit (LSD) :
 The rightmost digit having the lowest weight is called as
the least significant digit of a number.

Ex. 2.5.1 : Represent the decimal number 532.86 in terms
of powers of 10.
Soln. :
The required representation is shown in Fig. P. 2.5.1.

(C-9) Fig. P. 2.5.1

2.6 The Binary Number System :

Definition :
 A number system with a radix 2 is called as the binary
number system.
 Most modern computer systems use the binary logic for
their operation.
 A computer cannot operate directly on the decimal
number system.
 A binary number system uses only two digits, namely 0
and 1.
 The binary number system works like the decimal
number system except for one change : it uses the
base 2.
 Hence the largest value of a digit is 1 and the number of
values a digit can assume is two, i.e. 0 and 1.
 The weighted values for different positions for a binary
system are as shown in Fig. 2.6.1.

(C-10) Fig. 2.6.1 : Weights for different
positions for a binary system

 The binary digits (0 and 1) are also called as bits. Thus
the binary system is a two-digit (two-bit) system.
 The leftmost bit in a given binary number, with the
highest weight, is called as the Most Significant Bit (MSB),
whereas the rightmost bit in a given number, with the
lowest weight, is called as the Least Significant Bit (LSB).

Ex. 2.6.1 : Express the binary number 1011.011 in terms
of powers of 2.
Soln. :
Step 1 : Express the given number in powers of 2 :

(C-11) Fig. P. 2.6.1


2.6.1 Binary Number Formats : 4. Each digit has a different multiple of base. This is as
shown in Fig. 2.7.1.
 We typically write binary numbers as a sequence of bits
(bits is short for binary digits).

 We have defined boundaries for these bits. These


boundaries are :

(C-7762) Table 2.6.1 : Binary number formats

(C-13) Fig. 2.7.1 : Weights for different positions for an

ns e
octal system

io dg
Ex. 2.7.1 : Represent the octal number 645 in power of 8.
Soln. :
Representation in power of 8 :

2.7 at le
Octal Number System :
ic w
Definition :
(C-14) Fig. P. 2.7.1
 A number system with a radix 8 is called as the octal
2.8 Hexadecimal Number System :
bl no

number system.

Features : Definition :

 The important features of the octal number systems are  A number system with a radix 16 is called as the
Pu K

as follows : hexadecimal number system.

1. Base : The Base used for octal number system is 8. Features :


ch

2. The number of values assumed by each digit :  The important features of a hexadecimal number system
are as follows :
 Each digit in the octal system will assume 8 different
values from 0 to 7 (0, 1, 2,….., 6, 7). 1. Base : The base of hexadecimal system is 16.
Te

3. The largest value of a digit : 2. Number of values assumed by each digit :

 The largest value of a digit in the octal system will be 7.  The number of values assumed by each digit is 16.
That means the octal number higher than 7 will not be
 The values include digits 0 through 9 and letters A, B, C,
8, instead of that it will be 10.
D, E, F. Hence the sixteen possible values are :
 Table 2.7.1 gives you a clear idea about this.
0123456789ABCDEF
(C-7770) Table 2.7.1 : Octal numbers
 0 represents the least significant digit whereas F
represents the most significant digit.

 The base of 16 needs 16 digits. Hence it borrows 0


through 9 but needs another 6.

 For these the first six letters A, B, C, D, E, F are used.


Here A represents 10, B represents 11 and so on.

 The hexadecimal digits and their values are as shown in


Table 2.8.1.


(C-7772) Table 2.8.1 : Hexadecimal digits and their values Ex. 2.8.1 : Represent the hexadecimal number 6DE in the
powers of 16.
Soln. :

Representation in the powers of 16 :


(C-8060) Table 2.8.2 : Hexadecimal numbers

ns e
(C-17) Fig. P. 2.8.1

io dg
2.9 Conversion of Number Systems :

 The conversion of a number in base “r” to decimal is

at le 
done by expanding the given number in a power series
and adding all the terms.

In the subsequent sections we are going to present a


ic w
 When dealing with large values, binary numbers quickly general procedure for decimal to any base (radix)

become too unwieldy. conversion.


bl no

 The hexadecimal (base 16) numbering system solves  If the given number includes the radix point, then it is
this problem. necessary to separate the number into an integer part

 Hexadecimal numbers offer the two features : and a fraction part.


Pu K

1. Hex numbers are very compact.  Then each part should be converted by considering

2. It is easy to convert from hex to binary and binary separately.


ch

to hex.
2.10 Conversions Related to Decimal
Largest value of a digit : System :
 The largest value of a digit in the hexadecimal number
Te

system is 15 and it is represented by F.  In this section we will perform the following conversions
 The hexadecimal number higher than F will be 10. related to the decimal system :
Table 2.8.2 gives you a clear idea about hexadecimal 1. Decimal to other systems.
numbers.
2. Other systems to decimal.
 The largest two digit hexadecimal number is FF which
corresponds to 255 decimal. The next higher number 2.10.1 Conversion from any Radix r to
Decimal :
after FF is 100.
Positional weights :  The general procedure for conversion from binary to
 The positional weights for a hexadecimal number to the decimal is as given below :
left and right of decimal point are as shown in Fig. 2.8.1. Steps to be followed :

Step 1 : Note down the given number.

Step 2 : Write down the weights corresponding to different


positions.

Step 3 : Multiply each digit in the given number with the


(C-15) Fig. 2.8.1 : Positional weights for a hexadecimal
corresponding weight to obtain product numbers.
system


Step 4 : Add all the product numbers to get the decimal Hex to decimal :

equivalent. Ex. 2.10.4 : Convert the hex number (4C8.2)16 into its
 The following example demonstrates the conversion of equivalent decimal number.
a binary, octal and hex number to its decimal Soln. :
equivalent.

Binary to decimal :

Ex. 2.10.1 : Convert the binary number 1 0 1 1 . 0 1 into its

ns e
decimal equivalent.
Soln. :

io dg
Steps 1, 2 and 3 :

(C-21) Fig. P. 2.10.4

at le Ex. 2.10.5 : Perform the following operation :


(1011.101)2 = (________)10
Dec. 11, 8 Marks.
ic w
Soln. : Solve it yourself.
Ans. :
(1011.101)2 = (11.625)10
bl no

(C-19)
Ex. 2.10.6 : Perform the following operations :
Step 4 : Addition : (1001.10)2 = (______)10
(May 12, 2 Marks)
 (1 0 1 1 . 01)2 = (11.25)10 …Ans.
Soln. : Solve it yourself.
Pu K

Octal to decimal : Ans. :

Ex. 2.10.2 : Convert the octal number (314) into its (1001.10)2 = (9.5)10
ch

8
decimal equivalent. 2.10.2 Conversion from Decimal to Other
Soln. : Systems :

 If the given decimal number consists of a decimal point,


Te

then we have to first separate out the integer and


fractional parts.

 Then convert them separately to the desired radix, and


(C-6365) combine the converted parts to obtain the complete
 (314)8 = (204)10 …Ans. converted number.

Ex. 2.10.3 : Convert the octal number (365.24)8 into its  The procedures for converting the integer part and the
equivalent decimal number. fractional part are completely different from each other.
Soln. : 2.10.2.1 Successive Division for Integer Part
Conversion :

 Follow the procedure given below to convert the integer


part of the given decimal number into any radix “r”.
Steps to be followed :

(C-20)
Step 1 : Divide the integer part of given decimal number

 D = (245.3125)10 by the base and note down the remainder.


Step 2 : Continue to divide the quotient by the base (r)


until there is nothing left, noting the remainders
from each step.

Step 3 : List the remainder values in reverse order from


the bottom to top to find the equivalent.

 This procedure will be best understood by going


through the following illustrative examples.

ns e
 (204)10 = (314)8 …Ans.
Decimal to Binary :
(C-24) Fig. P. 2.10.8 : Decimal to octal conversion

io dg
Ex. 2.10.7 : Convert (105)10 to the equivalent binary
Ex. 2.10.9 : Do the required conversions for the following
number.
number :
Soln. : (1000)10 = (_______)8 Dec. 11, 6 Marks.


at le
Refer Fig. P. 2.10.7 which shows a simpler method. We
divide the given number by the radix or base of binary
Soln. :
(1000)10 = (_______)8
ic w
system which is 2.
bl no

(C-6106)

Decimal to Hex :
Pu K

Ex. 2.10.11 : Convert the decimal number 259 into its hex
ch

equivalent.

Soln. :
 The conversion takes place as follows :
Te

The base is 16 so we divide the given number by 16.

(C-3050) Fig. P. 2.10.7 : Decimal to binary conversion

Thus, (105)10 = (1101001)2 …Ans.
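Programming note :
 The successive-division recipe maps onto a short loop. The sketch below is only an
illustration (the name decimal_int_to_base is assumed, not from the text); it divides by
the base, collects the remainders, and reads them out in reverse order as Step 3 requires.

# Successive-division method of Section 2.10.2.1 (integer part only).
DIGITS = "0123456789ABCDEF"

def decimal_int_to_base(n, base):
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, base)                # divide by the base, keep the remainder
        remainders.append(DIGITS[r])
    return "".join(reversed(remainders))      # the last remainder becomes the MSD

assert decimal_int_to_base(105, 2) == "1101001"   # Ex. 2.10.7
assert decimal_int_to_base(204, 8) == "314"       # Ex. 2.10.8
assert decimal_int_to_base(259, 16) == "103"      # Ex. 2.10.11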

Decimal to Octal :

Ex. 2.10.8 : Convert (204)10 into its equivalent octal

number. (C-25) Fig. P. 2.10.11

Soln. :  (259)10 = (103)16 …Ans.

 We divide the decimal number by the radix (base) of


Ex. 2.10.12 : Do the required conversion for the following
octal system which is 8.
number :
 The required conversion is as shown in Fig. P. 2.10.8. (1024)10 = (______)16 (May 12, 2 Marks)


Soln. :
(C-3205) So (0.42)10 = (0.01101)2 …Ans.
(1024)10 = (______)16 :
Note : We could have continued further. But the
conversion is generally carried out only upto 5
digits.

Ex. 2.10.14 : Convert (0.8)10 to equivalent binary number.


 (1024)10 = (400)H ...Ans.
Soln. : (C-28)

2.10.2.2 Successive Multiplication for

ns e
Fractional Part Conversion :

io dg
 Now let us see how to convert the fractional part of a
decimal number into any other radix number.

 The general procedure for such a conversion is as


follows :

at le
Steps to be followed :

Step 1 : Multiply the given fractional decimal number by


ic w
the base (radix) r.
Step 2 : Note down the carry generated in this
bl no

 (0.8)10 = (0.11001)2 …Ans.


multiplication as MSD.
Step 3 : Multiply only the fractional number of the product Decimal to Octal :

in step 2 by the base, and note down the carry as


Ex. 2.10.15 : Convert (0.6234)10 into its equivalent octal
Pu K

the next bit to MSD.


number.
Step 4 : Repeat steps 2 and 3 upto the end. The last carry
ch

will represent the LSD of equivalent i.e. converted Soln. :


number. Refer to Fig. P. 2.10.15 for the solution.
 Let us solve some examples to understand fraction
conversion clearly.
Te

Decimal to Binary :

Ex. 2.10.13 : Convert the decimal number (0.42)10 into

binary.

Soln. :

 (0.6234)10 = (0.47713)8 …Ans.

(C-29) Fig. P. 2.10.15 : Conversion of fractional

decimal to octal
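Programming note :
 The successive-multiplication rule can be sketched the same way (an illustration of ours,
with the assumed helper name decimal_fraction_to_base); the integer carry of each
multiplication becomes the next digit, and the conversion is stopped after five digits, as
noted earlier for hand conversion.

# Successive-multiplication method of Section 2.10.2.2 (fractional part only).
DIGITS = "0123456789ABCDEF"

def decimal_fraction_to_base(fraction, base, places=5):
    digits = []
    for _ in range(places):                  # conversion carried out up to 5 digits
        fraction *= base
        carry = int(fraction)                # the integer carry is the next digit (MSD first)
        digits.append(DIGITS[carry])
        fraction -= carry                    # keep only the fractional part and repeat
    return "0." + "".join(digits)

assert decimal_fraction_to_base(0.42, 2) == "0.01101"     # Ex. 2.10.13
assert decimal_fraction_to_base(0.8, 2) == "0.11001"      # Ex. 2.10.14
assert decimal_fraction_to_base(0.6234, 8) == "0.47713"   # Ex. 2.10.15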
Decimal to Hex :

Ex. 2.10.16 : Convert the decimal fraction (0.122)10 to its


equivalent hex number.
(C-27)


Soln. :

ns e
io dg
(C-30)

at le (0.122)10 = (0.1F3B64)16

2.10.2.3 Conversion of Mixed Decimal


…Ans.
ic w
Number to Any Other Radix :
(C-32)
bl no

 We have seen the conversion of the integer and


fractional parts of a decimal number into desired radix Step 4 : Combine the results of steps 2 and 3 :

number separately.  (85.63)10 = (1010101.10100)2 …Ans.

 Now let us see the conversion of mixed decimal number Decimal to Octal :
Pu K

containing the integer and fractional parts into any


Ex. 2.10.18 : Convert (3000.45)10 into its equivalent octal
other radix number.
ch

number.
 Follow the procedure given below for such a conversion.
Soln. :
Steps to be followed :
Step 1 : Separate the integer and fractional parts :
Te

Step 1 : Separate the integer and fractional parts of the


Integer part = 3000 and fractional part = 0.45.
given decimal number.

Step 2 : Convert the integer part into desired radix.

Step 3 : Convert the fractional part into desired radix.

Step 4 : Combine the results of steps 2 and 3 to get the


final answer.

Ex. 2.10.17 : Convert (85.63)10 into its equivalent binary


number.

Soln. :

Step 1 : Separate integer and fractional parts : (C-33) Fig. P. 2.10.18

Step 4 : Combine the results of steps 2 and 3 :


Combining the results of steps 2 and 3 we get the
answer.
(C-31)  (3000.45)10 = (5670.3463)8


Decimal to Hex : 2.11 Conversion from Binary to Other


Systems :
Ex. 2.10.19 : Convert (2003.31)10 into its equivalent hex
number. 2.11.1 Conversion from Binary to Decimal :
Soln. :
We have already discussed this.
Step 1 : Separate the integer and fractional parts : (C-34)
2.11.2 Binary to Octal Conversion :

 For converting the given binary number into an

ns e
Step 2 : Convert integer part : equivalent octal number, follow the procedure given

below :

io dg
Step 1 : Divide the binary bits into groups of 3 starting
from the LSB.

at le (C-35)
Step 2 : Convert each group into its equivalent decimal.
As the number of bits in each group is restricted
to 3, the decimal number will be same as octal
ic w
Step 3 : Convert the fractional part into hex :
number.
Ex. 2.11.1 : Convert the binary number (1 1 0 1 0 0 1 0)2
bl no

into its equivalent octal number.


Soln. :
Pu K
ch

(C-36)

 (0.31)10 = (0.4F5C2)
16

Step 4 : Combine the results of steps 2 and 3 :


Te

Combining the results of steps 2 and 3 we get


 (2003.31)10 = (7D3.4F5C2) …Ans.
16
(C-37)

Ex. 2.10.20 : Convert the following numbers into equivalent Note : In the third group, there are only 2 bits. Hence we
decimal numbers :
have assumed the number to be 011 instead of 11.
1. (327.4051)8
Always add the extra zeros on the MSB side, not
2. (5A.FF)16
on LSB side.
3. (101110111)2
Ex. 2.11.2 : Convert the following binary numbers to octal
4. (3FFF)16. Dec. 12, 8 Marks)
Soln. : Solve it yourself. then to decimal. Show the steps of

Ans. : conversions.

1. (327.4051)8 = (215.50994)10 1. 11011100.101010


2. 01010011.010101
2. (5A.FF)16 = (90.9960)10
3. 10110011 Dec. 17, 6 Marks
3. (101110111)2 = (375)10
Soln. :
4. (3FFF)16 = (16383)10 1. (11011100.101010)2 = (?)8 = (?)10

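Programming note :
 The 3-bit grouping rule of Section 2.11.2, including the extra zeros that must be added on
the MSB side, can be written in a few lines of Python. This is only an illustrative sketch;
the helper name binary_to_octal is ours.

# Binary-to-octal conversion of Section 2.11.2 (integer part) :
# pad on the MSB side to a multiple of 3 bits, then read each group as one octal digit.

def binary_to_octal(bits):
    bits = bits.zfill((len(bits) + 2) // 3 * 3)     # extra zeros go on the MSB side
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

assert binary_to_octal("11010010") == "322"    # Ex. 2.11.1
assert binary_to_octal("10110011") == "263"    # Ex. 2.11.2(3)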

Step 1 : Convert binary to octal : (C-6323) 2.11.3 Binary to Hex Conversion :


 It is easy to convert from an integer binary number to
hex. This is accomplished by :

Step 1 : Divide the binary number into 4-bit sections from


the LSB to the MSB.
 (11011100.101010)2 = (334.52)8 …Ans.
Step 2 : Convert each 4-bit binary number to its hex
Step 2 : Convert octal to decimal : (C-6324) equivalent.

Ex. 2.11.3 : Convert the binary number 1010 1111 1011

ns e
0010 into the equivalent hex number.

io dg
Soln. :

 N = 192 + 24 + 4 + 0.625 + 0.03125 = 220.65625


 (334.52)8 = (220.65625)10 …Ans.
(C-6333)
2.
at le
(01010011.010101)2 = (?)8 = (?)10

Step 1 : Convert binary to octal : (C-6325)


 (1010 1111 1011 0010)2 = (AFB2)16

2.12 Conversion from Other Systems to


…Ans.
ic w
Binary System :
bl no

2.12.1 Conversion from Decimal to Binary :


 (01010011.010101)2 = (123.25)8 …Ans.
 We have already discussed this earlier.
Step 2 : Convert octal to decimal : (C-6326)
Pu K

Ex. 2.12.1 : Express the following numbers in binary.


Show your step by step equations and
calculations.
ch

1. (1010.11)Decimal
2. (428.10)Decimal Dec. 09, 6 Marks.
 N = 64 + 16 + 3 + 0.25 + 0.078125
Soln. : Solve it yourself.
= 83.328125
Te

Ans. :
 (123.25)8 = (83.328125)10 …Ans.
1. (1010.11)10 = (1111110010.0001)2
3. (10110011)2 = (?)8 = (?)10 2. (428.10)10 = (110101100.0001)2

Step 1 : Convert binary to octal : (C-6327) Ex. 2.12.2 : Express the following numbers in binary,
show the step-by-step equations and
calculations :
1. (110.110)Decimal 2. (234.234)Decimal
May 10, 6 Marks.
Soln. : Solve it yourself.
Ans. :
 (10110011)2 = (263)8 …Ans.
1. (110.110)10 = (1101110.0001)2

Step 2 : Convert octal into decimal : (C-6328) 2. (234.234)10 = (11101010.0011)2

2.12.2 Octal to Binary Conversion :


 To get the binary equivalent of the given octal number
we have to convert each octal digit into its equivalent 3-
bit binary number.
 N = 128 + 48 + 3 = 179
 (263)8 = (179)10 …Ans.  This is as explained in the following example.


Ex. 2.12.3 : Convert the octal number (364)8 into Soln. :


equivalent binary number. 1. Decimal to binary :
Soln. : Step 1 : Convert the integer :

(C-6366)

 (364)8 = (0 1 1 1 1 0 1 0 0)2 …Ans.

ns e
Ex. 2.12.4 : Convert (364.25)8 into its equivalent binary

io dg
number.

Soln. :
 Follow the same procedure explained in the previous

at le
example. (C-7434) Fig. P. 2.12.6(a)

Step 2 : Convert the fractional part :


ic w
(C-6334)
bl no

 (364.25)8 = (011 110 100 · 010 101)2 …Ans.

2.12.3 Hex to Binary Conversion :

 It is also easy to convert from an integer hex number to


Pu K

binary. This is accomplished by :


Step 1 : Convert each hex digit to its 4-bit binary
ch

(C-7435) Fig. P. 2.12.6(b)


equivalent.
 (125.12) 10 = (111101.00011)2 ...Ans.
Step 2 : Combine the 4-bit sections by removing the
spaces. 2. Octal to binary :
Te

Ex. 2.12.5 : Convert the hex number AFB2 into equivalent


binary number.
Soln. :
(C-7436)
 Each digit in the given hex number is converted into 4-
 (337.025)8 = (011 011 111. 000 010 101)2 ...Ans.
bit binary numbers as shown in Fig. P. 2.12.5.
3. Hex to binary :

(C-7437)
(C-38) Fig. P. 2.12.5 : Hex to binary conversion  (5 D B)16 = (0101 1101 1011 . 1111 1010)2 ...Ans.
Hence (A F B 2) = (1010 1111 1011 0010)2 …Ans.
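Programming note :
 The reverse direction, hex to binary, only expands every hex digit into its 4-bit code.
The sketch below is an illustration of ours (the name hex_to_binary is assumed); digits on
either side of the point are treated alike.

# Hex-to-binary conversion of Section 2.12.3 :
# replace every hex digit by its 4-bit binary code, keeping the point in place.

def hex_to_binary(hex_number):
    return "".join("{:04b}".format(int(d, 16)) if d != "." else "."
                   for d in hex_number.upper())

assert hex_to_binary("AFB2") == "1010111110110010"            # Ex. 2.12.5
assert hex_to_binary("5DB.FA") == "010111011011.11111010"     # Ex. 2.12.6(3)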
16
Ex. 2.12.7 : Express the following numbers in binary
Ex. 2.12.6 : Convert the following numbers in Binary format. Write step by step solution.
form : 1. (7762)octal
1. (125.12)10 = (?)2
2. (432A)hex
2. (337.025)8 = (?)2
3. (2946)decimal
3. (5DB.FA)16 = (?)2 Dec. 18, 6 Marks
4. (1101.11)decimal Dec. 10, 12 Marks.


Soln. : Solve it yourself. 2. Octal to hex :


Ans. :
 (777)8 = (111 111111)8 ...Ans.
1. (7762)octal = (111111110010)binary
 For equivalent hex group the binary number into group
2.  (432A)hex = (0100001100101010)binary
of 4.
3.  (2946)10 = (101110000010)2
(C-1948)
4. (1101.11)10 = (10001001101.0001)2

2.13 Conversion from Octal to Other


Systems :  Equivalent hex is 1FF.

ns e
 We have already discussed the following two 3. Octal to decimal :

io dg
conversions :  (777)8 = (1FF)16 ...Ans.
1. Octal to decimal 2. Octal to binary. (C-1949)

2.13.1 Octal to Hex Conversion :



at le
For converting octal to hex, follow the steps given
below : = 448 + 56 + 7 = 511
 (777)octal  (1FF) Hex  (111111111) Binary  (511) Decimal
ic w
Step 1 : Convert the given octal number into equivalent
binary. ...Ans.
Step 2 : Then convert this binary number into hex.
bl no

Ex. 2.13.3 : Do the required conversions for the following


 The octal to hex conversion is demonstrated in numbers :
Fig. 2.13.1. (377)8 = (________)16 Dec. 11, 2 Marks.
Given octal number = (436) Soln. : Solve it yourself.
Pu K

Step 1 : Convert octal to binary : Ans. :


(377)8 = (FF)16
ch

(C-6335) Ex. 2.13.4 : Do the required conversion for the following


number :
Step 2 : Convert binary to hex :
(36)8 = (_______)16 (May 12, 2 Marks)
Binary  (1 0 0 0 1 1 1 1 0)2
Te

Soln. : Solve it yourself.


Add three zeros on extreme left (on MSB side) to get,
Ans. :
(0001 0001 1110)2
(36)8 = (1E)16

2.13.2 Conversion from Other Systems to


Octal :
 (436)8 = (1 1 E)16 …Ans.
 We have already discussed the following two
(C-6336) Fig. 2.13.1 conversions :
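Programming note :
 The octal-to-hex route through binary (Steps 1 and 2 above) is easy to automate for
integer octal numbers. The following sketch is ours, with the assumed helper name
octal_to_hex; leading zeros of the final hex result are dropped.

# Octal-to-hex conversion of Section 2.13.1 : go through binary as the middle step.

def octal_to_hex(octal_number):
    bits = "".join("{:03b}".format(int(d, 8)) for d in octal_number)   # Step 1 : octal to binary
    bits = bits.zfill((len(bits) + 3) // 4 * 4)                        # pad on the MSB side
    hex_digits = "".join("0123456789ABCDEF"[int(bits[i:i + 4], 2)]     # Step 2 : binary to hex
                         for i in range(0, len(bits), 4))
    return hex_digits.lstrip("0") or "0"

assert octal_to_hex("436") == "11E"   # worked example of Fig. 2.13.1
assert octal_to_hex("777") == "1FF"   # Ex. 2.13.2
assert octal_to_hex("377") == "FF"    # Ex. 2.13.3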
Ex. 2.13.2 : For a maximum 3-digit octal number obtain 1. Decimal to octal
equivalent hex, binary, decimal number. 2. Binary to octal.
May 11, 3 Marks.
Hex to octal conversion :
Soln. :
1. Octal to binary :  For the hex to octal conversion follow the steps given
below :
...Ans.
Step 1 : Represent each hex digit by a 4-bit binary number.
...Ans. Step 2 : Combine these 4-bit binary sections by removing
the spaces.
(C-6337)


 (0.12 E)16 = (0.0456) …Ans.


Step 3 : Now group these binary bits into groups of 3 bits, 8

starting from the LSB side. Note : The bits are grouped starting from the fractional
Step 4 : Then convert each of this 3-bit group into an octal point and moving towards right.
digit.
Conversion of mixed hex number to octal :
Ex. 2.13.5 : Convert the hex number 4CA into its
equivalent octal form.  The procedure to be followed for conversion of mixed
Soln. : hex number is same as the one discussed for fractional
Step 1 : Convert (4CA)16 into binary :
hex conversion.

ns e
4 C A
Ex. 2.13.7 : Convert the hex number (68.4B) into
16

io dg
0100 1100 1010 equivalent octal number.
Step 2 : Combine the 4 bit binary sections by removing Soln. :
spaces :
 (4 C A) = (0100 1100 1010)2
16

at le
Step 3 : Group these binary bits into groups of 3 bits :
(C-6368)

3. Now put additional zeros to extreme left and right :


ic w
(C-39)
bl no

(C-42)
Step 4 :
 (4 C A)16 = (2 3 1 2) …Ans. 4. Form groups of 3 bits :
8

Fractional hex to octal conversion :


Pu K

 To convert the fractional hex number into octal we use


the following steps : (C-43)
ch

Step 1 : Convert the given fractional hex number into its 5. Convert into octal :
equivalent binary number.  (68.4B)16 = (150.226)8 …Ans.
Step 2 : Group the binary bits into groups of 3 bits.
Ex. 2.13.8 : Convert the following octal numbers into its
Te

Step 3 : Convert each group of 3 bits into an octal digit.


equivalent decimal and hex :
Ex. 2.13.6 : Convert (0.12E) into equivalent octal
16 1. (555)octal
number.
2. (777)octal. (May 10, 6 Marks)
Soln. :
Soln. : Solve it yourself.
Step 1 : Convert each hex digit into 4-bit binary word :
Ans. :
1. (555)octal = (365)10 = (16d)H
2. (777)octal = (511)10 = (1FF)H

Ex. 2.13.9 : Convert the following numbers to octal form.


(C-40) Show the steps of conversion :

Step 2, 3 : Group the binary bits into groups of 3 bits 1. (111110001.10011001101)2


and convert each group into an octal number : 2. (3287.51)10
3. (0.BF85)16
4. (1234)16. (Dec. 12, 8 Marks)

(C-41)
Soln. : Solve it yourself.


Ans. : = 192 + 40 + 7 + 0.25


1. (111110001.10011001101)2 = (761.4632)8 ...Ans. = 239 + 0.25 = 239.25
2. (3287.51)10 = (6327.4050)8 ...Ans.  (357.2)8 = (239.25)10 ...Ans.
3. (0.BF85)16 = (0000.1011111110000101)2 ...Ans. 2. (453.54)8 = (?)16 = (?)10 = (?)2 :
4. (1234)16 = (11064)8 ..Ans.
Step 1 : Convert octal to binary :
2.14 Conversions Related to
Hexadecimal System :

ns e
2.14.1 Other Systems to Hex :

io dg
(C-6340)
 We are supposed to discuss the following conversions :
 (453.54)8 = (100101011.101100)2 ...Ans.
1. Decimal to Hex
Step 2 : Convert octal to hex :
2. Binary to Hex 3. Octal to Hex
Binary = (100101011.101100)2

at le
We have already discussed them.

Ex. 2.14.1 : Convert the following number into its


Add three zeros
(000100101011.1011)2
on extreme left to get
ic w
equivalent hexadecimal, decimal and binary
number (show step-by-step process of
bl no

conversion) :
1. (357.2)8 2. (453.54)8
(C-6341)
May 14, 6 Marks)
 (453.54)8 = (12B.B)16 ...Ans.
Soln. :
Pu K

1. (357.2)8 = (?)16 = (?)10 = (?)2 : Step 3 : Convert octal to decimal :


Step 1 : Convert octal to binary :
ch

(C-6338) (C-806)
Te

 (357.2)8 = (011101111.010)2 ...Ans. = 256 + 40 + 3 + 0.625 + 0.0625

Step 2 : Convert octal to hex : = 299.6875


Binary = (011101111.010)2  (453.54)8 = (299.6875)10 ...Ans.

Add one zero on extreme right side to get


Ex. 2.14.2 : Do the following :
(11101111.0100)
1. (735.25)10 = (?)16
2. (101011.111011)2 = (?)8 (?)16
May 17, 2 Marks

(C-6339) Soln. :

 (357.2)8 = (EF.4)16 ...Ans. 1. (735.25)10 = (?)16 :

Step 3 : Convert octal to decimal : Step 1 : Separate integer and fractional part :

(C-4807)

(C-5902)


Step 2 : Convert the integer part :  (357.3)8 = (011101111.011)2 ...Ans.

Step 2 : Convert octal to hex :


Binary = (011101111.011)2

Add one zero on extreme right side to get


(11101111.0110)

ns e
(C-5903)

Step 3 : Convert the fractional part : (C-6339(a))

io dg
 (357.3)8 = (EF.6)16 ...Ans.

Step 3 : Convert octal to decimal :

at le (C-5904)
ic w
Step 4 : Combine results of steps 2 and 3 : (C-4807(a))

(735.25)10 = (2DF.40)16 …Ans. = 192 + 40 + 7 + 0.375


bl no

2. (101011.111011)2 = (?)8 (?)16 = 239.375

Step 1 : Convert binary to octal :  (357.3)8 = (239.375)10 ...Ans.


Pu K

Ex. 2.14.4 : Convert the following numbers to


hexadecimal form. Show the steps of
conversion :
ch

1. (675.625)10 2. (451)8
(C-5907) 3. (95.5)10 4. (11001011101)2.
(101011.111011)2 = (53.73)8 …Ans. (Dec. 12, 8 Marks)
Soln. : Solve it yourself.
Te

Step 2 : Convert binary to hexadecimal : (C-5908)


Ans. :
1.  (675.625)10 = (2A3.A0)16

2.  (451)8 = (129)16

3.  (95.5)10 = (5F.80)H
(C-5706)

 (101011.111011)2 = (2B.EC)16 …Ans. 4.  (11001011101)2 = (65D)16

Ex. 2.14.5 : Convert the following decimal numbers into


Ex. 2.14.3 : Convert the following octal number into its
its equivalent binary, hexadecimal and octal
equivalent Binary, Decimal and Hexadecimal numbers : 1. 456 2. 25.55.
(357.3)8. Dec. 19, 6 Marks
May 07, 6 Marks.
Soln. :
Soln. : Solve it yourself.
(357.3)8 = (?)16 = (?)10 = (?)2 :
Ans. :
Step 1 : Convert octal to binary : (C-6338(a))
1. (456)10 = (111001000)2 = (710)8, = (1C8)16

2. (25.55)10 = (11001  10001)2 = (31.431)8,

= (19.8CC)16


Ex. 2.14.6 : Convert the following octal numbers into its Step 2 : Group these binary bits into groups of 3 bits :
equivalent Hexadecimal, Binary and Decimal
numbers :

1. (76)8 2. (0.7634)8 3. (1567)8


4. (65.04)8 Dec. 08, 12 Marks. (C-1931)

Soln. : Solve it yourself.  (ABC)16 = (5274)8 …Ans.


Ans. :
Conversion of hex to decimal :
1. (76)8 = (111110)2 = (3E)16 = 56 + 6 = (62)10

ns e
2. (0.7634)8 = (0.111110011100)2 = (0.F9C)16 =

io dg
(0.9755)10

3. (1567)8 = (001101110111)2 = (377)16 = (887)10

4. (65.04)8 = (110101.000100)2 = (35.10)16 = 53.0625


(C-1932)


at le
2.14.2 Hex to Other Systems :

We are supposed to discuss the following conversions : 2.


 (ABC)16 = (2748)10

(DEF)Hex :
…Ans.
ic w
1. Hex to Decimal. Conversion of hex to octal :

2. Hex to Binary Step 1 : Convert each hex digit into 4-bit binary word :
bl no

3. Hex to Octal.

 We have already discussed them.


Pu K

Ex. 2.14.7 : What is the maximum equivalent decimal


(C-1933)
number represented by its maximum
Step 2 : Group these binary bits into groups of 3 bits :
ch

equivalent 4-digit Hex number ? Also convert


the following Hex numbers to get its
equivalent octal and decimal :
Te

1. (ABC)Hex. (C-1934)

2. (DEF)Hex. Dec. 09, 8 Marks.  (DEF)16 = (6757)8 …Ans.

Soln. : Conversion of hex to decimal :


The maximum equivalent decimal number
represented by its maximum equivalent 4 digit hex
number i.e. (FFFF)16 is (65535)10.

1. (ABC)Hex :
(C-1935)
Conversion of hex to octal :
 (DEF)16 = (3567)10 …Ans.
Step 1 : Convert each hex digit into 4-bit binary word :
Ex. 2.14.8 : Convert the following numbers, show all the
steps :
1. (101101.10101)2 = ( )10
2. (247)10 = ( )8
(C-1930)
3. (0.BF85)16 = ( )8 Dec. 14, 6 Marks)


Soln. : Step 2 : Convert the integer part :

1. (101101.10101)2 = (?)10

Convert binary to decimal :

(C-5113)

 (2598)10 = (A26)16
(C-4936)

N = 32 + 8 + 4 + 1 + 0.5 + 0.125 + 0.03125 Step 3 : Convert the fractional part :

ns e
N = 45.65625

io dg
 (101101.10101)2 = (45.65625)10 ...Ans.

2. (247)10 = (?)8 :

Convert decimal to octal :

at le (C-5114)

 (0.675)10 = (0.ACCC)16
ic w
Step 4 : Combine results of steps 2 and 3 :
(C-4937) (2598.675)10 = (A26.ACCC)16 ...Ans.
bl no

 (247)10 = (367)8 ...Ans.


2. (110101.101010)2 = ( ? )8 :
3. (0.BF85)16 = (?)8 :

Convert hex to octal :


Pu K
ch

(C-6342) (C-5115)

3. Form groups of 3 bits :  (110101.101010)2 = (65.52)8 ...Ans.


Te

3. (A72E)16 = ( ? )8 :

Step 1 : Convert hex to binary :


(C-4938)

 (0.BF 85) = (0.137605)8 ...Ans.

Ex. 2.14.9 : Convert the following numbers, show all


steps :
(C-5116)
1. (2598.675)10 = ( ? )16
Step 2 : Combine the 4 bit binary bits into groups of 3
2. (110101.101010)2 = ( ? )8
bits :
3. (A72E)16 = ( ? )8 Dec. 15, 6 Marks)
 (A72E)16 = (101001110010 1110)2
Soln. :
Step 3 : Group these binary bits into groups of 3 bits :
1. (2598.675)10 = ( ? )16 :
Step 1 : Separate integer and fractional parts :

(C-5117)
(C-5112)
 (A72E)16 = (123456)8 ...Ans.


Ex. 2.14.10 : Do the following conversions : 3. (1101.0011)2 = (?)10 :


1. (27.125)10 = (?)2
2. (3A.2F)16 = (?)10
3. (1101.0011)2 = (?)10
Dec. 16, 6 Marks

Soln. : (C-5706)

N = 8 + 4 + 1 + 0.125 + 0.0625 = 13.1875


1. (27.125)10 = (?)2 :
 (1101.0011)2 = (13.1875)10 …Ans.

ns e
Step 1 : Separate integer and fractional part :
Ex. 2.14.11 : Do the required conversions for the following

io dg
numbers :
(BF8)16 = (______)10 Dec. 11, 2 Marks.
(C-5703)
Soln. : Solve it yourself.

at le
Step 2 : Convert the integer : Ans. :
(BF8)16 = (3064)10
ic w
Ex. 2.14.12 : Do the required conversions for the following
numbers :
(1FFF)16 = (_______)10 (May 12, 2 Marks)
bl no

Soln. : Solve it yourself.


Ans. :
 (1FFF)16 = (8191)10
Pu K

Ex. 2.14.13 : Convert the following hexadecimal numbers


(C-5115)
into its equivalent binary, decimal and octal
Step 3 : Convert the fractional part : numbers : 1. (4A)H 2. (2E)H.
ch

May 07, 6 Marks.


Soln. : Solve it yourself.
Ans. :
Te

1. (4A)16 = (74)10 = (0100 1010)2 = (112)8


2. (2E)16 = (46)10 = (0010 1110)2 = (56)8

Ex. 2.14.14 : Express the following numbers in decimal.


Show your step by step equations and
calculations : 1. (10110.0101)2 2. (16.5)16
(C-5704)
Dec. 07, 6 Marks.
Step 4 : Combine the results of steps 2 and 3 :
Soln. : Solve it yourself.
(27.125)10 = (11011.0010)2 …Ans. Ans. :

2. (3A.2F)16 = (?)10 : 1. (10110.0101)2 = (22.3125)10


2. (16.5)16 = (22.3125)10

Ex. 2.14.15 : What is the maximum equivalent decimal


number represented by 4-digit hexadecimal
number ? Convert the following hexadecimal
(C-5705) numbers to equivalent binary and octal
 N = 48 + 10 + 0.125 + 0.0585 = 58.1835 numbers : 1. 68BE H 2. 77BA H

 (3A  2F)16 = (58.1835)10 …Ans. Dec. 07, 6 Marks.


Soln. : Solve it yourself. 2.16.1 Weighted Binary Codes :


Ans.:  Weighted binary codes are those codes which are based
1. (68BE)H = (0110 1000 1011 1110)2 = (64276)8 on the principle of positional weight.

2. (77BA)H = (0111 0111 1011 1010)2 = (73672)8  Each position of a number represents a specific weight.

2.15 Concept of Coding :  Several systems of codes are used to express the
decimal digits 0 through 9. These codes have been
Definition : listed in Fig. 2.16.2.
 When numbers, letters or text characters are  The codes 8421, 2421, 3321 …. all are the weighted

ns e
represented by a specific group of symbols, it is said codes.
that the number, letter or word is being encoded.

io dg
 In these codes each decimal digit is represented by a
 And the group of symbols is called as the code. group of four bits as shown in Fig. 2.16.2.
Binary codes :
 The digital data is represented, stored and transmitted

at le
as group of binary bits. Such a group of binary bits is
also called as binary code.
ic w
 The binary codes can be used for representing the
numbers as well as alphanumeric letters.
bl no

Applications of Binary Codes :

1. In digital communication. 2. In digital computers.

2.16 Classification of Codes :


Pu K

 Fig. 2.16.1 shows the classification of codes. The codes


ch

are broadly categorized into following six categories :


(C-73) Fig. 2.16.2
1. Weighted codes.

2. Non-weighted codes. 2.16.2 Non Weighted Codes :


Te

3. Reflective codes.  The codes in which the positional weights are not
4. Sequential codes. assigned, are known as non weighted codes.

5. Alphanumeric codes.  The examples of non-weighted codes are excess-3 and


gray codes.
6. Error detecting and correcting codes.

2.16.3 Alphanumeric Codes :

 The special code designed to represent numbers as well


as alphabetic characters are called as the alphanumeric
codes.

 Some of these codes are capable to representing some


symbols and instructions as well, in addition to the
numbers and alphabetic characters.

 Examples of alphanumeric codes are : ASCII (American


Standard Code for Information Interchange), EBCDIC
(Extended Binary Coded Decimal Interchange Code) and

(C-72) Fig. 2.16.1 : Classification of codes Hollerith code.


2.17 Binary Coded Decimal (BCD) Code :

Definition :

 BCD is the short form of “Binary Coded Decimal.” In this


code each decimal digit is represented by a 4-bit binary
(C-74) Fig. 2.17.1 : Decimal to 8-4-2-1 BCD conversion
number.
Smallest and largest number in BCD :
 Thus BCD is a way to express each of the decimal digits
 The smallest digit in BCD is (0000) i.e. 0 and the largest
with a binary code. one is (1001) i.e. 9. The next number to 9 will be (10)10

ns e
 The positional weights associated to the binary bits in which is expressed as (0001 0000) in BCD.

io dg
BCD code are 8-4-2-1 with 1 corresponding to LSB and
2.17.1 Comparison with Binary :
8 corresponding to MSB.
3 2 1 0
1. BCD is less efficient than binary :
 These weights are actually 2 , 2 , 2 and 2 which are
 Conversion of a decimal number 78 into BCD and binary
same as those used in the normal binary system.


at le
Conversion from decimal to BCD :

The decimal digits 0 to 9 are converted into a BCD,



is illustrated in Fig. 2.17.2.
Fig. 2.17.2 illustrates that in order to encode the same
decimal number, BCD needs more number of bits than
ic w
exactly in the same way as binary. binary.
 Hence BCD is less efficient as compared to binary.
 For example decimal 4 corresponds to 0100 BCD which
bl no

is same as binary.

 Similarly the BCD numbers corresponding to the


decimal numbers 0 to 9 are exactly same as the
Pu K

corresponding binary numbers. This is illustrated in


Table 2.17.1.
(C-75) Fig. 2.17.2 : BCD is less efficient than binary
ch

(C-6142)Table 2.17.1
2. BCD arithmetic is more complicated than Binary
arithmetic.
3. The advantage of a BCD code is that the conversion
from Decimal to BCD or vice versa is simpler.
Te

4. Table 2.17.2 shows the comparison of binary and BCD


Invalid BCD codes : numbers from 0 to 15 decimal.
(C-6143)Table 2.17.2
 With four bits we can represent sixteen numbers (0000
to 1111).

 But in BCD code only the first ten of these are used
(0000 to 1001).

 The remaining six code combinations ie 1010 to 1111


are invalid in BCD.

Conversion of big decimal numbers to BCD :

 To express any decimal number in BCD, simply express


each decimal digit with its equivalent 4-bit BCD code as
illustrated below.

 Fig. 2.17.1 shows the BCD codes for various decimal


numbers.
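Programming note :
 Because every decimal digit maps to its own 4-bit group, decimal-to-BCD encoding is a
one-line loop. The sketch below is an illustrative addition to the text (the name
decimal_to_bcd is ours).

# Decimal-to-BCD encoding of Section 2.17 :
# every decimal digit is replaced by its own 4-bit binary group (weights 8-4-2-1).

def decimal_to_bcd(decimal_number):
    return " ".join("{:04b}".format(int(d)) for d in str(decimal_number))

assert decimal_to_bcd(35) == "0011 0101"              # Ex. 2.17.1(a)
assert decimal_to_bcd(174) == "0001 0111 0100"        # Ex. 2.17.1(b)
assert decimal_to_bcd(2479) == "0010 0100 0111 1001"  # Ex. 2.17.1(c)
assert decimal_to_bcd(25) == "0010 0101"              # packed BCD, Table 2.17.3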


Packed BCD : Q. 3 With the help of suitable example, explain the


meaning of self complementing code.
 The BCD numbers corresponding to decimal numbers (Dec. 08, 2 Marks)
beyond 9 are called as packed BCD. Some examples of
packed BCD are presented in Table 2.17.3.  Excess – 3 is also called as XS - 3 code. It is a

nonweighted code used to express decimal numbers as


Table 2.17.3 : Examples of packed BCD

Decimal Packed BCD shown in Table 2.18.1.

25 0010 0101  The Excess-3 code words are derived from the 8421
BCD code words by adding (0011)2 or (3) 10 to each code

ns e
169 0001 0110 1001
523 0101 0010 0011 word in 8421. The excess - 3 codes are obtained as

io dg
2.17.2 Advantages of BCD Codes : follows :

1. It is very similar to decimal system. (C-6683)

2. We need to remember binary equivalents of decimal

at le
numbers 0 to 9 only.

2.17.3 Disadvantages :
 Excess – 3 codes for the single digit decimal numbers

are listed in Table 2.18.1.


ic w
1. The addition and subtraction of BCD have different (C-6146) Table 2.18.1 : Excess – 3 codes
rules.
bl no

2. The BCD arithmetic is little more complicated.

3. BCD needs more number of bits than binary to


represent the same decimal number. So BCD is less
Pu K

efficient than binary.


ch

Ex. 2.17.1 : Convert the following decimal numbers to


BCD : (a) 35 (b) 174 (c) 2479.
Soln. :
Te

 Excess – 3 is a sequential code because we get any


(C-6143(a)) code word by adding binary 1 to its previous code word
as illustrated in Table 2.18.1.
2.18 Non – weighted Codes :
 Excess – 3 is a self complementing code. This is
These codes do not work of the principle of
because in Excess – 3 we get the 9’s complement of a
positional weights. We will consider two such codes : 1.
number by just complementing each bit that means by
Excess – 3 code and 2. Gray code.
replacing a 0 by 1 and 1 by 0.
2.18.1 Excess – 3 Code :
SPPU : Dec. 04, May 07, Dec. 08. Ex. 2.18.1 : Obtain the XS - 3 code for (428)10.

University Questions. Soln. :


Q. 1 With the help of suitable example, explain the
meaning of self-complementing code.
(Dec. 04, 3 Marks)
Q. 2 What is excess-3 code of binary numbers : 0010 B,
0110 B and 0111 B. (May 07, 3 Marks) (C-6147)


 XS - 3 equivalent : 0111 0101 1011 …Ans. Step 1 : Convert decimal to binary :
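Programming note :
 Since an Excess-3 code word is just the BCD code word plus (0011)2, the coding can be
sketched as below (an illustration; the helper name decimal_to_excess3 is assumed). The
last check also shows the self-complementing property mentioned above.

# Excess-3 (XS-3) coding of Section 2.18.1 : add (3)10, i.e. (0011)2, to every decimal digit.

def decimal_to_excess3(decimal_number):
    return " ".join("{:04b}".format(int(d) + 3) for d in str(decimal_number))

assert decimal_to_excess3(428) == "0111 0101 1011"   # Ex. 2.18.1
assert decimal_to_excess3(5) == "1000"               # Ex. 2.18.2(1)
assert decimal_to_excess3(37) == "0110 1010"         # Ex. 2.18.2(2)
# Self-complementing : inverting every bit of XS-3(4) = 0111 gives 1000 = XS-3(5),
# i.e. the code of the 9's complement.
assert decimal_to_excess3(4) == "0111" and decimal_to_excess3(5) == "1000"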

Ex. 2.18.2 : Convert the following decimal numbers into


excess-3 code :

1. (5)10

2. (37)10

3. (247.6)10 (C-7146) Fig. P. 2.18.3(a)

ns e
Soln. :  (25)10 = (11001)2 …Ans.

io dg
Decimal to excess - 3 conversion Step 2 : Convert decimal to BCD :

1. N = (5)10 :  (25)10 = (00100101)BCD …Ans.

Step 1 : Write the BCD equivalent : Step 3 : Convert BCD to Excess-3 :

at le (5)10 = 0101
ic w
Step 2 : Convert to excess-3 :

Excess-3 of (5)10 = 0101 + 0011 = 1000 …Ans.


bl no

2. N = (37)10 :
(C-7147) Fig. P. 2.18.3(b)

 (25)10 = (58)EXCESS-3 …Ans.


Pu K

2.19 Gray Code :


SPPU : May 06, Dec. 06, May 07.
ch

University Questions.

(C-2956) Q. 1 Prepare a table of 4 bit gray code along with


Te

3. N = (247.6)10 : relationship with binary code. (May 06, 2 Marks)

Q. 2 What will be the gray code of any given 4-bit binary

number ? Show the truth table. (Dec. 06, 6 Marks)

Q. 3 What is gray code ? What will be the gray code of

binary number 1100B, 0111B and 1101 B


(May 07, 6 Marks)

 Gray code is another non-weighted code. It is not an


(C-2957)
arithmetic code.

Ex. 2.18.3 : Do the following : Convert the decimal  It has a very special feature that only one bit in the gray
number 25 into Binary format, Excess-3 code will change, each time the decimal number is
format and BCD format. May 18, 3 Marks incremented as shown in Table 2.19.1.

Soln. :  As only one bit changes at a time, the Gray code is

(25)10 = (?)2 = (?)EXCESS-3 = (?)BCD called as a unit distance code. The Gray code is a
Cyclic code.


(C-91) Table 2.19.1  Hence a 180 error in the disc position would result and
the user would not even notice it, because in binary
codes, any number of digits can change their values at a
given instant of time.

 This problem can be eliminated by using the Gray code


instead of binary.

 In Gray code, only one bit changes at a time. So in a 3-


bit code the probability of error introduction will be

ns e
reduced to 33% whereas in a 4-bit code it reduces to
25%. This is the advantage of using gray code.

io dg
2.19.3 Gray-to-Binary Conversion :
Steps :

 For gray to binary conversion, follow the steps given

at le below :

Step 1 : The MSB of Gray and binary are same. So write it


ic w
directly.
Step 2 : Add (mod-2 addition) binary MSB to the next bit
bl no

of Gray code. Note down the result and ignore


the carries.
2.19.1 Application of Gray Code :
Step 3 : Continue this process until the LSB is reached.
SPPU : May 06, May 12.
 Note that the addition mentioned in step 2 is a modulo
Pu K

University Questions.
2 addition (MOD - 2). It is equivalent to an Ex-OR
Q. 1 How gray codes are useful in digital system ?
operations hence denoted by  sign instead of simple
ch

(May 06, 2 Marks) (+) sign.


Q. 2 State the applications of gray code.
 The rules of MOD-2 addition are as follows :
(May 12, 2 Marks)
Table 2.19.2 : Rules of MOD-2 addition
 Gray code is popularly used in the shaft position
Te

encoders. 0  0 = 0
 A shaft position encoder produces a code word which 0 1 = 1

represents the angular position of the shaft.
1  0 = 1
2.19.2 Advantages of Gray Code :
1  1 = 0
SPPU : May 06.
 The gray to binary conversion is illustrated in the
University Questions.
following example.
Q. 1 What are the advantages of gray code over pure
Ex. 2.19.1 : Convert 1110 gray to binary.
binary code ? (May 06, 2 Marks)
Soln. :
 Consider the disc which produces the binary codes.
Imagine a situation in which the existing position is 111
and the position is about to change to 000.

 If one light source out of the three is slightly ahead of


the others, then the detector output would be “011”
instead of 111 or 000.
(C-94)
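Programming note :
 Both Gray-code conversions of Sections 2.19.3 and 2.19.4 reduce to repeated mod-2
(EX-OR) additions, and can be sketched as follows (an illustration of ours; the function
names are assumed).

# Gray-code conversions.
# Binary to Gray : keep the MSB, then EX-OR each binary bit with the bit to its left.
# Gray to binary : keep the MSB, then EX-OR each new binary bit with the next Gray bit.

def binary_to_gray(bits):
    gray = bits[0]
    for i in range(1, len(bits)):
        gray += str(int(bits[i - 1]) ^ int(bits[i]))     # mod-2 addition (EX-OR)
    return gray

def gray_to_binary(gray):
    binary = gray[0]
    for i in range(1, len(gray)):
        binary += str(int(binary[i - 1]) ^ int(gray[i]))
    return binary

assert gray_to_binary("1110") == "1011"           # Ex. 2.19.1
assert binary_to_gray("1011") == "1110"           # Ex. 2.19.2
assert binary_to_gray("11001100") == "10101010"   # Ex. 2.20.9(1)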


In general we can say that the conversion of a 4-bit 3. BCD to Excess – 3.


gray number G3 G2 G1 G0 into a 4-bit binary number B3 B2 4. Excess – 3 to BCD.
B1 B0 takes place as follows : 2.20.1 Binary to BCD Conversion :
B3 (MSB) = G3 (MSB),
 For the binary to BCD conversion the steps to be
B2 = B3  G2 followed are as given below :

B1 = B2  G1, Step 1 : Convert the Binary number to decimal.

B0 = B1  G0 Step 2 : Convert decimal number into BCD.

ns e
2.19.4 Binary to Gray Conversion : Ex. 2.20.1 : Convert the binary number (110101)2 into BCD.

io dg
Soln. :
Steps :
Step 1 : Conversion from binary to decimal :
 A straight binary number can be converted in gray by
following the steps given below :

at le
Step 1 : Record the MSB as it is, because the MSB of
Gray is same as that of binary. (C-2255)
ic w
Step 2 : Add this bit to the next position, note down the
Step 2 : Decimal to BCD :
sum and neglect any carry.
bl no

Step 3 : Repeat step 2.


(C-2256)
Ex. 2.19.2 : Convert binary 1011 to gray.
Soln. :  (110101)2 = (0101 0011)BCD …Ans.
Pu K

2.20.2 BCD to Binary Conversion :

Steps to be followed :
ch

Step 1 : Convert the BCD number into decimal.

Step 2 : Convert decimal number to binary.


Te

(C-95) Ex. 2.20.2 : Convert (0101 0011)BCD into binary.

In general the conversion of binary to gray takes Soln. :

place as follows : Step 1 : Convert BCD to decimal :

G3 (MSB) = B3 (MSB),

G2 = B3  B2 (C-2257)

G1 = B2  B1,  Equivalent decimal number is 53.


(LSB) G0 = B1  B0 Step 2 : Convert decimal to binary :
 The  sign in the above expressions, also indicates an  Use the long division method for decimal to binary
EX-OR (exclusive OR) function. conversion to get,
 (53)10 = (110101)2
2.20 Code Conversions :
 (0101 0011)BCD = (110 101)2
 In this section we are going to learn the following code
conversions : 2.20.3 BCD to Excess – 3 :
1. Binary to BCD.  For BCD to Excess – 3 conversion, follow the steps given
2. BCD to Binary. below :

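Programming note :
 The code conversions of Section 2.20 all go through decimal as the intermediate step.
The sketch below (ours; the helper names are assumed) shows BCD-to-binary and
BCD-to-Excess-3 conversions.

# Code conversions of Section 2.20, using decimal as the intermediate step.

def bcd_to_binary(bcd):
    # Each 4-bit group is one decimal digit (Step 1), then decimal goes to binary (Step 2).
    decimal = int("".join(str(int(group, 2)) for group in bcd.split()))
    return bin(decimal)[2:]

def bcd_to_excess3(bcd):
    # Add (0011)2, i.e. 3, to every 4-bit BCD group.
    return " ".join("{:04b}".format(int(group, 2) + 3) for group in bcd.split())

assert bcd_to_binary("0101 0011") == "110101"         # Ex. 2.20.2
assert bcd_to_excess3("0101 0011") == "1000 0110"     # Ex. 2.20.5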

Step 1 : Convert BCD to decimal. Ex. 2.20.7 : Represent the decimal numbers : (a) 396 and

Step 2 : Add (3)10 to this decimal number. (b) 4096 in : 1. BCD 2. Excess - 3 code.
May 08, 4 Marks.
Step 3 : Convert the decimal number of step 2 into binary, Soln. : Solve it yourself.
to get the excess – 3 code.
Ans. :
Ex. 2.20.3 : Convert (1001)BCD to excess – 3.
(396)10 = (0011 1001 0110)BCD
Soln. : (C-2258)
= (0110 1100 1001)xs – 3

ns e
(4096)10 = (0100 0000 1001 0110)BCD

= (0111 0011 1100 1001)xs – 3

io dg
Ex. 2.20.8 : Represent decimal number 327 in :
1. BCD code 2. Excess-3 code.
Dec. 08, 2 Marks.
Ex. 2.20.4 : Convert (0101 0011)BCD into excess – 3.
Soln. :
at le
(C-2259)
Soln. : Solve it yourself.
Ans. :
(327)10 = (0011 0010 0111)BCD
ic w
= (011001011010)xs – 3

Ex. 2.20.9 : Find gray codes for the following binary


bl no

numbers : 1. 11001100 2. 01011110 .


Dec. 08, 2 Marks.
Soln. : Solve it yourself.
Alternative method for BCD to XS – 3 conversion :
Pu K

Ans. :
 Add (0011)2 to each 4-bit BCD number to obtain the 1. (11001100)2 = (10101010)gray
corresponding Excess – 3 number.
ch

2. (01011110)2 = (01110001)gray

Ex. 2.20.5 : Convert BCD 0101 0011 to excess – 3. Review Questions


Soln. : (C-6372)
Q. 1 Define base of radix of a number system.
Te

Q. 2 Explain the following :

(a) Bit (b) Nibble


 (01010011)BCD = (10000110)XS – 3 …Ans.
(c) Word (d) Double word
2.20.4 Excess – 3 to BCD Conversion :
Q. 3 What are the disadvantages of a binary system ?
 Subtract (0011)2 from each 4 bit excess – 3 digit to
Q. 4 Describe the hexadecimal system.
obtain the corresponding BCD code.
Q. 5 Write a short note on : Octal system.
Ex. 2.20.6 : Obtain the BCD code equivalent of Q. 6 What is BCD code ?
(1001 1010).
Q. 7 What is excess-3 code ?
Soln. :
Q. 8 Write short note on ASCII code.

Q. 9 What are the advantages of BCD code over binary


code ?

Q. 10 What are the different types of codes used in digital


systems ? Explain them.
(C-6362)


Q. 11 What is gray code ? What are its applications ? Q. 16 Give four comparison between BCD code and Gray
code.
Q. 12 What is BCD code ?
Q. 17 Differentiate between Binary and Gray code.
Q. 13 What is excess-3 code ?
Q. 18 Distinguish between Excess-3 code and Gray code.
Q. 14 Write short note on ASCII code. Q. 19 Explain the rules of BCD addition.
Q. 15 What are the different types of codes used in digital
systems ? Explain them.

ns e


io dg
at le
ic w
bl no
Pu K
ch
Te



Unit 1

Chapter

3
ns e
io dg
at le
ic w
Binary Arithmetic
bl no
Pu K

Syllabus

Signed binary number representation and arithmetic : Sign magnitude, 1’s complement and
ch

2’s complement representation, Unsigned binary arithmetic (Addition, subtraction, multiplication and
division), Subtraction using 2’s complement; IEEE standard 754 floating point number representations.
Case study : Four basic arithmetic operations using floating point numbers in a calculator.
Te

Chapter Contents
3.1 Introduction 3.7 IEEE-754 Standard for Representing Floating
Point Numbers

3.2 Unsigned Binary Numbers 3.8 Introduction to Boolean Algebra

3.3 Sign-Magnitude Numbers 3.9 Definition of Boolean Algebra

3.4 Complements 3.10 Two Valued Boolean Algebra

3.5 2’s Complement Arithmetic 3.11 Basic Theorems and Properties of Boolean
Algebra

3.6 Floating Point Representation 3.12 Boolean Expression and Boolean Function


3.1 Introduction : unsigned arithmetic all the magnitudes were restricted


between 0 and (255)10 or (00)H to (FF)H.
 Let us develop various rules for carrying out the
3. That means all the numbers being added or subtracted
arithmetic operations such as addition, subtraction,
must be in the range 0 to 255. More important is that
multiplication and division.
the answer also should be in the range 0 to 255.
 Binary arithmetic is essential in all the digital computers
4. For the magnitudes greater than 255 we have to use 16-
and many other digital systems.
bit arithmetic.
3.2 Unsigned Binary Numbers : Overflow :

ns e
 In some applications, all the data is either positive or  If the addition or multiplication of two 8-bit numbers

io dg
results in generation of a number, greater than (255)10,
negative.
then it is said that overflow has taken place.
 Then we can just forget about the (+) or (–) signs, and
concentrate only on the magnitude (absolute value) of 3.2.2 Unsigned Binary Arithmetic :
the data.
 at le
For example, the smallest 8 bit binary number is 0000
0000 i.e. all zeros, and the largest 8 bit binary number is
 In the following subsections we will discuss the four
basic arithmetic operations on the binary numbers:
Addition, subtraction, multiplication and division
ic w
1111 1111.
 Hence the complete range of unsigned 8 bit binary 3.2.3 Binary Addition :
bl no

numbers extends from (00)H to (FF)H or from (00)10 to


 Binary addition is the key for binary multiplication,
(255)10.
subtraction and division.
It is important to note that we have not included (+) or (–)
 The four most basic cases of binary addition are shown
signs with these numbers.
Pu K

in Table 3.2.1.
 Similarly for 16-bit numbers the complete range is given
(C-6107) Table 3.2.1 : Four cases of binary addition
by,
ch

Smallest : 0000 0000 0000 0000 = (0000) H

Largest : 1111 1111 1111 1111 = (FFFF)H

 This represents the magnitude of 0 to 65,535 decimal.


Te

 For cases 1, 2 and 3 of Table 3.2.1, the binary addition


takes place by following the rules of decimal addition.
 But concentrate on case 4. Addition of binary 1 + 1
(C-6109) Fig. 3.2.1 : Unsigned 8 bit binary number
represents the combining of one pebble and one
 Data of this type is called as unsigned binary numbers
pebble to obtain a total of two pebbles.
because all of the bits in a binary number are used to
1+1 =   two pebbles
represent the magnitude of the corresponding decimal
 Since binary 10 stands for   two pebbles, the result of
number, as shown in Fig. 3.2.1.
binary addition 1 + 1 is 10.
3.2.1 Important Features of Unsigned  1 + 1 = (10)2 …(3.2.1)
Numbers :
3.2.4 Sum and Carry :
1. We can add or subtract the unsigned binary numbers if
 Thus the fourth case yields a binary two (10). When the
certain conditions are satisfied.
binary numbers are added, the fourth case in Table 3.2.1
2. The microcomputers of first generation were able to creates a sum of 0 in the given column and a carry of 1
process only 8 bits at a time. Therefore with 8-bit over to the next column.


 The four basic rules of binary addition in terms of sum 3.2.6 Subtraction and Borrow :
and carry are as follows.
 These two words will be used very frequently for the
(C-6902(a)) Table 3.2.2 : Rules of binary addition binary subtraction.
 For binary subtraction we have to remember the
following four cases given in Table 3.2.3.
(C-6343) Table 3.2.3 : Four basic rules for binary subtraction

ns e
io dg
Ex. 3.2.1 : Add the following binary numbers : 011 and
101.
Soln. :
 Consider case 4 in Table 2.1.3. It is [0 – 1]. Hence a logic

at le 
1 is borrowed.
This will change the subtraction
[0 – 1] to [10 – 1] that means [ – ] =  = 1.
from
ic w
Ex. 3.2.2 : Subtract the decimal numbers (38)10 and (29)10
(C-57)
by converting them into binary.
bl no

3.2.5 Binary Subtraction : Soln. :


Rules : Step 1 : Convert (38)10 and (29)10 into their binary
equivalents :
 In order to understand the binary subtraction, we
Pu K

Convert the given decimal numbers into binary numbers


should remember some of the important rules of
to get,
decimal subtraction.
(38)10 = (1 0 0 1 1 0)2 , (29)10 = (1 1 1 0 1)2
ch

 They are as follows :


Step 2 : Perform column by column subtraction from
1. To carry out the subtraction (A – B) where A and B are LSB to MSB :
the two single digit decimal numbers. We have to
Te

consider two cases.


2. Case I : Digit A > Digit B :

Let A = (5)10 and B = (3)10

Then A – B = () – () = 


(C-63)
 (5)10 – (3)10 = (2)10
Step 3 : Repeat the procedure for all the columns :
3. Case II : Digit A < Digit B :
 If A = (3)10 and B = (5)10 then we cannot perform (3 – 5)
because we cannot take out 5 pebbles from 3.
Therefore we have to borrow 1. After borrowing, the
subtraction is changed to

(C-64)

Column 4 : 10 – 1 = 1
(C-4898)
Column 5 : 10 – 1 – 1 =  – – = 0
 We have to do the same thing for subtracting the binary
Column 6 : 1 – 1 = 0
numbers.

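Programming note :
 The column-by-column addition and subtraction of Sections 3.2.3 to 3.2.6 can be sketched
as below (an illustration of ours; the function names are assumed, and the subtraction
assumes the minuend is not smaller than the subtrahend, as in the worked examples).

# Unsigned binary addition and subtraction, done column by column with carry and borrow.

def add_binary(a, b):
    width = max(len(a), len(b)) + 1
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, ""
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry          # 1 + 1 gives sum 0 and carry 1
        result = str(total % 2) + result
        carry = total // 2
    return result.lstrip("0") or "0"

def subtract_binary(a, b):
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, result = 0, ""
    for x, y in zip(reversed(a), reversed(b)):
        diff = int(x) - int(y) - borrow
        borrow = 1 if diff < 0 else 0            # 0 - 1 needs a borrow from the next column
        result = str(diff % 2) + result
    return result.lstrip("0") or "0"

assert add_binary("011", "101") == "1000"              # Ex. 3.2.1
assert subtract_binary("100110", "011101") == "1001"   # Ex. 3.2.2 : 38 - 29 = 9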

3.2.7 Binary Multiplication :
 The procedure used for binary multiplication is exactly the same as that for decimal multiplication.
 In fact binary multiplication is simpler than decimal multiplication because only 0s and 1s are involved.

Rules of binary multiplication :
Rules of binary multiplication are as follows :
0 × 0 = 0
0 × 1 = 0
1 × 0 = 0
1 × 1 = 1
 The multiplication process of two binary numbers has been illustrated in the following example.

Ex. 3.2.3 : Perform the following binary multiplication : 101.11 × 111.01.
Soln. :
(C-6108)
 Ans. : 1 0 1 0 0 1 . 1 0 1 1
Cross check :
A = (101.11)2 = (5.75)10 and B = (111.01)2 = (7.25)10
 A × B = (41.6875)10
(C-6344) …Ans.
Thus we have the correct answer.

Ex. 3.2.4 : Perform : (11001)2 × (101)2
Soln. :
(11001)2 × (101)2 : (C-6771)
 (11001)2 × (101)2 = (1111101)2 ...Ans.

3.2.8 Binary Division :
 The division of binary numbers takes place in a similar way as that of decimal numbers. It is called the long division procedure.

Ex. 3.2.5 : Perform the following operation without converting to any other base : (10101011)2 ÷ (101)2
Soln. :
(10101011)2 ÷ (101)2 : (C-2172)
…Ans.

Ex. 3.2.6 : Perform (11010)2 ÷ (101)2
Soln. :
(11010)2 ÷ (101)2 : (C-2174)

Ex. 3.2.7 : Perform (11001)2 ÷ (101)2
Soln. :
(11001)2 ÷ (101)2 : (C-3704)
 (11001)2 ÷ (101)2 = (101)2 ...Ans.

3.3 Sign-Magnitude Numbers :
SPPU : Dec. 07, Dec. 11.
University Questions.
Q. 1 What do you mean by signed magnitude representation of a number ? (Dec. 07, 2 Marks)
Q. 2 What are different ways of representing signed binary numbers ? Explain with examples. (Dec. 11, 6 Marks)
 If the data has positive as well as negative numbers then the signed binary numbers should be used for their representation.
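The long-hand multiplication and division of Ex. 3.2.3 to Ex. 3.2.7 can also be cross-checked programmatically. The Python sketch below is not part of the original text; the function names are illustrative and the shift-and-add / long-division logic simply mirrors the paper-and-pencil procedure.

```python
# Illustrative sketch (not from the textbook): shift-and-add multiplication and
# long division of unsigned binary strings.

def binary_mul(a, b):
    """Multiply two binary strings using shifted partial products."""
    x, y = int(a, 2), int(b, 2)
    product, shift = 0, 0
    while y:
        if y & 1:                   # if the current multiplier bit is 1,
            product += x << shift   # add the shifted multiplicand (partial product)
        y >>= 1
        shift += 1
    return bin(product)[2:]

def binary_div(a, b):
    """Long division of binary strings; returns (quotient, remainder)."""
    divisor = int(b, 2)
    quotient, remainder = 0, 0
    for bit in a:                            # bring down one dividend bit at a time
        remainder = (remainder << 1) | int(bit)
        quotient <<= 1
        if remainder >= divisor:             # divisor "goes into" the partial remainder
            remainder -= divisor
            quotient |= 1
    return bin(quotient)[2:], bin(remainder)[2:]

print(binary_mul('11001', '101'))   # 1111101        (Ex. 3.2.4 : 25 x 5 = 125)
print(binary_div('11001', '101'))   # ('101', '0')   (Ex. 3.2.7 : 25 / 5 = 5)
print(binary_div('11010', '101'))   # ('101', '1')   (Ex. 3.2.6 : 26 / 5 = 5 rem 1)
```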


 For a sign-magnitude representation, the + or – signs are also represented in the binary form i.e. by using 0 or 1. So a 0 is used to represent the (+) sign and 1 is used to represent the (–) sign.
 The MSB of a binary number is used to represent the sign and the remaining bits are used for representing the magnitude. 8-bit signed binary numbers are shown in Fig. 3.3.1.

(C-6110) Fig. 3.3.1(a) : A positive binary number
Fig. 3.3.1(b) : A negative binary number
Fig. 3.3.1 : 8-bit signed binary numbers

Definition of signed numbers :
 The numbers shown in Fig. 3.3.1 contain a sign bit followed by magnitude bits.
 Numbers represented in this form are called as sign-magnitude numbers or only sign-numbers.

3.3.1 Range of Sign-Magnitude Numbers :
 The unsigned 8-bit numbers cover the decimal range of 0 to 255.
 But in the sign-magnitude numbers, the MSB is utilized for representing the sign.
 Therefore the range gets modified (as there are only 7 bits left to represent the magnitude).
 For an 8-bit sign-magnitude number, the largest negative number is (– 127) given by,
(C-6369)
 And the largest positive number is (+ 127) given by,
(C-6369)
In this way the range of sign-magnitude 8-bit binary numbers is modified from 0 to 255 to decimal (– 127) to (+ 127).
 The largest magnitude is 127, which is approximately half of the largest magnitude obtained for unsigned binary numbers.
 With the sign-magnitude numbers, we can use the 8-bit arithmetic as long as the input data range falls in decimal – 127 to + 127.
 It is still necessary to check all the sums for overflow.
 If the magnitude of data is greater than 127 then 16-bit arithmetic should be used.
 With 16-bit numbers the range of sign-magnitude numbers extends from decimal (– 32,767) to (+ 32,767).

Advantages of sign-magnitude numbers :
1. The main advantage is their simplicity.
2. We can easily find the magnitude by deleting the sign bit.

Disadvantage :
 The sign-magnitude numbers have a limited use because they require complicated circuits.
 These numbers are often used in analog-to-digital (A to D) converters.

3.4 Complements :
 Complements are used in the digital computers in order to simplify the subtraction operation and for the logical manipulations.

3.4.1 1’s Complement :
 The 1’s and 2’s complements of a binary number are important because we can use them for representation of negative numbers.

Definition :
 The 1’s complement of a number is found by inverting all the bits in that number. This is called as taking the complement.
 This complemented value represents the negative of the original number. The 1’s complement system is very easy to implement merely using inverters.

Ex. 3.4.1 : Obtain the 1’s complement of the following numbers : (1010)2, (11010101)2.


Soln. :
(C-6279(a))

Ex. 3.4.2 : Convert the following binary numbers to 1’s complement : (a) (1101)2 (b) (1011)2
Soln. :
(a) Given number : 1 1 0 1
1’s complement : 0 0 1 0 ...Ans.
(b) Given number : 1 0 1 1
1’s complement : 0 1 0 0 …Ans.

3.4.2 Representation of Signed Numbers using 1’s Complement :
 Positive numbers in 1’s complement form are represented in the same way as the positive sign-magnitude numbers. That means (+ 5)10 can be represented as 0101.
 But the negative numbers are the 1’s complements of the corresponding positive numbers.
 For example (– 5)10 is represented as follows :
(C-69)
 Thus (– 5)10 is represented as 1010 in the 1’s complement form.

Range of 1’s complement number system :
1. From the discussion done till now, the maximum positive number in the 1’s complement form for a 4 bit number is 0111.
 For n = 4 the maximum positive number is 7, i.e. [2^(n – 1) – 1].
2. The maximum negative number (– 7 for n = 4) is represented as 1 0 0 0, i.e. – [2^(n – 1) – 1].
 The range of the 1’s complement number system is given by,
Maximum positive number = + (2^(n – 1) – 1)
Maximum negative number = – (2^(n – 1) – 1)

 Table 3.4.1 shows the full range of 4 bit numbers in the 1’s complement form.
(C-6111) Table 3.4.1
 One of the difficulties in using 1’s complement is its representation of zero.
 As seen from Table 3.4.1 both 0 0 0 0 and 1 1 1 1 represent zero. The [0 0 0 0] is called as positive zero and [1 1 1 1] is called as the negative zero.

3.4.3 2’s Complement :
Definition :
 The 2’s complement of a binary number is obtained by adding 1 to the LSB of the 1’s complement of that number.
 2’s complement = 1’s complement + 1
 The method of 2’s complement arithmetic is commonly used in computers to handle negative numbers.

Ex. 3.4.3 : Obtain the 2’s complement of (1011 0010)2.
Soln. :
(C-6345)
Hence the 2’s complement of (1011 0010)2 is (0100 1110)2.

3.4.4 Representation of Signed Numbers using 2’s Complement :
 Positive numbers in 2’s complement form are represented the same way as in the sign-magnitude and 1’s complement forms.
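A short Python sketch (not part of the text; the helper names and the fixed word lengths are assumptions made for illustration) reproduces the complement calculations of Ex. 3.4.1 to Ex. 3.4.3.

```python
# Illustrative sketch (not from the textbook): 1's and 2's complement of a
# binary string for a fixed word length.

def ones_complement(bits):
    """Invert every bit of the given binary string."""
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits):
    """2's complement = 1's complement + 1, kept to the same word length."""
    n = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (1 << n)
    return format(value, '0{}b'.format(n))

print(ones_complement('1010'))       # 0101       (Ex. 3.4.1)
print(ones_complement('11010101'))   # 00101010   (Ex. 3.4.1)
print(twos_complement('10110010'))   # 01001110   (Ex. 3.4.3)
print(twos_complement('0101'))       # 1011 = (-5)10 in 4-bit 2's complement
```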


 For example (+ 6)10 is represented as 0110 in the 2’s


complement form.
 Negative numbers are the 2’s complements of the
(C-6113)
corresponding positive numbers.
 For example (– 6)10 is represented in the 2’s  Similarly the 2’s complement of other negative numbers
are obtained.
complement form as follows :
Important observations from Table 3.4.2 :

 Two’s complement numbers have some interesting

ns e
characteristics. They are as follows :
1. There is one unique 0.

io dg
(C-6112) 2. The two’s complement of 0 is 0.
 2’s complement is another method of storing negative 3. The MSB of a sign-magnitude number cannot be used
values. In a microcomputer the positive and negative to express quantity. It can only be used as a sign bit.


at le
numbers
complement.
are represented with the help of

The representation of four bit (+) and (–) numbers using


2’s 4. There are eight negative integers, seven positive
integers and one zero, making a total of 16 unique
numbers.
ic w
2’s complement is shown in Table 3.4.2.

 The range of a 4 - bit sign-magnitude number extends


bl no

from decimal (– 7) to (+ 7). The (0)10 and positive


numbers are represented by normal binary numbers in
Table 3.4.2.
(C-6115)
Pu K

(C-6114) Table 3.4.2 : Two’s complement numbers


5. To convert a negative number to a positive number, find
its 2’s complement.
ch

6. 2’s complement of a number results in the original


number itself. For example if the given number is
(4)10 = (0 1 0 0)2 then its 2’s complement is (1 1 0 0) and
Te

the 2’s complement of (1 1 0 0) is (0 1 0 0) i.e. (4)10.

3.4.5 Signed Complement Numbers :

 When arithmetic operations are carried out in a


computer, it is more convenient to use a different
system for representing the negative numbers.
 This system is called as the signed – complement
system.
 In this system a negative number is represented by its
complement.

 But the negative numbers are represented in the 2’s  In the signed – magnitude system a number is negated

complement form. by changing its sign, but the signed complement system
will negate the number by taking its complement.
 However note that the MSB (sign bit) is not changed
 We know that a positive number always starts with a
while obtaining the 2’s complement. For example 2’s
complement of (– 6)10 is obtained as follows : zero (plus) in its MSB position, therefore the


complement will always start with a 1, indicating a Observations :


negative number. 1. Table 3.4.3 shows that the positive numbers with all the
 The signed – complement system can use either 1’s three representations are identical and they have a 0 in
complement or 2’s complement, but the 2’s their left most position.
complement is most commonly used. 2. The signed 2’s complement system has only one
 As an example let us see how to represent – (7)10 using representation for 0 which is always positive but the
different techniques using 8 binary bits. other two systems have either a positive zero or a

ns e
negative zero.

3. All the negative numbers have a 1 in the leftmost bit

io dg
position. This is how we can distinguish the negative
numbers from the positive ones.

4. With four bits we can represent 16 binary numbers. But

at le (C-71)
in the signed magnitude and 1’s complement systems
there are eight positive and eight negative numbers
ic w
 In the signed-magnitude representation, the left most 1 including two zeros.
represents (–) sign and the remaining seven bits  In 2’s complement system there are eight positive
bl no

represent the magnitude 7. numbers including one zero and eight negative
 In the signed 1’s complement representation, – 7 is numbers.
obtained by complementing all the bits of + 7 including
 The signed magnitude system is awkward when used in
Pu K

the sign bit.


computer arithmetic because the sign and magnitude
 In the signed 2’s complement, – 7 is obtained by need to be handled separately.
ch

obtaining the 2’s complement of + 7, including the sign


 Therefore the signed complement system is preferred in
bit.
computer arithmetic.
 Table 3.4.3 shows all the possible 4 bit signed binary
 The 1’s complement creates some difficulties and is
Te

numbers in the three representations discussed now.


generally not used for the arithmetic operations.
(C-8115) Table 3.4.3 : Signed binary numbers
 But it is useful as a logical operation because the
change of 0 to 1 and 1 to 0 is equivalent to logical
complement operation.

 The signed 2’s complement system is commonly used in


the computer arithmetic.

Ex. 3.4.4 : Express the decimal numbers +25 and – 25 in

the 8 bit sign magnitude, 1’s complement and


2’s complement forms.
Soln. :

Representation of + 25 :

+ 25 is represented in the sign magnitude, 1’s

complement and 2’s complement forms as follows :


 That means if the signs of numbers A and B are same,


then we add their magnitudes and give the sum their
common sign.
 If the signs are different, then we subtract the smaller
number from the bigger one and give the sign of the
larger number to the result.
 The same procedure applies to the binary numbers in
the signed magnitude representation.

ns e
 For example (+ 16) + (– 25) = – (25 – 16) = – 9 and this
subtraction is performed by subtracting the smaller

io dg
magnitude of 16 from the larger magnitude of 25 and
giving the sign of larger number to the subtraction.
 This subtraction requires the comparison of signs and

at le magnitudes of the two numbers. The same procedure is


applicable to the binary numbers in signed magnitude
representation.
ic w
(C-6346)
Addition in the signed complement form :
Ex. 3.4.5 : Represent +40 and –40 decimal numbers
 The rule for adding numbers in signed complement
bl no

using 2’s complement method.


form does not need comparison or subtraction but it
Dec. 12, 2 Marks)
requires only the addition.
Soln. :
 This procedure is very simple and can be stated as
Pu K

Representation of + 40 : follows :

+ 40 is represented in sign magnitude and 2’s Procedure :


ch

complement form as follows :


 The addition of two signed binary numbers with
(+ 40)10 = 00101000 ...Ans.
negative numbers represented in signed 2’s
Representation of – 40 in 2’s complement form : complement form is obtained from the addition of two
Te

numbers, including the sign bits.


 A carry out of the sign bit position is discarded.
 This procedure can be explained with the help of
following examples :
(C-3223)
Case 1 : Addition of both positive numbers :
3.4.6 Addition of Signed Magnitude Numbers :

 In the previous section we have learnt how to represent


signed numbers in three different forms.

 In this section, we will discuss the signed numbers.

 The 2’s complement form of representation is the most (C-6116)

widely used form in computers and microprocessor –  The answer is positive (sign bit is 0) and equal to
based systems. 00010011 i.e. 19.

 The addition of two numbers in signed-magnitude Case 2 : Smaller number is negative :


system should be carried out as per the rules of  Let A = – 7 and B = + 12 and their addition is to be
ordinary arithmetic. performed.


 Express (– 7) in the signed 2’s complement form as 3.5.1 Subtraction of Unsigned Binary using
follows : 2’s Complement :
+ 7 = 0 0 0 0 0 1 1 1  signed magnitude form
 If the subtraction of two binary numbers A and B is to
– 7 = 1 1 1 1 1 0 0 1  2’s complement of + 7 be performed using the 2’s complement, then the
 Perform the addition following steps are to be followed.

Steps to be followed :
Step 1 : Add (A)2 to the 2’s complement of (B)2.

ns e
Step 2 : If the carry is generated then the result is
positive and in its true form.
(C-6117)

io dg
Step 3 : If the carry is not produced, then the result is
 The final carry is discarded to obtain the correct answer.
negative and in its 2’s complement form.

Note : Carry is always to be discarded in the subtraction

at le (C-6118)

Case 3 : Bigger number negative : Ex. 3.5.1 :


using 2’s complement.

Perform (9)10 – (5)10 using 2’s complement


ic w
 Let A = + 7 and B = – 12 method.

Soln. :
bl no

Step 1 : Obtain 2’s complement of (5)10 :

Decimal Binary 2’s complement


(C-6347) (5)10 (0101)2
Pu K

1011
 Therefore bring the answer in it’s true form by
Step 2 : Add (9)10 to 2’s complement of (5)10 :
subtracting 1 from the answer and then inverting all the
ch

bits to get the final answer as follows :


 The answer is
sign (–) 1 0 0 0 0 1 0 1 =–5
Te

Case 4 : Both numbers negative :


(C-70(a))

 (9)10 – (5)10 = (4)10 …Ans.

Note : The final carry bit acts as a sign bit for the answer.
If it is 1 then the answer is positive, and if it is 0

(C-6119) then the answer is negative.
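The three-step procedure of Section 3.5.1 and the carry rule illustrated by Ex. 3.5.1 can be written out as a short Python sketch. This is not from the text; the function name and the 4-bit width are chosen only to match the worked examples.

```python
# Illustrative sketch (not from the textbook): A - B for unsigned binary strings,
# performed by adding A to the 2's complement of B and inspecting the final carry.

def subtract_via_twos_complement(a, b, n=4):
    """Compute a - b for n-bit unsigned binary strings a and b."""
    A, B = int(a, 2), int(b, 2)
    b_twos = ((~B) + 1) & ((1 << n) - 1)      # Step 1: 2's complement of B in n bits
    total = A + b_twos
    carry = total >> n                        # carry out of the MSB position
    result = total & ((1 << n) - 1)
    if carry:                                 # Step 2: carry produced -> positive, true form
        return '+' + format(result, '0{}b'.format(n))
    # Step 3: no carry -> negative; the answer is in 2's complement form, so re-complement it
    magnitude = ((~result) + 1) & ((1 << n) - 1)
    return '-' + format(magnitude, '0{}b'.format(n))

print(subtract_via_twos_complement('1001', '0101'))   # +0100  (Ex. 3.5.1 : 9 - 5 = +4)
print(subtract_via_twos_complement('0100', '1001'))   # -0101  (Ex. 3.5.2 : 4 - 9 = -5)
```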

Ex. 3.5.2 : Perform (4)10 – (9)10 using the 2’s complement


3.5 2’s Complement Arithmetic :
method.
 The direct binary subtraction becomes complicated as Soln. :
the number size increases.
Convert both the numbers to binary.
 Therefore we can represent the subtraction of A – B in (4)10 = (0100)2 and (9)10 = 10012
the form of addition as : A + (– B).
Step 1 : Obtain 2’s complement of (9)10 :
 We can represent number B (which is to be subtracted)
in its 1’s complement or 2’s complement form and use Decimal Binary 2’s complement

addition instead of subtraction to get the result of A – B. (9)10 (1001)2 (0111)


Step 2 : Add (4)10 to 2’s complement of (9)10 : Soln. :


– (48)10 – (– 23)10 :

We have to perform – (48)10 + (23)10


Step 1 : Convert number (– 48)10 to its 2’s complement :
(C-8292)

(C-6124)

ns e
Step 2 : Add 2’s complements of (– 48)10 and (23)10 :
Step 3 : Convert the answer into its true form :

io dg
(C-4183)

at le
Thus the answer is – (0101)2 i.e. (– 5)10.
(C-6125)

…Ans.
 As final carry is not generated the answer is negative
and in 2’s complement form.
ic w
Ex. 3.5.3 : Perform subtraction using 2’s complement for Step 3 : Convert the answer into its true form :
given numbers (– 48) – (+23) use 8 bit
(C-8293)
representation of number.
bl no

Dec. 13, 3 Marks, May 19, 3 Marks.


Soln. :
 – (48)10 – (– 23)10 = – (11001)2 = – (25)10 ...Ans.
Step 1 : Convert both the numbers to their 2’s
complement : Ex. 3.5.5 : Subtract (27.50)2 from (68.75)2 using 2's
Pu K

complement method.
Decimal Binary 2’s complement
(48)10 (00110000)2 Dec. 14, 4 Marks, May 18, 6 Marks)
(11010000)
ch

Soln. :
(23)10 (00010111)2 (11101001)
Step 1 : Convert decimal number to binary :
Step 2 : Add 2’s complement of (48)10 and (23)10 :
A = (27.50)10 = (00011011.1000)2
Te

B = (68.75)10 = (01000100.1100)2

Step 2 : Obtain 2’s complement of (27.50)10 :

(C-6128)

Step 3 : Convert answer to its true form :


(C-8294)

Step 3 : Add (68.75)10 and 2’s complement of (27.50)10 :

(C-6349)

 – (01000111)2 = (– 71)10
 (– 48) – (+ 23) = – (01000111)2 = (– 71)10
(C-4934)
Ex. 3.5.4 : Perform the following operations using 2’s
complement method.  Final carry indicates that the answer is positive and in its
– (48)10 – (– 23)10 true form.
Dec. 13, 6 Marks, May 19, 3 Marks.  (68.75)10 – (27.50)10 = (00101001.0100)2


 Convert answer into decimal, Step 2 : Add 2’s complement of (7)10 and (11)10 :

(C-4935)

N = 32 + 8 + 1 + 0.25 = 41.25
 (68.75)10 – (27.50)10 = (41.25)10 ...Ans. (C-6131)

Step 3 : Convert answer to its true form :

ns e
Ex. 3.5.6 : Perform 2’s complement arithmetics of :
1. (7)10 – (11)10 2. (–7)10 – (11)10

io dg
3. (–7)10 + (11)10 May 15, 6 Marks

Soln. :
1. (7)10 – (11)10 :

at le
Step 1 : Convert both numbers to their binary form :
(7)10 = (0111)2 and (11)10 = (1011)2
(C-6132)

 (– 7)10 – (11)10 = – (18)10 ...Ans.


ic w
Step 2 : Obtain 2’s complement of (11)10 : 3. (–7)10 + (11)10 :

(C-8295) Step 1 : Convert (7)10 to its 2’s complement :


bl no

Decimal Binary 2’s complement

(7)10 (0111)2 (1001)


Step 3 : Add (7)10 to 2’s complement of (11)10 :
Pu K

Step 2 : Add 2’s complement of (7)10 and (11)10 :


(C-6129)
ch
Te

(C-6133)
Step 4 : Convert the answer into its true form :
Answer = (0100)2 = (4)10
 (– 7)10 + (11)10 = + (4)10 ...Ans.

Ex. 3.5.7 : Do the following : (7F)16 – (5C)16 using 2’s


complement method. May 17, 2 Marks)
(C-6130) Soln. :
Convert answer into decimal – (0100) i.e. (– 4)10
1. (7F)16 – (5C)16 using 2’s complement method :
 (7)10 – (11)10 = – (4)10 ...Ans.
Step 1 : Convert hex to binary :
2. (–7)10 – (11)10 :
(7F)16 = (0111 1111)2 and
Step 1 : Convert both numbers to their 2’s complement : (5C)16 = (0101 1100)2
Decimal Binary 2’s complement Step 2 : Obtain 2’s complement of (5C)16 :
(7)10 (00000111)2 (11111001)

(11)10 (00001011)2 (11110101)


(C-6371)


Step 3 : Add (7F)16 and 2’s complement of (5C)16 : Conclusion :

 The binary numbers in the signed complement system


are added and subtracted by using the rules of the
conventional addition and subtraction procedures for
the unsigned numbers.

 Due to this the computers need only one common


(C-5909) hardware circuit to handle the addition as well as
(7F)16 – (5C)16 = (23)16 …Ans. subtraction.

ns e
 The user must interprete the results of such addition or
3.5.2 Subtraction of Signed Binary

io dg
subtraction differently depending on whether it is
Numbers :
assumed that the numbers are signed or unsigned.
 Subtraction of two signed binary numbers when the
negative numbers are in the 2’s complement form can
3.6 Floating Point Representation :

at le
be performed as follows :

Take the 2’s complement of the subtrahend (including the


 We need to use many bits in order to represent very
large integer (whole) numbers.
ic w
sign bit) and add it to the minuend (including the sign bit). A  The same problem is faced when numbers with both
carry out of the sign bit position should be discarded. integer and fractional parts such as 64.8629 needs to be
bl no

 We can use this procedure because a subtraction is represented.

equivalent to an addition if the sign of the subtrahend is  The floating point number system is the remedy to
changed. this problem. It is based on scientific notation and is
capable of representing very large and very small
 This is demonstrated as follows :
Pu K

numbers without increasing the number of bits.

 This system can be used for representing numbers


ch

which have both integer and fractional parts. It uses


power of ten.
(C-6386)
3.6.1 Parts of Floating Point Numbers :
Te

 Consider the subtraction (– 7) – (– 12) = + 5 in binary


with eight bits.  The floating point numbers are also called as the real
numbers. They consist of two parts and a sign. The two
 The numbers (– 7) and (– 12) are expressed in the 2’s
parts are :
complement form as (11111001) and (1111 0100)
1. Mantissa 2. Exponent
respectively.
 Mantissa is the part of a floating point number which
 The subtraction of these numbers is changed to
represents the magnitude of the number.
addition by taking the 2’s complement of subtrahend
 Exponent is the part of the floating point number that
(– 12) to (+ 12).
represents the number of places that the decimal point
 The 2’s complement of – 12 = (0000 1100). Then the (or binary point) is to be moved.
addition is performed as follows :
Ex. 3.6.1 : Convert the following decimal number into a
floating point number.
N = 541, 369, 841.
Soln. :

(C-6134)
 The given number can be represented in the floating
point number as follows :
 Discarding the final carry we get the answer of + 5.
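The signed 2's complement arithmetic described in Sections 3.4.6 and 3.5.2, in which the carry out of the sign bit is simply discarded, can be sketched in Python as shown below. The sketch is not from the text; the 8-bit width and helper names are assumptions chosen to match the (– 7) – (– 12) = + 5 example.

```python
# Illustrative sketch (not from the textbook): 8-bit signed arithmetic in the
# 2's complement system; the carry out of the sign bit is discarded by masking.

BITS = 8
MASK = (1 << BITS) - 1

def to_twos(value):
    """Encode a small signed integer as an 8-bit 2's complement pattern."""
    return value & MASK

def from_twos(pattern):
    """Decode an 8-bit 2's complement pattern back to a signed integer."""
    return pattern - (1 << BITS) if pattern & (1 << (BITS - 1)) else pattern

def signed_add(x, y):
    """Add two signed values; & MASK discards the carry out of the sign bit."""
    return from_twos((to_twos(x) + to_twos(y)) & MASK)

def signed_sub(x, y):
    """x - y, performed as x + (2's complement of y)."""
    return signed_add(x, -y)

print(format(to_twos(-7), '08b'), format(to_twos(-12), '08b'))   # 11111001 11110100
print(signed_sub(-7, -12))   # 5  : (-7) - (-12) = +5, as in the worked example
print(signed_add(7, -12))    # -5
```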


Ex. 3.6.2 : Represent the following binary number using a


single precision floating point format.
N = (1001010011101)2.

(C-1789) Soln. :

 Thus mantissa is the magnitude and exponent (9) Step 1 : Represent the given number in fractional form :
represents the number of places that the decimal point  The given binary number contains 13 bits. Let us express
is moved. it as 1 plus a fractional binary number by moving the
binary point 12 places to the left and then multiply by
3.6.2 Binary Floating Point Numbers :

ns e
appropriate power of two as follows :
 For binary floating point numbers, the format is defined

io dg
by ANSI / IEEE standard 754 – 1985 in the following
three forms :

1. Single precision floating point binary numbers. …(1)


2.

3.
at le
Double precision floating point binary numbers.

Extended precision floating point binary numbers.


(C-4922)

Step 2 : Decide the values of S, E and F :


ic w
3.6.3 Single Precision Floating Point Binary
 Assuming the number to be the positive one the sign
Numbers :
bit will have a zero value.
bl no

 Fig. 3.6.1 shows the 32 bit standard format for a single  S = 0

precision floating point binary number.  The exponent is actually 12 because we have shifted the
binary point by 12 places. In order to get the biased
 The MSB is assigned to indicate the sign (S). The next
Pu K

exponent we have to add 127 to the actual exponent.


eight bits indicate the exponent (E) and the last 23 bits
 Biased exponent E = Actual exponent + 127
are used for representing the mantissa or the fractional
ch

 E = 12 + 127 = (139)10
part (F).
 Convert the biased exponent to binary
 In the mantissa (F) part, the binary point is assumed to be placed to the left of the 23 bits as shown in Fig. 3.6.1.
(C-6257)


Te

 The eight bit exponent (E) actually represents a biased


exponent. That means this exponent is obtained by
adding 127 to the actual exponent.

 The bias is used to allow very large or very small


numbers without requiring a separate sign bit for the
exponent.
 The biased exponent will allow the actual exponent to


occupy the range of – 126 to + 128.
 The mantissa is the fractional part (F) of the binary
number. Because there is always a 1 to the left of the
binary point as shown in Equation (1), it is not included
in the mantissa.
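The S, E and F fields described above can be inspected directly from Python, using the standard struct module to obtain the raw 32-bit pattern. The sketch below is not from the text; it only unpacks the layout and is shown for cross-checking hand conversions.

```python
# Illustrative sketch (not from the textbook): extract the sign, biased exponent
# and 23-bit fraction of a value stored in the 32-bit single precision layout.

import struct

def single_precision_fields(x):
    (word,) = struct.unpack('>I', struct.pack('>f', x))   # raw 32-bit pattern
    sign     = word >> 31
    exponent = (word >> 23) & 0xFF        # biased exponent E = actual exponent + 127
    fraction = word & 0x7FFFFF            # 23-bit mantissa F (the leading 1 is not stored)
    return sign, exponent, fraction

s, e, f = single_precision_fields(255.5)
print(s, e, format(f, '023b'))   # 0 134 11111111000000000000000
print(e - 127)                   # 7 -> 255.5 = 1.1111111 x 2^7
```

The printed fields match the hand conversion of 255.5 carried out in Ex. 3.6.4.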
 The 12 bits in the mantissa are same as those in
(C-1790) Fig.3.6.1 : Format of single precision Equation (1) but eleven zeros are appended to these 12
floating point binary numbers bits to form the 23 bit mantissa as follows :


– A 32 bit floating point number can replace a binary


integer number having 129 numbers.

(C-4922(a))
– The number 0.0 is represented by all 0s and infinity is
represented by all 1s in the exponent and all zeros in
Step 3 : Write the complete floating point number :
the mantissa.
 The complete single precision floating point number is
as follows :
Ex. 3.6.4 : Represent the following decimal numbers in
(C-8299)
single precision floating point format :

ns e
1. 255.5 2. 110.65

io dg
May 15, 6 Marks
Conversion from floating point to binary : Soln. :
1. (255.5)10 :
 Now let us see the conversion from floating point

 at le
number to binary number.

Use the formulae given below for the same.


Step 1 : Convert decimal to binary :
ic w
S E – 127
Binary number = (– 1) (1 + F) (2 ) …(2)

 The values of S, F and E are to be obtained from the


bl no

given floating point number.

Ex. 3.6.3 : Convert the following floating point number into


equivalent binary number.
Pu K

(C-8300)
ch

Soln. :
Te

From the given floating point number we get,

S = 1, E = 10001011, F = 001010011101
S E – 127
 Binary number = (– 1) (1 + F) (2 ) (C-5125)
1 139 – 127
 Binary number = (– 1) (1 + 0.001010011101) (2 )  (255.5)10 = (11111111.10)2 ...(1)
12
= – (1.001010011101)  2 Step 2 : Decide the values of S, E and F :
= – 1001010011101

(C-1791) (C-5126)

 Assuming the number to be the positive one the sign


Note :
bit will have a zero value.
– Since the biased exponent (E) can take any value
between – 126 to + 128, it is possible to express  S = 0

extremely large and small numbers using the floating  Exponent E = Actual exponent + 127 = 7 + 127
point number systems. = (134)10


 Convert the biased exponent to binary 3.7 IEEE-754 Standard for Representing
Floating Point Numbers :
E = (134)10 = (10000110)2

 The mantissa is the fractional part of binary number.  Representation of floating point number discussed in
section 3.6 has many subtle problems.
To form 23 bit mantissa, fourteen zeros are appended.
 IEEE floating point standards addresses a number of
 F = 11111111000000000000000
such problems.
Step 3 : Write the complete floating point number :
 Zero has definite representation in IEEE format.
(C-8319)

ns e
   has been represented in IEEE format. A + 
indicated that the result of an arithmetic expression is

io dg
too large to be stored.
2. (110.65)10 :
 If an underflow occurs, implying that a result is too
Step 1 : Convert decimal to binary : small to be represented as a normalized number, it is

at le 
encoded in a denormalized scale.
Fig. 3.7.1 gives the representation of floating point
numbers.
ic w
bl no
Pu K

(C-5127)
ch

 (110.65)10 = (1101110.111001)2

Step 2 : Decide the values of S, E and F :


Te

(C-5128)

 Assuming the number to be the positive one the sign (co 2.51)Fig. 3.7.1 : IEEE standard format
bit will have a zero value. (a) Single precision (32 bits) :
 S = 0 Exponent Significant
Value/comments
(E) (N)
Exponent E = Actual exponent + 127 = 6 + 127
255 Not equal to 0 Does not represent a
= (133)10
number
 E = (133)10 = (10000101)2
255 0 –  or +  depending
 The mantissa is fractional part of binary number. on sign bit
 To form 23 bit mantissa, 11 zeros are appended. Normalized 0 < E < 255 Any  (1.N) 2E – 127
scale
 F = 10111011100100000000000
Denormalized 0 Not equal to 0  (0.N) 2– 126
Step 3 : Write the floating point number : (C-8320)
scale
0 0  0 depending on sign
bit


(b) Double precision (64 bits) : Double precision :


(C-8302)
Exponent Significant
Value/comments
(E) (N)

2047 Not equal to 0 Does not represent a


number
Ex. 3.7.2 : Represent – (0.125)10 in both single precision
2047 0 –  or +  depending
as well as double precision
on sign bit
Soln. :
Normalized 0 < E < 2047 Any  (1.N) 2E – 1023

ns e
Step 1 :
scale
(0.125)10 = 2 = (0.010)2

io dg
Denormalized 0 Not equal to 0  (0.N) 2– 1022
scale Step 2 : Normalization :
–3
0 0  0 depending on sign 10  2
bit Step 3 : Single precision :

at le
Fig. 3.7.2 : Values of floating point numbers as
per IEEE format
Biased exp = –3 + 127 = (124)10
= (01110)2
ic w
Ex. 3.7.1 : Represent (20.625)10 in both single precision Double precision
as well as double precision = –3 + 1023 = 1020
bl no

Soln. : = (01111110)2
Step 1 : Convert to binary : Single precision :
(C-8303)
16 20
Pu K

8
16 12 C  (20)10 = (C8)16

= (110100)2
ch

0
Double precision :
0.625  16 = 10 = A
(C-8304)
 (0.625)10 = (0.A)16 = (01010)2
Te

(20.625)10 = (110100.1010)2

Step 2 : Normalization :
7 Note : The biased exponent can be in the range 1-254 for
110100101  2
single precision (exponent range is – 126 to + 127)
Step 3 : Calculate biased exponent :
and 1 – 2046 for double precision (exp range
For single precision
– 1022 to + 1023). The biased exp values 0 and
Biased exp = exp + 127 = (134)10 = (100110)2 255 for single precision && (0 and 2047 for double
For double precision : precision) are used to represent zero,,
Biased exp = exp + 1023 = (1030)10 denormalized form, NaN (Not a Number.)

= (10000110)2
Ex. 3.7.3 : Represent (178.1875)10 in single and double
Step 4 : Find representation : precision floating point format.
Single precision : (C-8301) Soln. :
Convert given decimal number into its equivalent binary

178 = 101110


.1875  2 = 0.3750 .750  2 = 1.500

.3750  2 = 0.750 .500  2 = 1.00


 309.1875 = 10110101.011
.750  2 = 1.50
(a) Single precision format :
.50  2 = 1.00
 In IEEE single precision format, the value of a number
 (178.1875)10 = (101110.011)2
for given exponent (E) and significant (N) is given by
(a) Single precision format : E – 127
(1.N)  2 .
 In IEEE format, the value of a number for given
In order to represent 10110101.011, we must convert it

ns e
E – 127
exponent (E) and significant (N) is given by (1.N) 2 E – 127
into the form (1.N)  2 .
in order to represent (101110.011)2, we must convert it

io dg
8
E – 127 10110101.011 = 1.0110101011  2
into the form (1.N) 2 .
5 8 = E – 127
10111.011 = 1.0111011  2
 E = 127 + 8 = 135
5 = E – 127

at le  E = 127 + 5 = 132
(132)10 = (10010)2
(135)10 = (10011)2
ic w
bl no

(b) Double precision format :

(b) Double precision format : In IEEE double precision format, the value of a number
for given exponent (E) and significant (N) is given by,
In IEEE double precision format, the value of a number
Pu K

E – 1023
for given exponent (E) and significant (N) is given by (1.N)  2
E – 1023
(1.N) 2 . In order to represent 10110101.011, we must convert it
ch

E – 1023
In order to represent (101110.011)2, we must convert it into the form (1.N)  2 .
E – 1023 8
into the form (1.N) 2 . 10110101.011 = 1.0110101011  2
5
10111.011 = 1.0111011  2 5 = E – 1023
Te

5 = E – 1023  E = 1023 + 5 = 1031


 E = 1023 + 5 = 1028 (1031)10 = (1000011)2
(1028)10 = (1000010)2

Ex. 3.7.5 : Express (35.25)10 in the IEEE single


Ex. 3.7.4 : Represent (309.1875)10 in single precision and
precision standard of floating point
double precision format. representation.
Soln. : Soln. :

Convert given decimal number into its equivalent (35.25)10


binary.
Step 1 : (35)10 = (23)16 = (010011)2
(309)10 = (10110101)2 (0.25)10 = (0.01)2
.1875  2 = 0.3750  (35.25)10 = (10011.01)2
5
.3750  2 = 0.750 Step 2 : (35.25)10 = (1.001101)2  2


Step 3 :  In 1850s, the Irish mathematician George Boole


Biased exponent = 127 + 5 = (132)10 = (100 010)2 developed a mathematical system for formulating logic
5 Biased exponent mantissa (C-8305) statements with symbols so that problems can be
written and solved in a manner similar to ordinary
algebra.

Ex. 3.7.6 : Convert (127.125)10 in IEEE-754 single and  This is called Boolean algebra and it is extremely useful
double precision floating point representation.
in the design and analysis of digital systems.
Dec. 16, 10 Marks.

ns e
Soln. :  Boolean algebra is used to write or simplify the logical
expressions.

io dg
Step 1 :

3.8.1 Basic Logical Operations


(Logic Variables) :

at le (C-8306)

 (127)10 = (7F)16 = (11111111)2


 To solve or simplify the logical expressions, used in
digital circuits we need to use “logical operators”. The
three main logic operators are :
ic w
0.125  16 = 5.00
1. AND operator.
 (0.125)10 = (2)16 = (0.01)2
bl no

2. OR operator.
 (0.125)10 = (2)16 = (0.00)
3. NOT operator (Inverter).
Step 2 : (127 – 125)10 = (011 111.010)2
6
3.8.2 NOT Operator (Inversion) :
= (1.111101)2  2
Pu K

 The NOT operation represents a process of logical


Step 3 : Biased exponent :
inversion called as complementing.
ch

1. For single precision  127 + 6 = (133)10 = (100101)


 It is implemented by using an inverter.
2. For double precision  1023 + 6 – (1029)10
 The NOT operation is denoted by a bar (–) over the
= (10000101)
variable which is being inverted.
Te

(a) Single precision : –


 A = NOT A …Logical inversion.
(C-8307)

3.8.3 AND Operator :

 The AND operation produces a high (1) output only if all


the inputs of a logic circuit are high (1).
(b) Double precision :
 The “AND” operator represents logical multiplication. It
(C-8308)
is denoted by a dot between the two variables to be
multiplied. i.e.

A·B …Logical multiplication

 However sometimes this dot is not used and we denote


3.8 Introduction to Boolean Algebra :
the logical multiplication of A and B as AB.
 In the decimal system which is used in our day-to-day
3.8.4 OR Operator :
life, the arithmetic operations such as addition,
subtraction, multiplication, division, square, square root  The OR-operation produces a HIGH (1) output when any
etc. are used to solve arithmetic equations. one or all the inputs of a logic circuit are HIGH(1).


 The “OR” operator represents logical addition. It is (C-196) Table 3.8.1 : Various logic gates

denoted by a (+) sign between the two variables to be


added.

 If A and B are to be added in a logic circuit then it is


represented as,

A + B …Logical addition

 And it is to be read as “A OR B”.

ns e
3.8.5 Logic Gates :

io dg
 Logic gates are the logic circuits which act as the basic

building blocks of any digital system.

It is an electronic circuit having one or more than one


at le
inputs and only one output.
ic w
 The relationship between the input and the output is

based on a “certain logic”.


bl no

 Depending on this logic, the gates are named as NOT

gate, AND gate, OR, NAND, NOR etc.

Truth table :
Pu K

Note : A and B are the inputs whereas Y denotes the


 The operation of a logic gate or a logic circuit can be
output.
best understood with the help of a table called “Truth
ch

Table”. 3.8.7 Postulates : SPPU : Dec. 04, Dec. 08.

 The truth table consists of all the possible combinations University Questions.

of the inputs and the corresponding state of output Q. 1 With suitable examples, explain the following in
Te

Boolean algebra :
produced by that logic gate or logic circuit.
1. Commutative law
Boolean expression :
2. Associative law
 The relation between the inputs and the outputs of a 3. Distributive law (Dec. 04, Dec. 08, 3 Marks)
gate can be expressed mathematically by means of the 2. Associative law :
Boolean Expression.
 For a given set “S” a binary operator * can be said to be
 Let us now discuss the operation of various logic gates. associative if the following equation is satisfied.

3.8.6 Gates, Symbols and Boolean (A * B) * C = A * (B * C) for all (A, B, C)  S …(3.8.1)

Expression :  Note that * can be an operator such that  (product) or


+ (addition) etc.
 In order to understand Boolean algebra , we need to 3. Commutative law :
use the gates.
For a given set “S” a binary operator * is said to be
 So the symbols and Boolean expressions should be commutative, if the following equation is satisfied.
known to us. Table 3.8.1 gives information. A * B = B * A for all A, B  S …(3.8.2)
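A short Python sketch (not part of the text; the gate functions are written out explicitly only for illustration) can generate the truth tables summarised in Table 3.8.1 and brute-force check one of the De-Morgan identities proved later in Section 3.11.3.

```python
# Illustrative sketch (not from the textbook): truth tables for basic two-input
# gates and an exhaustive check of De-Morgan's first theorem.

from itertools import product

GATES = {
    'AND' : lambda a, b: a & b,
    'OR'  : lambda a, b: a | b,
    'NAND': lambda a, b: 1 - (a & b),
    'NOR' : lambda a, b: 1 - (a | b),
    'XOR' : lambda a, b: a ^ b,
}

for name, fn in GATES.items():
    rows = ['{}{}->{}'.format(a, b, fn(a, b)) for a, b in product((0, 1), repeat=2)]
    print('{:5s}'.format(name), ' '.join(rows))

# (A.B)' == A' + B' for every input combination (NAND = bubbled OR)
assert all((1 - (a & b)) == ((1 - a) | (1 - b)) for a, b in product((0, 1), repeat=2))
print('De-Morgan theorem 1 holds for all input combinations')
```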

5. Inverse :
1. Symbols used in Boolean algebra (usually letters) do not
 A set S having an identity element x with respect to a represent numerical values.
binary operator * is said to have an inverse if for every A
2. Arithmetic operations (addition, subtraction, division
which belongs to S, there exists another element B
etc.) are not performed in boolean algebra. Also there
which also belongs to S such that,
are no fractions, negative numbers, square, square root,
A*B = x
logarithms, imaginary numbers etc.

3. Third and most important point is Boolean algebra


allows only two possible values (“0” to “1”) for any

ns e
(C-128) variable.

io dg
 Take the example of set of integers (I) for which 0 is the Rules in boolean algebra :

identity element with respect to + operation that means  There are some rules to be followed while using a
x = 0 and * = +. Hence the inverse of an element A Boolean algebra, these are :
would be – A because only then the condition A * B = x

6.
at le
is satisfied as follows :
Distributive law :
1.

2.
Variables used can have only two values. Binary 1 for
HIGH and Binary 0 for LOW.
Complement of a variable is represented by a overbar
ic w

If * and · are two binary operators working on a set S, (–). Thus complement of variable B is represented as B.
then * is said to be distributive over · if the following – –
Thus if B = 0 then B = 1 and if B = 1 then B = 0.
bl no

condition is satisfied.
3. ORing of the variables is represented by a plus (+) sign
A * (B · C) = (A * B) · (A * C) …(3.8.3) between them. For example ORing of A, B, C is
 The operators and postulates of such a field have the represented as A + B + C.
4. Logical ANDing of the two or more variables is
Pu K

following meanings :
1. We define the binary operator + as addition. represented by writing a dot between them such as
A  B  C  D  E. Sometimes the dot may be omitted like
ch

2. The identity element for + operator is 0.


ABCDE.
3. We define the additive inverse as subtraction.
3.9.1 Boolean Postulates and Laws :
4. The binary operator · is defined as multiplication.
5. The identity element for · operator is 1.  Boolean algebra is an algebraic structure, which is
Te

defined by a set of elements and two binary operators


6. We define the multiplicative inverse of A as 1/A
and it is called as division. namely (+) and (·) if and only if it satisfies the postulates
given below :
7. The only distributive law that is applicable is that of
1(a) Closure with respect to operator (+) :
· over +. That means,
When operator (+) i.e. OR is used over two binary
A · (B + C) = (A · B) + (A · C)
elements in a given set, it produces a unique binary
3.9 Definition of Boolean Algebra : element e.g. 1 + 0 = 1, 0 + 0 = 0.
1(b) Closure with respect to operator (·) :
Definition :
When operator (·) i.e. AND is used over two binary
 Boolean algebra is used to analyze and simplify the
elements in a given set, it produces a unique binary
digital (logic) circuits.
element e.g. 1  0 = 0, 1  1 = 1 etc.
 Since it uses only the binary numbers i.e. 0 and 1 it is
2(a) An identity element with respect to (+), designated
also called as “Binary Algebra”, or “Logical Algebra”. by 0 :
 It was invented by George Boole in the year 1854. This is because for any A, the following expression is
 The rules of Boolean algebra are different from those of always true.

the conventional algebra in the following manner : A+0 = 0+A=A


2(b) An identity element with respect to (·), designated Table 3.10.1(a) : AND operator Table 3.10.1(b) : OR
by 1 : operator

This is because for any A the following expression is Inputs Output Inputs Output
always true. A B A·B A B A+B
0 0 0 0 0 0
A · 1 = 1 · A = A.
0 1 0 0 1 1
3(a) Commutative law with respect to (+) : 1 0 0 1 0 1
A+B = B+A 1 1 1 1 1 1
Table 3.10.1(c) : NOT operator
3(b) Commutative law with respect to (·) :

ns e
Input Output
A·B = B·A

io dg
A A
4(a) (·) is distributed over (+) :
0 1
A · (B + C) = (A · B) + (A · C)
1 0
4(b) (+) is distributed over (·) :
 We are now going to demonstrate that Huntington

5.
at le A + (B · C) = (A + B) · (A + C)

For every binary element A, there exists an element


postulates are valid for the two valued Boolean Algebra.
Let S = {0, 1} and the operators be +, · and NOT.
ic w
– 1. Closure :
called complement of A (it is denoted by A) such that,
– –  The closure is obvious from Tables 3.10.1(a), (b) and (c)
A + A = 1 and A · A = 0
bl no

which shows that the result of any operation (AND, OR,


– –
Note that if A = 0 then A = 1 and if A = 1 then A = 0. NOT) is either 0 or 1, and we know that 0, 1  S.
6. In a set of binary elements, there always exist atleast 2. Identity elements :
two elements A and B such that A  B. If A = 0 then  From Tables 3.10.1(a) and (b), we can write that,
Pu K

B = 1 and vice versa. 0+0 = 0 0+1 = 1+0 =1

3.10 Two Valued Boolean Algebra : 1·1 = 1 1·0 = 0·1 = 0


ch

 From these expressions, two identity elements are


SPPU : Dec. 04, Dec. 08. established which are 0 for + and 1 for · as defined by
University Questions. postulate 2.
Te

Q. 1 With suitable examples, explain the following in 3. The commutative laws :


 The commutative laws are
Boolean algebra :
0+1 = 1+0 and 0·1 =1·0
1. Commutative law 2. Associative law
 These laws are satisfied due to the symmetry of
3. Distributive law (Dec. 04, Dec. 08, 3 Marks)
Tables 3.10.1(a) and (b). This is illustrated by using the
 It is possible to formulate many Boolean algebras on following truth table.
(C-6135)Table 3.10.2 : Verification of the commutative law
the basis of elements we choose and the rules of
operation.

 In this book we are going to deal with only the two


valued Boolean algebra.

 It is called two valued because it has only two elements


0 and 1 with binary operators AND (·), OR (+) and NOT
(inversion).
4. The distributive law :
 The following tables show the truth tables for these
(a) The distributive law of A · (B + C) = (A · B) + (A · C) can
operators. be proved by using the truth tables as shown below.


(C-6136)Table 3.10.3 : Verification of the distributive law 3.11 Basic Theorems and Properties of
Boolean Algebra :

3.11.1 Duality :

 According to the duality theorem the following


conversions are possible in a given Boolean expression :

1. Change each AND operation to an OR operation.

2. Change each OR operation to an AND operation.

ns e
3. Complement any 1 or 0 appearing in the expression.
Duality theorem is sometimes useful in creating new

io dg

(b) The distributive law of + over · can also be proved using expressions from the given Boolean expressions.
the truth tables in the similar way.  For example if the given expression is A + 1 = 1 then
replace the OR (+) operation by AND ( · ) operation and
5. Inverse :


at le
From Table 3.10.1(b) it is easily seen that,

A+A=1 i.e. 0 + 1 = 1 and 1 + 0 = 1
take complement of 1 to write the dual of the given
relation as,
A·0 = 0
ic w
And from Table 3.10.1(a) it can be shown that  The dual of A (B + C) = AB + AC is given by,
– A + (B · C) = (A + B) · (A + C)
AA =0 i.e. 0  1 = 0 and 10=0
bl no

This verifies postulate 5.  It is possible to verify this theorem by constructing truth


tables for both the sides of equations.
6. Postulate 6 says that there exists at least two elements A
and B such that A  B. As Boolean algebra has two 3.11.2 Basic Theorems : SPPU : May 12.
distinct elements 0 and 1 (A and B), this postulate is
Pu K

getting satisfied because 0  1 (A  B), University Questions.


Q. 1 State and prove any two theorems of Boolean
7. Associative law :
ch

algebra. (May 12, 4 Marks)


 This law states that the order in which the logic
1. AND Laws :
operations are performed is not important because for
any order the effect is the same. These laws are related to the AND operation therefore
Te

i.e. (A · B) · C = A · (B · C) they are called as “AND” laws. The AND laws are as follows :

and (A + B) + C = A + (B + C) A0=0:
 The associative law can be verified by referring to the  That means if one input of an AND operation is
following truth table.
permanently kept at the LOW (0) level, then the output
(C-6137)Table 3.10.4 : Verification of the associative law is zero irrespective of the other variable.
A·1=A:

 That means if one input of an AND operation is HIGH


(1) permanently then the output is always equal to the
other variable.
 So if A = 0, then Y = 0 · 1 = 0 i.e. Y = A and if A = 1,
then Y = 1 · 1 = 1 so Y = A.

A·A=A:

That means if both the inputs in an AND operation have


 Similarly we can verify the other statement i.e. the same value either “0” or “1” then the output will also
have the same value as that of the input.
A + (B + C) = (A + B) + C


A·A=0: Summary of OR Laws :

 This law states that the result of an “AND” operation on 1. A+0=A 3. A+A=A
– –
a variable (A) and its complement (A ) is always LOW (0). 2. A+1=1 4. A+A =1

 If A = 0 then A = 1 and Y = 0 · 1 = 0 whereas if A = 1 3. INVERSION Law :

then A = 0 and Y = 1 · 0 = 0.  This law uses the “NOT” operation. The inversion law
states that if a variable is subjected to a double
Summary of AND Laws :
inversion then it will result in the original variable itself

ns e
1. A·0=0 3. A·A=A
i.e.

2. A·1=A 4. A·A =0 ––

io dg
A = A
2. OR Laws :  Inversion law is being illustrated in Fig. 3.11.1 which
 These laws use the OR operation. Therefore they are – ––

shows that if A = 0 then A = 1 and ( A ) = 0 Y = A,
called as OR laws. The OR laws are as follows : – ––

A+0=A:


at le
That means if one variable of an “OR” operation is LOW
whereas if A = 1 then A = 0 and ( A ) = 1  Y = A.
ic w
=
(0) permanently, then the output is always equal to the (B-481) Fig. 3.11.1 : Illustration of inversion law : A = A

other variable. Table 3.11.1 : Collection of Boolean laws


bl no

 B = 0 permanently therefore for A = 0, Y = 0 + 0 = 0 i.e. Sr. No. Name Statement of the law
Y = A and for A = 1, Y = 1 + 0 = 1 i.e. Y = A. 1. Commutative Law A·B=B·A
A+1=1: A+B=B+A
(A · B)  C = A · (B · C)
Pu K

2. Associative Law
 That means if one variable of an “OR” operation is HIGH
(A + B) + C = A + (B + C)
(1) permanently, then the output is HIGH (1)
3. Distributive Law A · (B + C ) = AB + AC
ch

permanently irrespective of the value of the other


4. AND Laws A·0=0
variable.
A·1=A
 Here B = 1 permanently, so if A = 0 then Y = 0 + 1 = 1 A·A=A
and if A = 1 then Y = 1 + 1 = 1. –
Te

A·A=0
 Thus output remains HIGH (1) always, irrespective of the
5. OR Laws A+0=A
value of A. A+1=1
A+A=A: A+A=A

 This law states that if both the variables of an OR A+A =1
6. Inversion Law =
operation have the same value either “0” or “1” then the A =A
output also will be equal to the input i.e. 0 or 1 7. Other Important Laws A + BC = (A + B) (A + C)
respectively. – –
A + AB = A + B
 For A = 0, Y = 0 + 0 = 0 i.e. Y = A and for A = 1, – – – –
A + AB = A + B
Y = 1 + 1 = 1 i.e. Y = A.
A + AB = A
– –
A+A=1: A+AB=A+B

 This law states that the result of an “OR” operation on a Ex. 3.11.1 : Obtain the dual of following Boolean
variable and its complement is always 1 (HIGH). equations :
– –
 If A = 0 then A = 1 and Y = 0 + 1 = 1 whereas if A = 1 1. A + AB = A 2. A + A B = A + B
– –
then A = 0 and Y = 1 + 0 = 1. 3. A + A = 1 4. (A + B) (A + C) = A + BC.


Soln. :

1. A + AB = A :

 Replace (+) by (·) and (·) by (+) to get the dual of the
given equation as :
A · (A + B) = A …Ans.
 Similarly the duals of the other expressions are shown in
–– – –
Table P. 3.11.1. (C-6138) Fig. 3.11.3 : Verification of the theorem AB = A + B

Table P. 3.11.1 ––––– – –


Theorem 2 : A + B = A · B : NOR = Bubbled AND :

ns e
Given expression Dual  This theorem is illustrated in Fig. 3.11.4. The LHS of this

io dg
– –
A+AB=A+B A · (A + B) = A · B theorem represents a NOR gate with inputs A and B
– – whereas the RHS represents an AND gate with inverted
A+A=1 A·A=0
(A + B) (A + C) = A + BC AB + AC = A · (B + C) inputs.
 Such an AND gate is called as “Bubbled AND”. Thus we
3.11.3
at le De-Morgan’s Theorems :
SPPU : Dec. 05, May 08, Dec. 09.
can state De-Morgan's second theorem as a NOR
function is equivalent to a bubbled AND function.
ic w
University Questions NOR  Bubbled AND

Q. 1 What are De Morgan’s theorems ? How will define


bl no

them in terms of your own words ?


(Dec. 05, May 08, Dec. 09, 4 Marks)
(B-484) Fig. 3.11.4 : Illustration of De-Morgan's
 The two theorems suggested by De-Morgan and which second theorem
Pu K

are extremely useful in Boolean algebra are as follows :


 This theorem can be verified by writing a truth table for
–– – –
Theorem 1 : AB = A + B : NAND = Bubbled OR : both the sides of the theorem statement. This truth
ch

table is shown in Fig. 3.11.5, which shows that


 This theorem states that the, complement of a product
LHS = RHS.
is equal to addition of the complements.

 This rule is illustrated in Fig. 3.11.2.


Te

 The Left Hand Side (LHS) of this theorem represents a


NAND gate with inputs A and B whereas the Right Hand
Side (RHS) of the theorem represents an OR gate with
inverted inputs.
(C-6139) Fig.
3.11.5 : Truth table to
 Such an OR gate is called as “Bubbled OR”. Thus we can verify De-Morgan’s theorem
state De-Morgan’s first theorem as a NAND operation is
3.11.4 Operator Precedence :
equivalent to a bubbled OR operation.
NAND  Bubbled OR  The operator precedence i.e. which logical operator
should be considered first for solving Boolean
expressions is as follows :

1. Parenthesis 2. NOT

(B-482)Fig. 3.11.2 : Illustration of De-Morgan’s first theorem 3. AND 4. OR

 This theorem can be verified by writing a truth table as  That means the expression inside the parenthesis
should be evaluated before all the operations.
shown in Fig. 3.11.3.

3
 The next priority should be given to the inversion or  Hence there are 2 = 8 possible input combinations of
complement i.e. NOT operation, followed by the AND inputs from ABC = 000 to ABC = 111.
and OR operations. (C-6141)Table 3.12.1 : Truth table for f (A, B, C) = A + BC
–––––
 For example if y = ( A + B ) is to be evaluated, then we
have to evaluate the expression inside the parenthesis
first and the result should then be complemented.

3.12 Boolean Expression and Boolean

ns e
Function :

io dg
 Boolean algebra deals with binary variables and logic
operations.
2
 A Boolean function is described by an algebraic  If there are two inputs (n = 2), then there are 2 = 4

expression called Boolean expression which consists of combinations of inputs. For four inputs (n = 4), there are

at le
binary variables, the constants 0 and 1 and the logic
operation symbols. 3.12.2
4
2 = 16 combinations.

Examples on Reducing the Boolean


ic w
 Consider the following example : Expression :

Ex. 3.12.1 : Prove the following Boolean expressions :


bl no

(C-6140) …(3.12.1) –
1. A + AB = A 2. A + A B = A + B
3. (A + B) (A + C) = A + BC
 Here the left side of the equation represents the output Soln. :

Y of a logic circuit. So we can state Equation (3.12.1) as, 1. To prove that A + AB = A :


Pu K

– LHS = A + AB = A (1 + B)
Y = A + BC + ADC …(3.12.2)
But (1 + B) = 1  LHS = A · 1But A · 1 = A
ch

 Another example of Boolean function is as follows :


 LHS = A = RHS
F (A, B, C) = A + BC …(3.12.3)
 A + AB = A ...Proved.
or simply Y = A + BC …(3.12.4)

2. To prove that A + AB = A + B :
Te

3.12.1 Truth Table Formation :


– –
LHS = A + AB = A + AB + A B
 It is possible to convert the switching equation into a
… since A = A + AB
truth table. For example consider the following

switching equation.  LHS = A + B (A + A)

f (A, B, C) = A + BC But (A + A) = 1  LHS = A + (B · 1) = A + B

 It shows that there are three inputs A, B, and C and one  LHS = RHS

output f (A, B, C) or simply F.  A+AB = A+B … Proved.

 The output will be high (1) if A = 1 or BC = 1 or both are 3. To prove that (A + B) (A + C) = A + BC :


1 due to the (+) i.e. OR function present between A and LHS = (A + B) (A + C)
BC.  LHS = AA + AC + BA + BC

 The truth table for this equation is shown by ……. According to the distributive law.
n
Table 3.12.1. The number of rows in the truth table is 2 But AA = A,

where n is the number of input variables (n = 3 for the  LHS = A + AC + BA + BC

given equation).  LHS = A (1 + C) + AB + BC


But (1 + C) = 1 –––––––––––––
– – –
 Y = (A + B + A + AB)
 LHS = A + AB + BC = A (1 + B) + BC
– – –
But (1 + B) = 1 But A + A = A
––––––––––
– –
 LHS = A + BC
 Y = (A + B + AB)
Thus (A + B) (A + C) = A + BC … Proved.
Now use De-Morgan’s second theorem which states that,
– – – ––––––––– – – –
Ex. 3.12.2 : Prove that (A + B + AB) (A + B ) (A B) = 0 A+B+C = A·B·C
Soln. : ––
–– ––
–– –––
 Y = A · B · AB

ns e
– – –
LHS = (A + B + AB) (A + B) (A B) ––
–– ––
––
But A = A and B = B

io dg
But A + AB = A … Refer to Ex. 3.12.1
–––
– – –  Y = A · B · AB
LHS = (A + B) (A + B) (A B)
––– – –
– – –– – But AB = (A + B ) …. De-Morgan’s second theorem.
= (AA + AB + AB + BB) (A B) – – – –

at le But A · A = A and
– – – –
A B + AB = B (A + A) = AB
– – –
B · B = B and
 Y = A · B (A + B) = AAB + ABB
– –
But AA = 0 and BB = 0
ic w
 Y = 0·B+A·0=0+0
– – –
 LHS = (A + AB + B) (AB) …since 0 · B = 0 and A · 0 = 0
– – –
bl no

= [A (1 + B) + B] (AB)  Y = 0
– – –
But (1 + B) = 1  LHS = [(A · 1) + B ] (AB) 3.12.3 Complement of a Function :
– – –
 LHS = (A + B) (AB) … since A · 1 = A  The complement of a function F is denoted by F . We
Pu K

– – – –
= AAB+ABB can obtain F by replacing 0’s by 1’s and 1’s by 0’s while
– – calculating the value of F.
ch

But AA = 0 and BB = 0
 It is possible to derive the complement of a function
 LHS = 0 + 0 = 0 … Proved.
algebraically using De Morgan’s theorems. We can
– extend De Morgan’s theorems to three or more
Ex. 3.12.3 : Prove that A + AB + AB = A + B.
Te

variables as well.
Soln. :
– –  The three variable form of De Morgan’s first theorem is
LHS = A + AB + AB = A + B (A + A)
derived as follows :

But A + A = 1 ________
 LHS = A + B = RHS …Proved. LHS = ( A + B + C )
Let B+C = D

Ex. 3.12.4 : Simplify : ABCD + ABCD. _____ __ __
– –  LHS = ( A + D ) = A  D
Soln. : Y = ABCD + ABCD = ACD (B + B)
….As per De Morgan’s theorem

But (B + B) = 1 __ ____ __ __ __
= A  ( B + C ) = A  (B  C )
 Y = ACD …Ans.
________ __ __ __
Ex. 3.12.5 : Simplify the following expression :  (A+B+C) = A  B  C …… Proved.
––––––––––
––– –  On the same lines we can derive the De Morgan’s
Y = ( AB + A + AB) theorems for any number of variables. These theorems
––––––––––––
––– – in the generalized form can be stated as follows :
Soln. : Y = ( AB + A + AB)
––– ____________ __________ __ __ __ __ __
– –
But AB = A + B…De-Morgan’s first theorem 1. ( A + B + C + D ……….. + G ) = A  B  C  D  …….. G


___________ __________ __ __ __ __ __ Q. 4 What is two’s complement number ?


2. (A  B  C  D  ………… G ) = A + B + C + D +……..+ G
Q. 5 Explain the binary multiplication and division with
Ex. 3.12.6 : Find the complement of the following example.
functions.
Q. 6 What is two’s complement number ?
_ _ __ _ _
F1 = A B C + A B C and F2 = A (B C + BC) Q. 7 Find out 2’s complement number of following
Soln. : decimal number :
_ _ _ _
1. F1 = A B C + A B C 1. 17 2. 20 3. 107

ns e
__ ______________
_ _ _ _______
_ _ _______
_ _ Q. 8 Subtract using 2’s complement (11011011)2 from
 F1 = ( A B C + A B C) = ( A B C )  ( A B C ) (0101010)2.

io dg
__ _ _ Q. 9 Subtract using 2’s complement 11001 from 01101.
 F1 =(A+B+C)(A+B+C) …Ans.
Q. 10 Subtract 1101101 – 11010 using 2’s complement.
_ _
2. F2 = A (B C + BC) Q. 11 State AND laws.

at le 
__
F2 =
_
____________
_ _
[ A (B C + BC ) ]
_________
_
Q. 12

Q. 13
State OR laws.

State De Morgan’s theorems.


ic w
= [A + (B C + BC)] Q. 14 What is AND-OR-NOT logic ?
_ _____
_ _ ___
Q. 15 Define truth table.
= A + (B C)  (BC)
bl no

__ _ _ _ Q. 16 Write the Boolean expressions for the following :


 F2 = A + ( B + C ) (B + C) …Ans.
1. OR gate 2. EX-NOR gate

Q. 17 State the rules of Boolean algebra.


Review Questions
Pu K

Q. 18 State and explain the commutative law.

Q. 1 Explain with example the binary addition and Q. 19 State and explain the associative law.
ch

subtraction. Q. 20 State and explain the distributive law.


Q. 2 Give and explain the advantages of One’s Q. 21 Write a note on : Duality theorem.
complement representation.
Q. 22 State and prove De-Morgan's first and second
Te

Q. 3 Explain with example 2’s complement method used theorem.


for subtraction.



Powered by TCPDF (www.tcpdf.org)

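 For the 2's complement exercises in the review questions above (Q. 7 to Q. 10), a small Python sketch of the kind shown below can be used to cross-check hand-worked answers; the helper names used here are purely illustrative.

def twos_complement(value, bits):
    """Return the two's complement bit pattern of `value` using `bits` bits."""
    return format(((1 << bits) - value) & ((1 << bits) - 1), '0{}b'.format(bits))

def subtract_via_twos_complement(a, b, bits):
    """Compute a - b by adding the two's complement of b and dropping the carry."""
    result = (a + ((1 << bits) - b)) & ((1 << bits) - 1)
    return format(result, '0{}b'.format(bits))

# Q. 7 style check : 2's complement of 17 in 8 bits
print(twos_complement(17, 8))                                    # 11101111
# Q. 10 style check : 1101101 - 0011010 (binary)
print(subtract_via_twos_complement(0b1101101, 0b0011010, 7))     # 1010011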

Unit 1

Chapter 4 : Logic Minimization

Syllabus
Representation of logic function : Logic statement, Truth-table, SOP form, POS form; Simplification of logical functions using K-Maps upto 4 variables.

Chapter Contents
4.1  System or Circuit
4.2  Standard Representations for Logical Functions
4.3  Concepts of Minterm and Maxterm
4.4  Methods to Simplify the Boolean Functions
4.5  Karnaugh-Map Simplification (The Map Method)
4.6  Simplification of Boolean Expressions using K-map
4.7  Minimization of SOP Expressions (K Map Simplification)
4.8  Product of Sum (POS) Simplification


4.1 System or Circuit :
 A system or circuit is defined as the physical device, group of devices or algorithm which performs the required operations on the signal applied at its input, which can be either analog or digital.
 Depending on the type of input signal (analog or digital), the systems or circuits can be classified into two types :
1. Analog systems
2. Digital systems.

4.1.1 Digital Systems :
 We may define the digital system as the system which processes or works upon digital signals to produce another digital signal.

(C-215) Fig. 4.1.1 : A digital system

 Thus the input signal to a digital system is digital and its output signal is also digital.

Input Output Relation :
 In Fig. 4.1.1, A, B, C … are the inputs whereas Y or f (A, B, …) represents the output of the system. The system may have a single output or it can have multiple outputs.
 The system output is a function of the inputs. Hence it is denoted by f (A, B, C, …).
 The relation between input and output can be represented in various different ways as given below :
1. Truth table.
2. Switching equations.
3. Logic diagram.

1. Truth table :
 Here the relation between the input variables (A, B, C …) and the output Y is presented in a tabular form as shown in Table 4.1.1.
 The state of the output (0 or 1) is written for all the possible combinations of inputs.

(C-8064) Table 4.1.1

2. Switching equations :
 The relation between inputs and output(s) can be presented in the form of equation(s) called switching equations.
 The input variables are called switching variables. The output (Y) is written on the left hand side of the equation whereas the terms containing the input variables are written on the right hand side of the switching equation as shown below :
   Y = A'BC + AB'C + ABC'                      …(4.1.1)
Or f (A, B, C) = A'BC + AB'C + ABC'            …(4.1.2)
Or f (A, B, C) = (A + B + C) · (A + B' + C')   …(4.1.3)
 The switching equations are also called the Boolean equations or the system equations.
 The system equations or switching equations can be of two different types :
1. Sum of Products (SOP) format.
2. Product of Sums (POS) format.

3. Logic diagram :
 This is a diagrammatic way of expressing the input-output relationship of a digital circuit.

4.1.2 Types of Digital Systems :
 The digital systems in general are classified into two categories, namely :
1. Combinational logic circuits.
2. Sequential logic circuits.

Combinational circuits :
 It is a logic circuit whose output at any instant of time depends only on the levels present at the input terminals.
 The combinational circuits do not use any memory. Hence the previous state of input does not have any effect on the present state of the circuit.
 The sequence in which the inputs are being applied also has no effect on the output of a combinational circuit.

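 The truth table and the switching equation are two views of the same information, so one can always be generated from the other. The minimal Python sketch below (using the expression Y = AC + BC from the next section purely as an example) prints the truth table of a switching equation by evaluating it for every input combination.

from itertools import product

def truth_table(f, names):
    """Print the truth table of a switching function f over the named inputs."""
    print(' '.join(names) + ' | Y')
    for bits in product([0, 1], repeat=len(names)):
        print(' '.join(str(b) for b in bits) + ' | ' + str(f(*bits)))

# Y = AC + BC, written with Python's bitwise operators (| is OR, & is AND)
truth_table(lambda A, B, C: (A & C) | (B & C), ['A', 'B', 'C'])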

 The block diagram of a combinational circuit is shown in 2. Use of Medium Scale Integration (MSI).
Fig. 4.1.2.  In this chapter we will discuss the traditional method.
Steps followed in traditional circuit design method :

1. A truth table is given.


2. Obtain the Boolean expression from the truth table.
3. Simplify the Boolean equation using standard methods.
(C-332) Fig. 4.1.2 : Block diagram of a combinational circuit
4. Realize the simplified equation using gates i.e. build the
 A combinational circuit can have a number of inputs logic circuit.

ns e
and a number of outputs.
Methods to simplify the boolean equations :
 The circuit of Fig. 4.1.2 has “n” inputs and “m” outputs.

io dg
 The methods used for simplifying the Boolean functions
 Between the inputs and outputs, logic gates are
are as follows :
connected so combinational circuit basically consists of
logic gates. 1. Algebraic method.
2. Karnaugh-map (K-map) simplification.

at le
A combinational circuit operates in three steps :
1.
2.
It accepts n-different inputs.
The combination of gates operates on the inputs.
3.
4.
Quine-Mc Cluskey method and
Variable Entered Mapping (VEM) technique.
ic w
3. “m” different outputs are produced as per  The Boolean theorems and De-Morgan’s theorems are
requirement. useful in simplifying the given Boolean expressions. We
bl no

Examples of combinational circuits : can then realize the logical expressions using either the
conventional gates or universal gates.
 Following are the examples of some combinational
circuits :  We should use the minimum number of logic gates for
the realization of a logical expression.
1. Adders, subtractors
Pu K

2. Comparator 4.2 Standard Representations for


3. Code converters Logical Functions :
ch

4. Encoders, decoders  Consider that the logic expression given to us is as


5. Multiplexers, demultiplexers follows :

4.1.3 Combinational Circuit Design : Y = AC + BC


Te

 Then it can be realized using basic gates as shown in


 In this chapter we are going to discuss the design of
Fig. 4.2.1.
combinational circuits.
 In this Boolean expression, Y is the result or output and
 To design a combinational circuit we have been given
A, B, C are called as literals.
the specifications or the requirements of combinational
circuits. Such specifications or requirements can be
specified in the following ways :
1. A list of statements
2. Boolean expressions
3. Truth table. (C-1217) Fig. 4.2.1 : Realization of given logic expression

 From the specified requirements we have to design a  When we realize the Boolean equation by using gates,
combinational circuit using a combination of gates each literal acts as an input as shown in Fig. 4.2.1.
which will fulfill all the requirements.  Any logic expression can be expressed in the following
 We can adopt one of the following two approaches to two standard forms :
the combinational circuit design : 1. Sum-of-Products form (SOP) and
1. Traditional methods. 2. Product-of-Sums form (POS)

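 Since the SOP and POS forms are only two different ways of writing the same function, a function expressed in both forms must produce an identical truth table. The Python sketch below illustrates this for the EX-NOR function Y = AB + A'B' and one of its POS forms (A + B')(A' + B); this particular pair is chosen here only for illustration.

from itertools import product

# SOP form of the EX-NOR function : Y = AB + A'B'
sop = lambda A, B: (A & B) | ((1 - A) & (1 - B))
# A POS form of the same function : Y = (A + B') . (A' + B)
pos = lambda A, B: (A | (1 - B)) & ((1 - A) | B)

# Both forms produce identical outputs for every input combination.
print(all(sop(A, B) == pos(A, B) for A, B in product([0, 1], repeat=2)))   # True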

 These two forms are suitable for reducing the given – – –


Y = (A + B + C)  (A + B)  (A + C)
logic expression to its simplest form. – –
A = (X + Y)  (X + Y + Z)
4.2.1 Sum-of-Products (SOP) Form : – –
Y = (P + R)  (P + Q)  (P + R)
 Refer to logic expression shown in Fig. 4.2.2. It is in the  The literals are ORed together to form the sum terms
form of sum of three terms AB, AC and BC with each and the sum terms are ANDed to get the expression in
individual term is product of two variables. Say AB or the POS form.
AC etc. 4.2.3 Standard or Canonical SOP and POS

ns e
Forms :
Canonical or standard forms :

io dg
 The word standard is used in order to describe a
(C-1217(a)) Fig. 4.2.2 : Sum-of-Products (SOP) form condition of switching equation. The meaning of the
word standard is conforming to a general rule.
 Therefore such expressions are known as expression in


at le
SOP form.
The sums and products in the SOP form are not the


This rule states that each term used in a switching
equation must contain all the available input variables.
The two formats of a switching equation in the standard
ic w
actual additions or multiplications. In fact they are the
form are :
OR and AND functions.
1. Sum of Product (SOP) format.
 A, B and C are the literals or the inputs of the
bl no

2. Product of Sum (POS) format.


combinational circuit.
 When we simplify a Boolean equation, sometimes an
 Some more examples of the SOP expressions are as input variable is eliminated to simplify the equation.
follows :  But standard expressions are not simplified. It is said
Pu K

– that the standard expression is the opposite of


Y = ABC + BCD + ABD,
– – simplification. So it contains redundancies.
A = XY + XY + XY,
ch

– – –  Many times, switching equations written in the SOP or


Y = P Q + PQR + PQR POS form are not standard. That means each term may
 Thus in each product term there can be one or more not contain all the input variables.
than one literals ANDed together. The literals can be in  Consider the SOP and POS expressions shown in
Te

their complemented or uncomplemented form. Fig. 4.2.4.

4.2.2 Product of the Sums Form (POS) :

 Refer to the logic expression shown in Fig. 4.2.3. It is in


the form of product of three terms (A + B), (B + C) and
(A + C), with each term is in the form of sum of two
variables.
 Such expressions are said to be in the Product of Sums
(POS) form.
 Some other examples of POS expressions form are as
follows : (C-1219(a)) Fig. 4.2.4 : Standard SOP and POS forms
 Referring to Fig. 4.2.4 we can say that a logic expression
is said to be in the standard SOP or POS form if each
product term (for SOP); and sum term (for POS) consists
of all the literals in their complemented or
(C-1218) Fig. 4.2.3 : Product of Sums (POS) form uncomplemented form.


Non-standard forms : –
Ex. 4.2.1 : Convert the expression Y = AB + AC + BC into
 The two standard forms discussed earlier are the basic the standard SOP form.
forms that are obtained from the truth table. Soln. :

 However these forms are not used often because these  The given expression has 3-input variables A, B and C.

equations are not in the minimized form because each Step 1 : Find the missing literal for each term : (C-2382)

term contains all the literals.


 There is another way to express the Boolean functions.
It is called as the non-standard form.

ns e
 In this form each term may contain one, two or any Step 2 : AND each term with (Missing literal + Its

io dg
number of literals. It is not necessary that each term complement) : (C-2383)
should contain all the literals.
 There can be two types of non-standard forms :
1. Non-standard SOP form.


at le
2. Non-standard POS form.
The examples of standard and non-standard SOP and
ic w
POS expressions are given in Table 4.2.1.
Step 3 : Simplify the expression to get the standard
(C-8065) Table 4.2.1 : Non-standard and standard SOP and SOP :
POS forms
bl no

– – – –
Y = AB ( C + C ) + AC ( B + B ) + BC ( A + A )
– – –– –
= ABC + ABC + ABC + ABC + ABC + ABC
– – –– –
= (ABC + ABC) + ( ABC + AB C ) + ABC + AB C
Pu K

But A + A = A
– – –
 (ABC + ABC) = ABC and ( ABC + AB C ) = ABC
ch

4.2.4 Conversion of a Logic Expression to


Standard SOP or POS Form :
Te

(C-6152)
 Let us see how to convert the given non-standard SOP
and POS expressions into the corresponding non- Conversion from non-standard POS to standard POS
standard SOP and POS forms. form :

Conversion from non-standard SOP to standard SOP  The conversion of non-standard POS expression into
form : standard POS form can be obtained by following the
steps given below :
 The procedure to be followed for converting a non-
standard SOP expression into a standard SOP Steps to be followed :

expression is as follows : Step 1 : For each term in the given non-standard POS
Steps to be followed : expression, find the missing literal.

Step 1 : For each term in the given non-standard SOP Step 2 : Then OR each such term with the term formed by

expression find the missing literal. ANDing the missing literal in that term with its
complement.
Step 2 : Then AND this term with the term formed by
ORing the missing literal and its complement. Step 3 : Simplify the expression to get the standard POS.

Step 3 : Simplify the expression to get the standard SOP Ex. 4.2.2 : Convert the expression Y = (A + B) (A + C)
expression. –
(B + C) into standard POS form.


Soln. : Maxterm :
Step 1 : Find the missing literal for each term :  Each individual term in the standard POS form is called
as maxterm. This is shown in Fig. 4.3.1.

(C-2384)

Step 2 : OR each term with (Missing literal. Its


complement) :

ns e
(C-1220(a)) Fig. 4.3.1 : Concept of maxterm and minterm

io dg
 Table 4.3.1 gives the minterms and maxterms for a three
variable/literal logic function. Let Y be the output and A,
B, C be the inputs.

(C-8067) Table 4.3.1 : Minterms and maxterms for three

at le –
(C-6153)

Step 3 : Simplify the expression to get standard POS :


– – –
variables
ic w
Y = (A + B + C C ) ( A + C + BB ) ( B + C + AA )
But A + BC = (A + B) (A + C)

bl no

 Y = (A + B + C) (A + B + C ) (A + C + B)
– – – –
(A + C + B ) (B + C + A) (B + C + A )
But AA = A
 (A + B + C) (A + C + B) = (A + B + C)
Pu K

– – –
and (A + B + C ) (B + C + A) = (A + B + C )
(C-6154)
ch

3
 The number of minterms and maxterms is 2 = 8. In
general for “n” number of variables the number of
Te

n
Ex. 4.2.3 : Convert the following expressions into their minterms or maxterms will be 2 .
standard SOP or POS forms :  Each minterm is represented by mi where i = 0, 1, ….,
n
1. Y = AB + AC + BC 2 – 1 and each maxterm is represented by Mi.
– Writing minterm for a particular combination of ABC :
2. Y = (A + B) (B + C)

3. Y = A + BC + ABC  Consider ABC = 011. Assume that A, B, C are input to an


Soln. : Solve it yourself. AND gate.
Ans. :  We want the output of the AND gate to be 1. For that
– – – all its inputs should be 1.
1. Y = ABC + ABC + A B C + A BC
– – – –  So take the complement of that input which is 0 i.e. A in
2. Y = (A + B + C) (A + B + C) (B + C + A) (B + C + A)
this case and write the minterm.
– – –– –
3. Y = ABC + ABC + A B C + AB C + BCA –
 Minterm = A B C corresponding to ABC = 0 1 1
4.3 Concepts of Minterm and Maxterm : Similarly other minterms in Table 4.3.1 can be obtained.

Minterm : Writing maxterm for a particular combination of ABC :

 Each individual term in the standard SOP form is called  Let ABC = 011.
as minterm. This is shown in Fig. 4.3.1.  Assume that ABC are inputs to an OR gate.


 We want the output of the OR gate to be 0. So all the  Hence it is possible to obtain the logic expression in the
inputs to the OR gate should be 0s. standard SOP or POS form if a truth table is given to us.
 So invert the inputs which are 1s. i.e. B and C in this case
4.3.3 To Write Standard SOP Expression for
and write the maxterm.
– –
a Given Truth Table :
 Maxterm = (A + B + C ) corresponding to ABC = 0 1 1
Similarly other maxterms in Table 4.3.1 can be obtained.  The procedure to be followed for writing the standard
SOP expression from a given truth table is as follows :
Ex. 4.3.1 : For the truth table of two variables write the Step 1 : From the given truth table, consider only those
minterms and maxterms.

ns e
combinations of inputs which produce an output
Soln. : Y = 1.

io dg
Refer Table P. 4.3.1 for solution. Step 2 : Write down a product term interms of input

(C-8068) Table P. 4.3.1 : Solution to Ex. 4.3.1 variables for each such combination.
Step 3 : OR all these product terms produced in step 2 to
get the standard SOP.

at le Ex. 4.3.2 : From the truth table P. 4.3.2, obtain the logical
expression in the standard SOP form.
ic w
(C-8114) Table P. 4.3.2 : Given truth table
bl no

4.3.1 Representation of Logical Expressions


using Minterms and Maxterms :

 We can represent the logical expression using the


Pu K

minterms and maxterms as follows : Soln. :


Step 1 : Consider only those combinations of inputs which
ch

correspond to Y = 1.
Steps 2 and 3 :

(C-6155)  For the second and the third entries in


Te

Table P. 4.3.2(a) write the product terms.


where  denotes sum of products.
(C-6157) Table P. 4.3.2(a)

(C-6156)
 Now OR (Add) all the product terms to write the final
4.3.2 Writing SOP and POS Forms for a
expression in standard SOP form as follows :
Given Truth Table :
 Y = Y1 + Y2

 We know that a logic expression can be represented in – –


= A B + AB = m1 + m2
the truth table form. For example, the expression =  m (1, 2)
––
Y = AB + A B which is the Boolean expression for an
Ex. 4.3.3 : For the truth table shown in Table P. 4.3.3
EX-NOR gate can also be represented using a truth
write the logic expression in the standard SOP
table. form.


(C-6412) Table P. 4.3.3 : Given truth table Step 2 : AND (take product of) all the maxterms to get
standard POS form :
 ANDing (taking product of) all the maxterms written in
step 1 we get,
– – – – – –
Y = (A + B + C)  (A + B + C ) (A + B + C ) (A + B + C)
 Y = M0  M3  M5  M6 =  M (0, 3, 5, 6)

4.3.5 Conversion from SOP to POS and Vice


Versa :

ns e
 It is important to note that the SOP and POS forms
written for the same truth table are always logically

io dg
Soln. : equivalent.
Step 1 : Product terms corresponding to combinations of  This point can be proved by solving the following
inputs for which Y = 1. example.

 at le
Step 2 : OR (Add) all the product terms :
ORing all the product terms we get,
–– ––
Ex. 4.3.5 : For the given truth table write the logical
expressions in the standard SOP and POS
forms and prove their equivalence.
ic w
Y = A BC + ABC + A B C …Ans.
(C-6414) Table P. 4.3.5 : Given truth table
 Y = m1 + m4 + m7 =  m (1, 4, 7)
bl no

4.3.4 To Write a Standard POS Expression


for a Given Truth Table :
 Follow the procedure given below to get the expression
in standard POS expression from a given truth table.
Pu K

Step 1 : From the given truth table, consider only those


combinations of inputs which produce a low
ch

output (Y = 0).
Step 2 : Write the maxterms only for such combinations.
Step 3 : AND these maxterms to obtain the expression in
Soln. :
Te

standard POS form.


Step 1 : Minterms and maxterms for the combinations of
Ex. 4.3.4 : Write the logic expression in standard POS
form for the truth table shown in Table P. 4.3.4. inputs producing Y = 1 and 0 respectively.
(C-6413) Table P. 4.3.4 : Given truth table Step 2 : Write the standard SOP and POS expressions :

Standard SOP form :


–– ––
Y = A B C + ABC + ABC

Standard POS form :


– – –
Y = (A + B + C) (A + B + C) ( A + B + C )
– – – –
(A + B + C ) (A + B + C)

Step 3 : To prove the equivalence between SOP and


POS forms :
 Consider the standard POS form.
Soln. :
– – –
Y = (A + B + C) (A + B + C) (A + B + C )
Step 1 : Write maxterms for the combinations of input which
produce Y = 0. – – – –
(A + B + C ) (A +B + C)


– –  m (1, 4, 7) =  M (0, 2, 3, 5, 6) ...(1)


But (A + B + C) (A + B + C) = (A + C + B  B)
= (A + C) Complementary relationship between minterms and
maxterms :
(C-6354)
 From Equation (1) we can conclude that the relationship
 Multiply the terms (1  2) and (3  4) to get,
between the expressions expressed using minterms and
– – – –
Y = [A A + AB + AC + AC + BC + CC ] maxterms is complementary relationship.
– – –– – – – – –  We can exploit this complementary relationship to write
[A A +A B +A C + A B + B B + BC + A C
–– – the expression in terms of maxterms if the expression in

ns e
+ BC + CC ]
terms of minterms is known and vice versa.
– – – – – –
But AA = A, CC = 0, A A = A , B B = 0, CC = 0

io dg
 For example, if a SOP expression for 4-variables is given
– – –
 Y = [A + AB + AC + AC + BC] by,
– –– – – – – –– Y =  m ( 0, 1, 3, 5, 6, 7, 11, 12, 15 )
[A +A B +A C + A B + BC + A C + BC ]
– – –  Then we can get the equivalent POS expression using

at le –
= [A (1 + B) + A (C + C) + BC]


– – –


– – ––
[(1 + B)A + A(C +C) +AB + BC + BC]
the complementary relationship as follows :
Y =  M ( 2, 4, 8, 9, 10, 13, 14 )
ic w
But (1 + B) = 1, (C +C)= 1, (1+B)= 1
4.4 Methods to Simplify the Boolean
– – – – ––
 Y = [A + A + BC] [A + A +A B + B C + BC ] Functions :
bl no

But A + A = A
 The methods used for simplifying the Boolean
– – –
And A + A = A expressions are as follows :
– – – ––
 Y = (A + BC) [A +A B + B C + BC] 1. Algebraic method.
Pu K

– – ––
= (A + BC) [A (1 + B) + B C + BC] 2. Karnaugh-map simplification.
But ( 1 + B ) = 1 3. Quine Mc-Cluskey method and
ch

– – ––
 Y = (A + BC) (A + BC + BC ) 4. Variable Entered Mapping (VEM) technique.
– –– –– –  The Boolean theorems and De-Morgan’s theorems are
= AA + ABC + AB C + A B C + (BC  BC )
– –– useful in simplifying the given Boolean expressions.
+ (BC  BC )
Te

– – – ––  We can then realize the logical expressions using either


But AA = 0, (BC  BC ) = 0 and (BC  BC ) = 0
the standard gates or universal gates.

 We should use the minimum possible number of logic


gates for the realization of a logical expression.
(C-6363)
 This is possible if we can simplify the logical expressions.
Step 4 : Express output in terms of minterms and  In this chapter we will discuss one of the simplification
maxterms : techniques called Karnaugh map or K-map and the
 The output can be expressed in terms of minterms and Quine Mc-Cluskey Method or the Tabular Method.
maxterms as follows :
4.4.1 Algebraic Simplification :
Y = m1 + m4 + m7 …In terms of minterms.
 We have studied the Boolean laws and De-Morgan’s
=  m (1, 4, 7)
theorems.
and Y = M0 M2 M3 M5 M6 …In terms of maxterms.  We can use them to simplify the given Boolean
=  M (0, 2, 3, 5, 6) expression in the following way.

 Due to equivalence between SOP and POS forms we can  The most important thing is to convert the given

write, expression into SOP form.


Standard procedure for algebraic simplification :  Y = (A + B) (AB + ABC)

Step 1 : Bring the given expression into the SOP form by = AAB + AABC + BAB + BABC
using the Boolean laws and De-Morgan’s = AB + ABC + AB + ABC
theorems. = AB + AB + ABC + ABC
Step 2 : Simplify this SOP expression by checking the But AB + AB = AB and ABC + ABC = ABC
product terms for common factors.  Y = AB + ABC = AB ( 1 + C)
Ex. 4.4.1 : Simplify the expression given below : Y = AB ...since 1 + C = 1.

Y = AB + (A + B) (A + B).  This is the simplified expression.

ns e
Soln. : 4.5 Karnaugh-Map Simplification (The

io dg
Step 1 : Bring the given expression in SOP form : Map Method) :
Given expression :
–  This is another simplification technique to reduce the
Y = AB + (A + B) (A + B)
Boolean equation.
– –
= AB + (AA + AB + A B + BB) It overcomes all the disadvantages of the algebraic
at le
Step 2 : Search for common factors and simplify :
– –
Y = AB + AA + AB + A B + BB


simplification technique.
K-map (short form of Karnaugh map) is a graphical
ic w
– – method of simplifying a Boolean equation.
= AB + AB + BB + A A + A B
–  K-map is a graphical chart made up of rectangular
But AB + AB = AB, BB = B and A A = 0
boxes.
bl no

– –
 Y = AB + B + A B = B (A + 1) + A B  The information contained in a truth table or available in
But (A + 1) = 1 the SOP or POS form can be represented on a K-map.
– – –
 Y = B + A B = B (1 + A ) = B …since (1 + A ) = 1  The K-map can be used for systematic simplification of
Pu K

 Y = B ….Ans. Boolean expression.

 This is simplified expression.  K-maps can be written for 2, 3, 4 … upto 6 variables.


Beyond that the K-map technique becomes very
ch

Ex. 4.4.2 : For the logic circuit shown in Fig. P. 4.4.2 write cumbersome.
the Boolean expression and simplify it. K-map is ideally suitable for designing the combinational
logic circuits using either a SOP method or a POS method.
Te

 The K-map is drawn for output Y and the input variables


(A, B, C… etc.) are used for making the entries in the
boxes.

4.5.1 K-map Structure :


 The structure of a 2 input (variable) Karnaugh is shown
in Fig. 4.5.1(a).
(C-1221) Fig. P. 4.4.2 : Given logic circuit  This K-map is drawn for output Y of any two input
Soln. : combinational circuit such as a logic gate with inputs A
and B. i.e. AB = 00, 01, 10 and 11.
Step 1 : Write the Boolean expression :
 The expression for output of the given logic circuit is,
Y = (A + B) (AB) (A + C)
Step 2 : Bring this expression in SOP form :
 Multiply the terms to get the expression into SOP form.
 Y = (A + B) (AAB + ABC)
But AA = A Fig. 4.5.1(Contd...)


 If the labeling is done as per the gray code, only then


the elimination of variables and therefore the
simplification will take place.

 When pairs, quads or octets are identified with the


normal labeling, this elimination will not take place.

 Hence gray code is used in labeling the cells of a K-map.

4.5.2 K-map Boxes and Associated Product

ns e
Terms :

io dg
 The rectangular boxes in a K-map are to be filled with
the values of output Y corresponding to different
combinations of inputs, as shown in Fig. 4.5.2.
 Each row and column of a K-map is labelled with a

at le variable, or a group of variables or their complements.


For example, see the shaded box in Fig. 4.5.2(a).
This box corresponds to the first row which is labelled
ic w

– –
by A and first column labelled by B.
––
bl no

 Hence the product term written inside this box is A B as


(C-217) Fig. 4.5.1 : Structures of 2, 3, 4 variable K-map shown in Fig. 4.5.2(a).
2  Another example is the shaded box shown in
 A 2 variable K-map consists of 2 = 4 rectangular boxes.
––
Fig. 4.5.2(c). The first row is labelled with A B and the
Inside these boxes we have to enter the values of
Pu K

– –
first column with C D . Hence the product term written
output Y for different combinations of inputs A and B.
––– –
in this box is A B C D .
ch

 The K-map comprises a box for every line in the truth


 Similarly the other product terms have been inserted, in
table. For a 2-input combinational circuit there are
the remaining boxes.
4-lines in the truth table, so there will be 4-boxes in the
Te

2 variable K-map.

 For a 3-variable K-map there will be 8 boxes, for


4-variable map there will be 16 boxes and so on.

 The 0’s and 1’s written on top or sides of the boxes


represent the values of the corresponding variables.

The sequence is funny (It is gray code) :

 In truth tables, the values of inputs follow a standard


binary sequence (00, 01, 10, 11).

 But in K-maps the input values are ordered in a different


sequence (00, 01, 11, 10).

 This is as per the gray code and not binary code. So


that as we move along a row or column only one
variable will change its value at a time.
(C-218) Fig. 4.5.2 : K-map boxes and associated product terms

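 The Gray-code labelling can be generated mechanically. The Python sketch below builds the n-bit reflected Gray sequence and confirms that consecutive labels (including the wrap-around from the last cell back to the first) differ in exactly one bit, which is the property the K-map relies on; the function name gray_sequence is an illustrative choice.

def gray_sequence(n):
    """Return the n-bit reflected Gray code sequence as strings (00, 01, 11, 10, ...)."""
    codes = ['']
    for _ in range(n):
        codes = ['0' + c for c in codes] + ['1' + c for c in reversed(codes)]
    return codes

seq = gray_sequence(2)
print(seq)                                   # ['00', '01', '11', '10']
# Consecutive labels (including the wrap-around) differ in exactly one bit.
print(all(sum(a != b for a, b in zip(seq[i], seq[(i + 1) % len(seq)])) == 1
          for i in range(len(seq))))         # True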

4.5.3 Alternative Way to Label the K-map :  Referring to Fig. 4.5.4(a) we conclude that inside the
 We can label the rows and columns of a K-map in a boxes of the K-map we have to enter the values of
different way as shown in Fig. 4.5.3.
output Y corresponding to various combinations of
 Instead of labelling the rows and columns with the
– – A and B.
inputs and their complements (A, A , AB etc.), their
values in terms of 0s and 1s is used for labelling.  Fig. 4.5.4(b) shows the representation of a truth table
 And inside the boxes, instead of writing the actual using a 3-variable K-map and Fig. 4.5.4(c) shows the
product term, the corresponding shorthand minterm representation of a truth table using a 4-variable K-map.
notations m0, m1 ….. are entered.

ns e
io dg
at le
ic w
bl no
Pu K

(C-219) Fig. 4.5.3 : Alternative way to label K-map

4.5.4 Truth Table to K-map :


ch

 Whenever the K-map is to be practically used for


simplification, the entries inside the boxes are to be
written by referring to the given truth table. (C-221) Fig. 4.5.4(b) : Relation between a truth
table and K-map for 3-variables
Te

 In this section we are going to learn how to represent


the given truth table on a Karnaugh map. (C-6693) Truth table
Relation between a truth table and K-map entries :
 Fig. 4.5.4(a) illustrates the mapping of a truth table on
the K-map.

(C-220) Fig. 4.5.4(a) : Relation between a truth table and


K-maps for 2 variables

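 The mapping of a truth table onto a K-map can also be expressed in a few lines of code. The Python sketch below lays out a 4-variable function, given as a set of minterm numbers, on a 4 × 4 grid with the rows labelled AB and the columns labelled CD in Gray order, following the layout of Fig. 4.5.4(c); it is shown only as an illustration, using the minterm list of Ex. 4.7.2.

GRAY = ['00', '01', '11', '10']

def kmap_from_minterms(minterms):
    """Lay out a 4-variable function (given as a set of minterm numbers) as a K-map."""
    grid = []
    for ab in GRAY:                      # rows labelled AB in Gray order
        row = []
        for cd in GRAY:                  # columns labelled CD in Gray order
            index = int(ab + cd, 2)      # minterm number m_i for this cell
            row.append(1 if index in minterms else 0)
        grid.append(row)
    return grid

for row in kmap_from_minterms({0, 1, 2, 5, 13, 15}):     # Ex. 4.7.2's function
    print(row)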

ns e
(C-224) Fig. P. 4.5.2 : Representation of canonical
(C-222) Fig. 4.5.4(c) : Relation between the truth table
SOP on Karnaugh map
and 4-variable K-map

io dg
4.6 Simplification of Boolean
4.5.5 Representation of Standard SOP Form
Expressions using K-map :
on K-map :
 Simplification of Boolean expressions using K-map is
 The logical expression in standard SOP form can be based on combining or grouping the terms in the

at le
represented with the help of a K-map by simply
entering 1’s in the cells (boxes) of the K-map 
adjacent cells (or boxes) of a K-map.
Two cells of a K-map are said to be adjacent if they
ic w
corresponding to each minterm present in the equation. differ in only one variable as shown in Fig. 4.6.1.

 The remaining cells (boxes) are filled with zeros.


bl no

 Ex. 4.5.1 illustrates the concept of transferring a


standard SOP expression on K-map.

Ex. 4.5.1 : Represent the equation given below on


(C-225) Fig. 4.6.1 : Adjacent or non-adjacent cells
Pu K

Karnaugh map.
– –– – – –– –  Note the cells on left or right side or at the bottom and
Y = A BC + A B C + ABC + ABC + ABC. top of cells are adjacent cells. But the cells connected
ch

Soln. : diagonally are not the adjacent ones.

 The given expression is in the standard SOP form. Each  The left most cells are adjacent to their corresponding

term represents a minterm. right most cells as shown in Fig. 4.6.2(a).

We have to enter 1’s in the boxes corresponding to


Te


each minterm, as shown in Fig. P. 4.5.1.

(C-223) Fig. P. 4.5.1 : Representation of standard


SOP on K-map

Ex. 4.5.2 : Plot the following Boolean expression on


K-map.
–– – – –– – – – –
Y=ABCD+ABCD+ABCD+ABCD
(a) (b)
Soln. : Refer Fig. P. 4.5.2. (C-226) Fig. 4.6.2 : Illustration of adjacent cells


 And the top cells are adjacent to their corresponding Simplification :


bottom cells. – – –
Y = A B C + A BC
4.6.1 How does Simplification Take Place ? – –
= A B (C + C )
 Once we transfer the logic function or truth table on a – –
= A B ….since C + C = 1
Karnaugh map, we have to use the grouping technique
for simplifying the logic function.  Thus C is eliminated.

 Grouping means combining the terms in the adjacent Conclusion :


cells.
 Due to formation of pair the variable C is eliminated. So

ns e
 The grouping of either adjacent 1’s or adjacent 0’s henceforth just by looking at the pair we should be able

io dg
results in the simplification of Boolean expression in the to identify the variable that will be eliminated.
SOP or POS forms respectively.
 The other types of pairs and corresponding
 If we group the adjacent 1’s then the result of simplifications are as shown in Fig. 4.6.4(a) and
simplification is in SOP (Sum Of Products) form. Fig. 4.6.4(b).

at le
And if the adjacent 0’s are grouped, then the result of
simplification is in the POS (Product Of Sums) form.
Given K-map :
ic w
4.6.2 Way of Grouping (Pairs, Quads and
Octets) :
bl no

 While grouping, we should group most number of 1’s


(or 0’s).
 The grouping follows the binary rule i.e. we can group
1, 2, 4, 8, 16, 32 ….. number of 1’s or 0’s. We cannot
Pu K

(C-228) Fig. 4.6.3(b)


group 3, 5, 7, …. number of 1’s or 0’s.
1. Pairs : A group of two adjacent 1’s or 0’s is called as a Simplification :
ch

pair. –– –
Y = A BC + A B C
2. Quad : A group of four adjacent 1’s or 0’s is called as a
– –
quad. = BC (A + A)
3. Octet : A group of eight adjacent 1’s or 0’s is called as – –
Y = BC ….since (A + A) = 1
Te

octet.  Thus A is eliminated.


4.6.3 Grouping Two Adjacent One’s (Pairs) : Given K-map :

If we group two adjacent 1’s on a K-map, to form a pair,


then the resulting term will have one less literal (variable)
than the original term. That means by pairing two
adjacent 1’s we can eliminate one variable.

 The grouping of two adjacent 1’s and elimination of one


variable due to pairing is illustrated in Figs. 4.6.3(a)
to (d).

(C-228) Fig. 4.6.3(c)


Simplification :
– –– – –
Y = A BC + ABC = AC (B + B)
– –
Y = AC …since (B + B) = 1

(C-228) Fig. 4.6.3(a)  Thus B is eliminated.


Given K-map : Note : In this K-map three pairs were possible to be


formed. However only two pairs are sufficient to
include all the 1’s present in the K-map. Then the
third pair is not required.

4.6.4 Grouping Four Adjacent Ones (Quad) :


 If we group four 1’s from the adjacent cells of a K-map,
then the group is called as a Quad.
 In such a quad two variables associated with the

ns e
minterms are same and the other two are not the same.

io dg
 After forming a Quad, the simplification takes place in
such a way that the two variables which are not same
(C-228) Fig. 4.6.3(d)
will be eliminated.
Simplification :
 Thus a quad eliminates two variables.
– –– –– –– –

at le Y = A BC D + ABC D = BC D (A + A)
––
 Y = BC D
Thus A is eliminated.
 Fig. 4.6.5(a) to (f) shows various types of Quads and the
corresponding mathematical simplification.
Note that overlapping is possible in Quads.
ic w
 
Example of overlap : Given K-map :
Given K-map :
bl no
Pu K
ch

(C-229) Fig. 4.6.4(a) (C-230) Fig. 4.6.5(a)


Simplification :
Te

 Final expression –– – –– – – –
– – Y = ABC D + ABC D + ABC D + A BC D
Y = A+B
– – – – –
= AB [C D + C D + C D + C D]
 Note that in order to cover all the 1’s, we have to
– – – –
overlap two pairs as shown. = AB [C (D + D) + C (D + D )]
– – –
Given K-map :  Y = AB [C + C] = AB
Thus C and D are eliminated.

Given K-map :

(C-229) Fig. 4.6.4(b) : Different types of pairs and the


corresponding simplifications

 Add the two product terms to get, Y = A B + AC (C-231) Fig. 4.6.5(b)


Simplification :
Leftmost and rightmost 1’s forming a Quad :
– –– – – – ––
Y = A BC D + A BC D + ABC D + A BC D
– –– – –
= C D [A B +A B + A B + AB]
– – – –
= C D [A (B + B) + A (B + B)]
– – –
= C D [A + A] = C D

Thus A and B are eliminated.

ns e
Four adjacent ones forming a square : (C-231(c)) Fig. 4.6.5(e)

io dg
– – – – – – – –
Y = A BC D + A BC D + A BCD + A B C D
– – – – –
= BC D (A + A) + B C D (A + A)
– – –
= BCD+BCD

at le – – –
Y = B D (C + C) = BD
Thus A and C are eliminated.
ic w
1’s corresponding to corners forming a Quad :
bl no

(C-231(a)) Fig. 4.6.5(c)


– – – – – – – –
Y = A BC D + ABC D + A BC D + A BC D
– – – – –
= BC D (A + A) + B C D (A + A)
Pu K

– – – – –
= BC D + BC D = BC (D + D)
ch


 Y = BC Thus A and D are eliminated.

Top and bottom 1’s forming a Quad : (C-232) Fig. 4.6.5(f)


– –– – – – – –– – – –
Y = A BC D + A BCD + ABC D + ABCD
Te

–– – – – – –
= BC D (A + A) + BCD (A + A )
–– – – – –– –
= BC D + BCD = BD (C + C)
––
 Y = BD

Thus A and C are eliminated.


Overlapping of Quads and pairs :

(C-231(b)) Fig. 4.6.5(d)

– –– –– –– –
Y = A BCD + ABCD + ABCD + ABCD
–– – – –
= A BD (C + C) + ABD (C + C)
–– –
= A BD + A BD
– – –
 Y = BD (A + A) = BD

Thus A and C are eliminated. (C-233) Fig. 4.6.5(g)


4.6.5 Grouping Eight Adjacent Ones (Octet) :


 It is possible to form a group of eight adjacent ones.
Such a group is called as octet.
 When an octet is formed three variables will change and
only one variable will remain same in all the minterms.
 The three variables that change will be eliminated and
the variable which does not change will appear as
output.
(C-234(b)) (c)
Thus octet eliminates three variables.

ns e

 Fig. 4.6.6(a) to (d) shows various types of octets and the

io dg
corresponding output.
Given K-map :

at le
ic w
(d)
bl no

(C-235)

Fig. 4.6.6

(C-234) Fig. 4.6.6(a)


4.6.6 Summary of Rules Followed for K-Map
Simplification :
Pu K

Simplification :
Summary :
– –– – – –– –– –– –
Y = A BC D + A BC D + A BC D + A BC D
1. No zeros allowed.
ch

– – – – – – – –
+ A BC D + A BC D + A B C D + A B C D 2. No diagonals.

– –– – –– – 3. Only power of 2 number of cells in each group.


 Y = A BC (D + D) + A BC (D + D )
4. Groups should be as large as possible.
Te

– – – – –
+ A B C (D + D ) + A B C (D + D ) 5. Every one must be in at least one group.

–– – – – 6. Overlapping allowed.
 Y = A B (C + C) + A B (C + C)
7. Wrap around allowed.
– – –
 Y = A (B + B) = A 8. Fewest number of groups possible.


 The only variable that remains same is A . So it appears 4.7 Minimization of SOP Expressions
as output. (K Map Simplification) :

 The K-map can be used to simplify the logical


expression to a level beyond which it cannot be further
simplified.
 After such a simplification, it will require minimum
number of gates with minimum number of inputs to the
gates.
 Such an expression is called as a minimized expression.
 For minimizing the logical expression, follow the
procedure given below.
(C-234(a)) (b)


Minimization procedure :

Step 1 : Prepare the K-map and place 1’s according to the


given truth table or logical expression. Fill the
remaining cells by 0’s.
Step 2 : Locate the isolated 1’s i.e. the 1’s which cannot be
combined with any other 1. Encircle such 1’s.
Step 3 : Identify the 1’s which can be combined to form a
pair in only one way and encircle them.
Step 4 : Identify the 1’s which can form a quad in only one

ns e
way and encircle them.
Step 5 : Identify the 1’s which can form an octet in only (C-251) Fig. P. 4.7.2

io dg
one way and encircle them.  Minimized expression is,
Step 6 : After identifying the pairs, quads and octets, check (C-6158)
if any 1 is yet to be encircled. If yes then encircle
them with each other or with the already encircled


at le 1’s (by means of overlapping).
Note that the number of groups should be minimum.
Ex. 4.7.3 : For the logical expression given below draw
the K-map and obtain the simplified logical
expression. Y =  m (1, 5, 7, 9, 11, 13, 15).
ic w
 Also note that any 1 can be included any number of Realize the minimized expression using the
times without affecting the expression. basic gates.
bl no

Soln. :
 Let us solve some examples on this to make the concept
 The given expression is,
clear.
Y = m1 + m5 + m7 + m9 + m11 + m13 + m15
Ex. 4.7.1 : A logical expression in the standard SOP form  It can be expressed on K-map as shown in
Fig. P. 4.7.3(a).
Pu K

is as follows :
–– – – – – –
Y = A B C + A B C + ABC + ABC
ch

Minimize it using the K-map technique.

Soln. : Y = m (0, 2, 3, 5)
 The required K-map is as shown in Fig. P. 4.7.1.
Te

(C-250) Fig. P. 4.7.1

Ex. 4.7.2 : The logical expression representing a logic (C-252) Fig. P. 4.7.3(a)
circuit is Y = m (0, 1, 2, 5, 13, 15). Draw the
Realization :
K-map and find the minimized logical
 Equation (1) can be realized as shown in Fig. P. 4.7.3(b).
expression.

Soln. :
 From the given expression, it is clear that the number of
variables is 4.
Y = m0 + m1 + m2 + m5 + m13 + m15
(C-253) Fig. P. 4.7.3(b) : Realization with minimum
 The required K-map is as shown in Fig. P. 4.7.2. number of gates


Ex. 4.7.4 : Minimize the following Boolean expression Step 2 : Realization using gates :
using K-map and realize it using the basic
gates. Y = m (1, 3, 5, 9, 11, 13)
Soln. :
 The given expression can be expressed in terms of the
minterms as,
Y = m1 + m3 + m5 + m9 + m11 + m13
 The corresponding K-map is shown in Fig. P. 4.7.4(a). (C-256) Fig. P. 4.7.5(b) : Realization with minimum number of
gates

ns e
Ex. 4.7.6 : Solve the following using minimization
technique :

io dg
z = f (A, B, C, D) =  (0, 2, 4, 7, 11, 13, 15)
Dec. 09, 10 Marks.
Soln. :

at le Simplification using K-map :


ic w
Fig. P. 4.7.4(a)

Realization :
bl no

 Fig. P. 4.7.4(b) shows realization using gates..


Pu K

(C-1692) Fig. P. 4.7.6


(C-254) Fig. P. 4.7.4(b)
–– – – – –
ch

Ex. 4.7.5 : Minimize the following expression using z = A B D + A C D + ABD + BCD + ACD
K-map and realize using the basic gates :
Ex. 4.7.7 : Solve the following equations using
Y = m (1, 2, 9, 10, 11, 14, 15) corresponding minimization techniques, also
Te

May 19, 6 Marks draw MSI design for the minimized output
Soln. : equation :
Step 1 : K-map simplification : Z = f (A, B, C, D) =  (0, 3, 4, 9, 10, 12, 14).
May 10, 12 Marks.
Soln. : Solve it yourself.
Ans. :
– – – – – – – – ––
Z = f(A, B, C, D) = A C D + B C D + A C D + A B C D + A B C D

Ex. 4.7.8 : Using K-map realize the following expression


using minimum number of gates.
Y =  m (1, 3, 4, 5, 7, 9, 11, 13, 15)

(C-256)Fig. P. 4.7.5(a) : K-map simplification Soln. : Solve it yourself.


Ans. : Y = A’BC’+D
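 A minimized expression such as the one quoted above can always be cross-checked by brute force : the reduced expression must produce a 1 exactly for the minterms in the original Σm list. The Python sketch below performs this check for Ex. 4.7.8; it is only a verification aid, not part of the K-map procedure itself.

from itertools import product

minterms = {1, 3, 4, 5, 7, 9, 11, 13, 15}          # Sigma-m list of Ex. 4.7.8

def minimized(A, B, C, D):                          # Y = A'BC' + D
    return ((1 - A) & B & (1 - C)) | D

ok = all(minimized(A, B, C, D) == (1 if (A * 8 + B * 4 + C * 2 + D) in minterms else 0)
         for A, B, C, D in product([0, 1], repeat=4))
print(ok)                                           # True if the reduction is correct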

Ex. 4.7.9 : Minimize the following expression and realize


using basic gates.

Y = m (0, 2, 5, 6, 7, 8, 10, 13, 15)


Soln. : Solve it yourself. Conclusion :


– – –
Ans. : Y= B D + BD + ABC  If we encircle the quad, then the expression for output
consists of an additional term. So quad should not be
4.7.1 Elimination of a Redundant Group :
encircled. It is called as the redundant group.
 If all the 1’s in a group are already being used in some  Hence the correct K-map is shown in Fig. P. 4.7.10(b).
other groups, then that group is called as a redundant
group.
 A redundant group has to be eliminated, because it
increases the number of gates required.

ns e
 Effect of a redundant group and its elimination is
illustrated in the Ex. 4.7.8.

io dg
Ex. 4.7.10 : Minimize the following expression using the
K-map.
Y =  m (1, 5, 6, 7, 11, 12, 13, 15)
Soln. :
1. at le
The given expression can be expressed in the standard
SOP form as follows :
ic w
Y = m1 + m5 + m6 + m7 + m11 + m12 + m13 + m15 (C-259) Fig. P. 4.7.10(b)

where m1, m5 ….. m15 are the minterms. – – – –


Y = ACD+ABC+ABC+ACD
bl no

2. The required K-map is shown in Fig. P. 4.7.10(a).


– – – –
3. There are no isolated 1’s. So encircle the separate pairs = ACD+ACD+ABC+ABC
as shown in Fig. P. 4.7.10(a).
4. We would visualize a quad shown by dotted lines in (C-6160)
Pu K

Fig. P. 4.7.10(a).
––––––
 Y = D ( A  C ) + B (A  C)
ch

4.7.2 Minimization of Logic Functions not


Specified in Standard SOP Form :
 Till now we have seen how to use the K-map to
Te

minimize an expression which is given in standard SOP


form.
 Now let us see the use of K-map for minimization when
the given expression is not in the standard form. The
procedure to be followed under such conditions is as
follows :
 The given expression,
(C-258) Fig. P. 4.7.10(a)
–– – – ––– ––– – –
 The question is should we encircle this quad ? Y = ABCD+ABCD+ABC+ABD+AC+B
Procedure :
 The answer will be obtained by writing the expression of
– – – –
Y without and with this quad as follows : Step 1 : Enter 1 for minterms (i.e. A BCD and ABCD)
present in the given expression.
Step 2 : Enter a pair of 1’s for each term with one less
– – – – – –
variable than total (i.e. for A B C, A B D because
these terms have one less variable)
Step 3 : Enter four adjacent 1’s for each term with two less

variables than total (i.e. AC).
(C-6159)


Step 4 : Repeat for the other terms in a similar way. –


Step 3 : Now consider the fifth term AC. Enter 1 in four
Step 5 : Once the K-map is obtained, the minimization
cells corresponding to A = 1 and C = 0, as shown
procedure is same as the one discussed earlier.
in Fig. P. 4.7.11(c).
 This procedure will be clear as you solve the example
given below.

Ex. 4.7.11 : Minimize the logic equation given below :

ns e
(C-6357)

Soln. :

io dg
Step 1 : Enter 1 in the cell with A = 0, B = 0, C = 1, D = 1
– –
for the first term A BCD and in the cell with A = 0,
– –
B = 1, C = 1, D = 0 for the second term ABCD as

at le
shown in Fig. P. 4.7.11(a).

Step 4 :
(C-262) Fig. P. 4.7.11(c)

Finally corresponding to the sixth term, enter eight


ic w
1’s corresponding to B = 0 as shown in
Fig. P. 4.7.11(d).
bl no
Pu K
ch

(C-262) Fig. P. 4.7.11(d)


Te

 The final K-map is as shown in Fig. P. 4.7.11(e). Use the


normal simplification techniques discussed earlier to
simplify this K-map and get the minimized expression.

(C-261) Fig. P. 4.7.11

– – –
Step 2 : Consider the third term A B C. Enter 1 in two cells

which correspond to A = 0, B = 0 and C = 0, as

shown in Fig. P. 4.7.11(b). Now consider the


– – –
fourth term A B D and enter 1 in two cells which (C-263) Fig. P. 4.7.11(e) : Final K-map
correspond to A = 0, B = 0 and D = 0 as shown in  The simplified equation is,
Fig. P. 4.7.11(b).  – – – –
Y = B + AC + ACD

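 The entry rules used above (a pair of cells for a term with one missing literal, four cells for a term with two missing literals, and so on) can be stated compactly in code. The Python sketch below enumerates the minterm cells covered by a product term in which only some variables are fixed; representing a term as a dictionary of fixed values is an assumption made purely for illustration.

from itertools import product

VARS = ['A', 'B', 'C', 'D']

def cells_covered(term):
    """Return the minterm numbers covered by a product term.

    `term` fixes some variables, e.g. {'A': 0, 'B': 0, 'C': 0} stands for A'B'C';
    every unspecified variable is free, so k missing literals give 2**k cells."""
    cells = []
    for bits in product([0, 1], repeat=len(VARS)):
        values = dict(zip(VARS, bits))
        if all(values[v] == bit for v, bit in term.items()):
            cells.append(int(''.join(str(values[v]) for v in VARS), 2))
    return sorted(cells)

print(cells_covered({'A': 0, 'B': 0, 'C': 0}))   # A'B'C'  -> two cells  [0, 1]
print(cells_covered({'A': 1, 'C': 0}))           # AC'     -> four cells [8, 9, 12, 13]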

4.7.3 Don’t Care Conditions : ––


 Simplified expression Y = CD + A B
 For SOP form, we enter 1’s corresponding to the
Note : Every don’t care mark need not be considered
combinations of input variables which produce a high
output. And we enter 0’s in the remaining cells of the while grouping.
K-map.
Ex. 4.7.13 : Solve the following equation using K map
 For the POS form we enter 0’s corresponding to the
minimization technique. Draw the MSI design
combinations of inputs which produce a low output and
enter 1’s in the remaining cells of the K-map. for the minimized output :

 But it is not always true that the cells not containing 1’s Z = f(A, B, C, D) = m (1, 3, 6, 7, 12, 13)

ns e
(in SOP) will contain 0’s, because some combinations of + d (0, 2, 8, 9) (May 12, 6 Marks)
input variable do not occur.

io dg
Soln. :
 Take the example of a 4 bit BCD counter. It will have
valid outputs from 0000 to 1001 only. Step 1 : Reduction using K-map :
 Also for some functions the outputs corresponding to
certain combinations of input variables do not matter.

at le
That means for such input combinations it does not
matter whether the value of output is 0 or 1.
In such situation we are free to assume a 0 or 1 as
ic w

output for each of such input combinations.
 These conditions are known as the “Don’t care
bl no

conditions” and in the K-map it is represented as 


(cross) mark in the corresponding cell.
Important :
 The don’t care condition () may be assumed to be 0 or
Pu K

1 as per the need for simplification.

Ex. 4.7.12 : Simplify the expression given below using


ch

K-map. The don’t care conditions are


indicated by d ( ). (C-3224) Fig. P. 4.7.13(a)

Y = m (1, 3, 7, 11, 15) + d (0, 2, 5). Step 2 : MSI design :


Dec. 12, 5 Marks
Te

 Implementation using logic gates is as shown in


Soln. : The given equation is, Fig. P. 4.7.13(b).

(C-6161)

 The required K-map is shown in Fig. P. 4.7.12.

(C-3225)Fig. P. 4.7.13(b)

Ex. 4.7.14 : Minimize the following function using K-map

and implement using basic logic gates.


f (A, B, C, D) = m (1, 3, 5, 8, 9, 11, 15)
(C-264) Fig. P. 4.7.12 + d (2, 13) Dec. 17, 6 Marks


Soln. :  In the K-map corresponding to POS form equation we


Step 1 : Minimization using k-map : (C-6329)
have to enter a 0 corresponding to each maxterm.

 The K-maps corresponding to the POS form are shown


in Fig. 4.8.1.

ns e
io dg
–– – ––
 f (A, B, C, D) = A B C + C D + AD + AB C (C-286) Fig. 4.8.1(a) : Two variable K-map for POS form
Step 2 : Implementation using basic gates :

at le
ic w
bl no
Pu K

(C-6330) Fig. P. 4.7.14(a) : Implementation using basic gates (C-286) (b) Three variable K-map for POS form

Ex. 4.7.15 : Solve the following reduction using K-map,


ch

also draw MSI circuit for the output :

1. Z = f (A, B, C, D)
=  (1, 2, 7, 8, 10, 12, 15) + d (0, 5, 6)
Te

2. Z = f (A, B, C, D)
=  (1, 3, 4, 6, 8, 11, 15) + d (0, 5, 7)

May 11, 12 Marks.

Soln. : Solve it yourself. (c) Four variable K-map for POS form

Ex. 4.7.16 : Solve the following equation using (C-286) Fig. 4.8.1
corresponding minimization technique. Draw
 The entries in the K-map can be shown in terms of the
the diagram for the output :
maxterms M0, M1 …. etc. as shown in Fig. 4.8.2.
Z = f(A, B, C, D) = m (2, 4, 6, 11, 12, 14) +
d (3, 10). Dec. 11, 6 Marks.
Soln. : Solve it yourself.
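 When don't care conditions are involved, a reduced expression is acceptable if it covers every specified 1 and produces extra 1s only in the don't care cells. The Python sketch below applies this check to the result of Ex. 4.7.12 (Y = CD + A'B' for Σm (1, 3, 7, 11, 15) + d (0, 2, 5)); the variable names are illustrative only.

from itertools import product

required = {1, 3, 7, 11, 15}          # Sigma-m terms of Ex. 4.7.12
dont_care = {0, 2, 5}                 # d( ) terms

def cover(A, B, C, D):                # proposed reduction : Y = CD + A'B'
    return (C & D) | ((1 - A) & (1 - B))

ok = True
for A, B, C, D in product([0, 1], repeat=4):
    m = A * 8 + B * 4 + C * 2 + D
    y = cover(A, B, C, D)
    if m in required and y != 1:                   # every specified 1 must be covered
        ok = False
    if m not in required | dont_care and y != 0:   # no stray 1s outside the X cells
        ok = False
print(ok)                             # True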

4.8 Product of Sum (POS)


Simplification :

4.8.1 K-map Representation of POS Form :


(a)
 The POS form equations consist of maxterms. Fig. 4.8.2(Contd...)


(b)

ns e
io dg
(C-288) Fig. P. 4.8.1 : Representation of standard

POS on K-map

at le (c) Ex. 4.8.2 : Represent the following standard POS


ic w
(C-287) Fig. 4.8.2 : K-map in terms of maxterms equation on the Karnaugh map.
– – –
4.8.2 Representation of Standard POS form Y = (A + B + C + D ) (A + B + C + D)
– – – – –
bl no

on K-map : (A + B + C + D) (A + B + C + D)

 Logical expressions in the standard POS form can be Soln. :

represented on K-map by entering 0’s in the cells of  The given expression contains four maxterms as
Pu K

K-map corresponding to each maxterm present in the follows :



given equation. (A + B + C + D ) = M1
ch

 The remaining cells are filled with 1’s. – – –


(A + B + C + D ) = M7
 This technique is illustrated in Ex. 4.8.1. – –
(A + B + C + D ) = M3
Ex. 4.8.1 : Represent the following standard POS – –
Te

(A + B + C + D) = M12
expression on Karnaugh map.
– – –
Y = (A + B + C) (A + B + C) (A + B + C).  Enter a 0 corresponding to each maxterm as shown in
Soln. : Fig. P. 4.8.2.
 Each term in the given logical equation is a maxterm.

 Enter a 0 corresponding to each maxterm as shown in

Fig. P. 4.8.1.

 The given expression has three maxterms as follows,

(A + B + C) = M0 ,

(A + B + C) = M2 ,
– –
(A + B + C ) = M6

 Hence we have to write the structure of 3 variable


K-map as usual and enter 0’s at M0 , M2 and M6 as

shown in Fig. P. 4.8.1.


Fig. P. 4.8.2(Contd...)



Group 1 = A + B
Conclusion :

 When a pair of 0s is formed, the variable which changes


will get eliminated e.g. C changes for the pair of 0’s in
group 1. Hence it is eliminated.

ns e
io dg
(C-291)

(C-289) Fig. P. 4.8.2 : Representation of – –


 Group 2  B + C
standard POS on K-map
4.8.3 Simplification of Standard POS Form  Final minimized equation is as follows :

at le
using K-map :
Minimization procedure :
1. The given POS expression consists of maxterms.
(C-291(a))
ic w
Ex. 4.8.4 : Minimize the following standard POS
2. Corresponding to every maxterm enter a 0 in the K-
map. expression using K-map :
bl no

3. Enter 1’s in the remaining cells of K-map. Y = M (0, 2, 3, 5, 7)

4. Encircle/group 0’s instead of 1’s for carrying out the Soln. :


simplification.
5. Rules of simplification are exactly same as those used
Pu K

for the SOP form.


ch

Ex. 4.8.3 : Find the expression in the POS form for the
K-map shown in Fig. P. 4.8.3.
Te

(C-292) Fig. P. 4.8.4

 Minimized expression is as follows :


– – – –
Y = (A + C) (A + C) (B + C) …Ans.
(C-290) Fig. P. 4.8.3 : Given K-map
Ex. 4.8.5 : Find the expression for the output in the POS
Soln. : form for the K-map shown in Fig. P. 4.8.5.

Simplification :
– – –
Group 1  (A + B + C) (A + B + C)
– – –– – – –– –
= AA + AB + AC + AB + B B + BC + AC + B C + CC
–– – – – – –
But AA = A, BB= B , CC= 0, AB+AB= AB
– – – – ––
 Group 1 = A + AB + AC + B + BC + AC + B C
– – – ––
= A (1 + B) + A (C + C) + B (1 + C) + B C (C-293) Fig. P. 4.8.5 : Given K-map

(C-6358) Soln. :
 Minimized expression is as follows :


(C-293(a))

Ex. 4.8.6 : Write the expression for output in the POS


form for the K-map shown in Fig. P. 4.8.6.

(C-297) Fig. P. 4.8.8(a) : Given K-map

ns e
Soln. :

io dg
1. Figs. P. 4.8.8(b) and (c) shows two different but valid
ways of grouping 0’s.
2. The corresponding equations for the output in the POS
form are obtained after that.

Soln. :
at le (C-295) Fig. P. 4.8.6 3. Referring to the grouping shown in Fig. P. 4.8.8(b) we
get,
ic w
Expression for output Y = B + D
(C-6359)
Ex. 4.8.7 : For the K-map shown in Fig. P. 4.8.7 write the
bl no

expression for output in the POS form.  This shows that for the same K-map we can get
completely different answers and all of them are correct.
Pu K
ch
Te

(C-296) Fig. P. 4.8.7

Soln. : (b) One way of grouping


 The expression for output is,

(C-6361)

Note : Sometimes there is more than one correct way

of encircling the 0’s, and the answers obtained as a

result of all of them are correct. This is illustrated in

Ex. 4.8.8.

Ex. 4.8.8 : Group the 0s given in the K-map of the


Fig. P. 4.8.8(a) in different ways to obtain the
(c) Another way of grouping
expression for output in the POS form. (C-298) Fig. P. 4.8.8

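 The point made in Ex. 4.8.8 — that different valid groupings of the same 0s lead to different-looking but equally correct POS expressions — can be confirmed by comparing truth tables. The Python sketch below does this for two different POS covers of one three-variable function; the two expressions used here are illustrative stand-ins, not the exact ones of Ex. 4.8.8.

from itertools import product

# Two different but equally valid groupings of the same 0s (in the spirit of Ex. 4.8.8)
pos_1 = lambda A, B, C: (A | B) & ((1 - B) | C) & ((1 - A) | (1 - C))
pos_2 = lambda A, B, C: (A | C) & (B | (1 - C)) & ((1 - A) | (1 - B))

print(all(pos_1(*bits) == pos_2(*bits) for bits in product([0, 1], repeat=3)))  # True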

Ex. 4.8.9 : Solve the following equation using


corresponding minimization technique. Draw
the MSI design for the minimized output :

Z = f(A, B, C, D) = M (1, 3, 5, 6, 7, 10, 11)


+ d(2, 4) (May 12, 6 Marks)

Soln. :
Step 1 : Simplification using K-map :

ns e
(C-3228) Fig. P. 4.8.10

io dg
– –
 f(A, B, C, D) = (A + B ) (A + C + D) ...Ans.

Ex. 4.8.11 : Minimize the following function using K-map


and implement using basic logic gates :

at le Soln. :
f (A, B, C, D) =
+ d (3, 6, 7)
 m (0, 1, 2, 4, 8, 9, 12, 13)
May 14, 6 Marks)
ic w
Step 1 : K-map :
bl no

(C-3226) Fig. P. 4.8.9(a)


– – –
 Z = (A + D )  (A + B )  (B + C )

Step 2 : Realization using NOR gates :


Pu K

Taking double inversion of R.H.S


–––––––––––––––––––––––
–––––––––––––––––––––––
– – –
ch

Z = (A + D )  (A + B )  (B + C )
–––––––––––––––––––––––––
–––––– –––––– ––––––
– – –
Z = (A + D ) + (A + B ) + (B + C )
(C-4805) Fig. P. 4.8.11(a)
Te

– – – ––
 f (A, B, C, D) = AC + C D + A B

Step 2 : Implementation using basic gates :

(C-3227)Fig P. 4.8.9(b) : Implementation using NOR

Ex. 4.8.10 : Minimize the following four variable functions


using K-map. ‘d’ represents don’t care (C-4805) Fig. P. 4.8.11(b)
conditions : f(A, B, C, D) = M (4, 5, 6, 7, 8,
Ex. 4.8.12 : What is De Morgan’s theorem ? Solve the
12)  d(1, 2, 3). (Dec. 12, 5 Marks)
following using minimization technique :
Soln. :
z = f (A, B, C, D) = (1, 2, 3, 6, 8, 11, 14, 15).
 Required K-map is as follows :
Dec. 09, 10 Marks.


Soln. : Solve it yourself.

Ex. 4.8.13 : Solve the following equations using corresponding minimization techniques,
             also draw MSI design for the minimized output equation :
             Z = f (A, B, C, D) = (2, 7, 8, 10, 11, 13, 15)        (May 10, 12 Marks)
Soln. : Solve it yourself.

Ex. 4.8.14 : Solve the following equation using K-map minimization technique. Draw the
             diagram for the output : Z = f(A, B, C, D) = M (0, 1, 6, 7, 8, 9)
                                                                    (Dec. 11, 6 Marks)
Soln. : Solve it yourself.

Review Questions

Q. 1   Explain the following terms :
       1. Product term          2. Sum term
Q. 2   Explain the term SOP and POS related to Boolean function.
Q. 3   Convert the equation into standard POS form :
       ––
       Y = (A + B) (A + C) (B + C).
Q. 4   State the disadvantages of algebraic method of simplification.
Q. 5   Write the standard SOP equation for the truth table shown in Table 1.

                         Table 1
                      A   B   C   Y
                      0   0   0   0
                      0   0   1   1
                      0   1   0   0
                      0   1   1   0
                      1   0   0   1
                      1   0   1   0
                      1   1   0   0
                      1   1   1   1

Q. 6   For the same truth table write the standard POS expression.
Q. 7   Explain the K-map reduction technique.
Q. 8   Draw the structure of four input K-map.
Q. 9   Why is gray code followed for K-map ?
Q. 10  Define a redundant group.
Q. 11  State the importance of don't care terms in K-map.
Q. 12  Draw the structure of four variable K-map to represent the standard POS form.
Q. 13  Solve the following with K-maps :
       1. f (A, B, C) = m (0, 1, 3, 4, 5)
       2. f (A, B, C) = m (0, 1, 2, 3, 6, 7)
Q. 14  Explain the different methods used to simplify the Boolean function.



Unit 2

Chapter 5

Combinational Logic Design

Syllabus

Design using SSI chips : Code converters, Half adder, Full adder, Half subtractor, Full subtractor, n-bit
binary adder.
Introduction to MSI chips : Multiplexer (IC 74153), Demultiplexer (IC 74138), Decoder (74238), Encoder
(IC 74147), Binary adder (IC 7483).

Design using MSI chips : BCD adder & subtractor using IC 7483, Implementation of logic functions
using IC 74153 & 74138.
Case Study : Use of combinational logic design in 7 segment display interface.

Chapter Contents

5.1   Introduction to Combinational Circuits
5.2   Design of Combinational Logic using SSI chips
5.3   Binary Adders and Subtractors
5.4   The n-Bit Parallel Adder
5.5   n-bit Parallel Subtractor
5.6   BCD Addition
5.7   BCD Subtractor using MSI IC 7483
5.8   Magnitude Comparators
5.9   Multiplexer (Data Selector)
5.10  Types of Multiplexers
5.11  Study of Different Multiplexer ICs
5.12  Multiplexer Tree/Cascading of Multiplexer
5.13  Use of Multiplexers in Combinational Logic Design
5.14  Demultiplexers
5.15  Types of Demultiplexers
5.16  Demultiplexer Tree
5.17  Encoders
5.18  Priority Encoder
5.19  Decoder
5.20  Case Study : Combinational Logic Design of BCD to 7 Segment Display Controller


5.1 Introduction to Combinational Examples of combinational circuits :


Circuits :
 Following are some of the examples of combinational
Types of digital circuits : circuits :
 The digital systems in general are classified into two 1. Adders, subtractors. 2. Comparator.
categories namely : 3. Code converters. 4. Encoders, decoders.
1. Combinational logic circuits. 5. Multiplexers, demultiplexers.
2. Sequential logic circuits.
5.1.1 Analysis of a Combinational Circuit :

Combinational circuits :
 In the analysis problem, the logic diagram of a
 The combinational circuit is a digital system the output

combinational circuit is given to us.
of which at any instant of time, depends only on the
 We have to obtain the Boolean expression for its output
levels present at its input terminals.
or write a truth table or explain the operation of circuit.
 The combinational circuits do not use any memory.
Analysis Procedure :


Hence the previous state of input does not have any
effect on the present state of the circuit.

The sequence in which the inputs are being applied also



1.
The general procedure for the analysis is as follows :
Write down the Boolean function for the output of each
does not have any effect on the output of a input gate in the given circuit.

combinational circuit. 2. Obtain the Boolean expression at all other gates.



3. Write down the Boolean expression for the final output


 A combinational circuit is a logic circuit the output of
in terms of the input variables.
which depends only on the combination of the inputs.
Illustration :
 The output does not depend on the past value of inputs
or outputs.  Analyze the combinational circuit shown in Fig. 5.1.2.

 Hence combinational circuits do not require any


memory (to store the past values of inputs or outputs).

 The block diagram of a combinational circuit is shown in


Fig. 5.1.1.

 A combinational circuit can have a number of inputs



and a number of outputs. The circuit of Fig. 5.1.1 has “n”


inputs and “m” outputs.

(C-333) Fig. 5.1.2 : Given combinational circuit

Analysis :
Step 1 : Boolean expressions for the outputs of all the
(C-332) Fig. 5.1.1 : Block diagram of a combinational circuit input gates :

 A combinational circuit is made up of logic gates. These  Input gates are 1, 2, 3 and 4. The Boolean expressions
gates are connected suitably between the inputs and for their outputs are as follows :
outputs of the combinational circuit.
 A combinational circuit operates in three steps :
          T1 = AB          T3 = C'
          T2 = BC          T4 = B'
1. It accepts n-different inputs.
Step 2 : Boolean expressions for remaining gates :
2. The combination of gates operates on the inputs.

3. “m” different outputs are produced as per requirement.
          T5 = T1 + T2 = AB + BC
          T6 = F1 = T5 · T3 = (AB + BC) · C'


     = ABC' + BCC' = ABC'
T7 = F2 = T2 + T3 = BC + C'

Step 5 : Draw the logic diagram (combinational circuit).

Ex. 5.1.1 : A circuit has four inputs and two outputs. One
of the outputs is high when majority of inputs
Step 3 : Boolean expressions for final outputs :
are high. The second output is high only when
all inputs are of same type. Design the
combinational circuit.
F1 = T6 = ABC'
F2 = T7 = C' + BC
OR
Step 4 : Write the truth table :
Design a logic circuit which has three inputs A,

(C-8244) Table 5.1.1 B, C and gives a high output when majority of
inputs is high.

Soln. :
Step 1 : Assign symbols to input and output variables :
 Let the four inputs be A, B, C, D and the two outputs be

Y1 and Y2.
Step 2 : Write the truth table :
The truth table is as given in Table P. 5.1.1.
(C-8245) Table P. 5.1.1 : Truth table relating the
inputs and outputs

 Note that the outputs of all the gates (T1 to T5) are first
written in the truth table for various combinations of
inputs and then the values of F1 and F2 are entered

using the following equations,


F1 = T3 · T5  and  F2 = T2 + T3
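 The same tabulation can be produced mechanically. The short Python sketch below is an illustration only, assuming the gate equations written above; it evaluates every intermediate output T1 to T5 and the final outputs F1 and F2 for all combinations of A, B and C.

    from itertools import product

    print(" A B C | T1 T2 T3 T4 T5 | F1 F2")
    for A, B, C in product((0, 1), repeat=3):
        T1 = A & B          # AND gate 1
        T2 = B & C          # AND gate 2
        T3 = 1 - C          # NOT gate 3
        T4 = 1 - B          # NOT gate 4
        T5 = T1 | T2        # OR gate
        F1 = T5 & T3        # reduces to A B C'
        F2 = T2 | T3        # reduces to BC + C'
        print(f" {A} {B} {C} |  {T1}  {T2}  {T3}  {T4}  {T5} |  {F1}  {F2}")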

5.1.2 Design of Combinational Logic using


SSI Chips :
 SSI stands for Small Scale Integration. SSI chips include the basic gate
ICs. Hence this design procedure is used to implement
the designed circuit using gates.


 The steps involved in designing a combinational logic
are as follows :
Steps to be followed :

Step 1 : You will be given a problem.

Step 2 : Determine the number of inputs and outputs and  From the truth table we note the following things.
assign letter symbols to input and output  Y1 = 1 when number of 1 inputs is higher than the
variables. For example F1, F2 … for outputs and number of 0 inputs.
A, B, C. …. for the inputs.
 Y2 = 1 when A = B = C = D.
Step 3 : Prepare a truth table relating the inputs and
Step 3 : Write K-map for each output and get simplified
outputs. expression :
Step 4 : Write K-map for each output in terms of inputs and  K-maps for the two outputs and the corresponding
obtain the simplified Boolean expression for each simplified Boolean expressions are given in
output. Figs. P. 5.1.1(a) and (b).


5.2 Design of Combinational Logic


using SSI Chips :

 The combinational circuits to be designed using SSI


chips i.e. logic gates are as follows :
1. Code converters
2. Adders and subtractors

5.2.1 Code Converters :

 In this section we are going to convert one type of code
into another type.

5.2.1.1 BCD to Excess 3 Converter :

(C-334) Fig. P. 5.1.1(a) : K-map and simplification for Y1
SPPU : May 06, Dec. 12

University Questions.

Q. 1 Design and implement BCD to Excess-3 code


converter using logic gates. Starting with truth table
show K-maps and circuit diagram of your design.
(May 06, 8 Marks)

Q. 2 Design 4-bit BCD to excess-3 code converter. Use


logic gates as per your design and requirement.
(Dec. 12, 8 Marks)

Principle :

 We know that Excess-3 code can be derived from the



BCD code by adding 3 to each BCD number.


(C-334) Fig. P. 5.1.1(b) : K-map and simplification for Y2
Step 4 : Implement the logic diagram :
 The logic diagram using logic gates is as shown in Fig. P. 5.1.1(c).

 For example decimal 13 is represented as 0001 0011 in BCD.
 If we add 3 i.e. (0011 0011) then the corresponding Excess 3 code is 0100 0110.
Step 1 : Write the truth table relating BCD and
Excess 3 :
(C-8109) Table 5.2.1 : Truth table relating BCD and Excess-3
codes

(C-335) Fig. P. 5.1.1(c) : Logic diagram
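 Before building the circuit, the specification itself can be cross-checked by brute force. The Python sketch below is illustrative only; it confirms that the "majority" output agrees with the usual 3-out-of-4 sum-of-products form and that Y2 is 1 only when all four inputs are equal.

    from itertools import product

    for A, B, C, D in product((0, 1), repeat=4):
        ones = A + B + C + D
        Y1 = 1 if ones > 2 else 0            # more 1s than 0s
        Y2 = 1 if ones in (0, 4) else 0      # A = B = C = D
        Y1_sop = (A & B & C) | (A & B & D) | (A & C & D) | (B & C & D)
        assert Y1 == Y1_sop
    print("Y1 equals the 3-out-of-4 SOP; Y2 is 1 only for 0000 and 1111")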


Step 2 : Write K-map for each output and obtain


simplified expression :

 Refer Fig. 5.2.1(a) to (d) for the K-maps and

corresponding simplified equations.

 Don’t care conditions () have been entered for D3

D2D1D0 above 1001.

(C-366) Fig. 5.2.1(c) : K-map for E1

 E1 = D1'D0' + D1D0 = (D1 ⊙ D0)


(C-365) Fig. 5.2.1(a) : K-map for E3

 Simplified equation :

(C-366) Fig. 5.2.1(d) : K-map for E0


E3 = D3 + D2D0 + D2D1
 E3 = D3 + D2 (D0 + D1)

 E0 = D0'

Step 3 : Implement the BCD to Excess 3 code converter


using gates :

 The circuit diagram of BCD to Excess 3 code converter is



shown in Fig. 5.2.2.

(C-365) Fig. 5.2.1(b) : K-map for E2

 Simplified equation :
E2 = D2'D1 + D2'D0 + D2 D1'D0'
 E2 = D2'(D1 + D0) + D2 D1'D0'

(C-367) Fig. 5.2.2 : BCD to Excess - 3 code converter
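 The simplified equations can be verified against the defining rule "Excess-3 = BCD + 3". The small Python check below is illustrative only; it evaluates E3 to E0 for every valid BCD digit and compares the result with straight addition of 3.

    for n in range(10):
        D3, D2, D1, D0 = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
        E3 = D3 | (D2 & (D1 | D0))                          # D3 + D2(D1 + D0)
        E2 = ((1 - D2) & (D1 | D0)) | (D2 & (1 - D1) & (1 - D0))
        E1 = 1 - (D1 ^ D0)                                  # D1 XNOR D0
        E0 = 1 - D0                                         # D0'
        excess3 = (E3 << 3) | (E2 << 2) | (E1 << 1) | E0
        assert excess3 == n + 3
    print("E3..E0 reproduce BCD + 3 for digits 0 to 9")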


5.2.2 BCD to Gray Code Converter :

SPPU : Dec. 06.

University Questions.

Q. 1 Design and implement BCD to gray code converter


using logic gates. Starting with truth table show
K-maps and circuit diagram of your design.

(Dec. 06, 8 Marks)

Step 1 : Write the truth table relating BCD and gray

codes :
(C-371) Fig. 5.2.3(b) : K-map for G2

(C-8145) Table 5.2.2
G2 = D2 + D3


(C-372) Fig. 5.2.3(c) : K-map for G1


Step 2 : Write K-map for each output and get simplified
equation :
 G1 = D2'D1 + D2D1' = D2 ⊕ D1

(C-371) Fig. 5.2.3(a) : K-map for G3


(C-372) Fig. 5.2.3(d) : K-map for G0
 G3 = D3
 G0 = D1'D0 + D1D0' = D1 ⊕ D0

Step 3 : Realization using gates:

 The BCD to gray converter is shown in Fig. 5.2.4.


 Simplified equation : G3 = B3

(C-373) Fig. 5.2.4 : BCD to Gray code converter

5.2.3 Binary to Gray Code Converter :
SPPU : May 12.

University Questions. (C-374) Fig. 5.2.5(b) : K-map for G2
Q. 1 Design 4-bit binary to gray code converter. State the
     applications of gray code.                    (May 12, 8 Marks)
     In gray code only one bit changes at a time.
 Simplified equation : G2 = B3'B2 + B3B2' = B3 ⊕ B2

Step 1 : Write the truth table relating binary inputs and
gray outputs :
(C-8252) Table 5.2.3 : Truth table relating binary and gray
codes

(C-375) Fig. 5.2.5(c) : K-map for G1



 Simplified equation : G1 = B2'B1 + B2B1' = B2 ⊕ B1

Step 2 : Write K-map for each gray output and obtain


the simplified expression :

(C-375) Fig. 5.2.5(d) : K-map for G0

 Simplified equation :
G0 = B1'B0 + B1B0' = B1 ⊕ B0

Step 3 : Realize Binary to gray code converter using


gates :

(C-374) Fig. 5.2.5(a) : K-map for G3
 Binary to gray code converter is shown in Fig. 5.2.5(e).


(C-376) Fig. 5.2.5(e) : Binary to gray code converter
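 The four EX-OR equations above amount to shifting the binary word right by one place and EX-ORing it with itself. A one-line Python model (illustrative only) makes this easy to see:

    def binary_to_gray(b):
        return b ^ (b >> 1)          # G3 = B3, Gi = B(i+1) xor Bi

    for b in range(16):
        print(f"binary {b:04b} -> gray {binary_to_gray(b):04b}")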

5.2.4 Gray to BCD Converter :

Ex. 5.2.1 : Convert 4-bit gray code into corresponding
BCD code. Show truth table and MSI circuit.

Soln. :
(May 10, 6 Marks)
ic w
Step 1 : Write the truth table relating gray and BCD
codes : (C-8279)

(C-1709) Fig. P. 5.2.1



Step 3 : Realization :
Step 2 : Write K-maps for each output and get  The gray to BCD converter is shown in Fig. P. 5.2.1(a).

simplified equation :

(C-1710) Fig. P. 5.2.1(a)

Ex. 5.2.2 : Design and explain in detail 4-bit gray code to


5-bit BCD code conversion. For this design
use K-map reduction and MSI circuit using
basic gates. May 11, 16 Marks.


Soln. :
Step 1 : Write the truth table of the 4-bit gray to 5-bit BCD code converter :

(C-8280) Table P. 5.2.2


Step 2 : Write the K-maps and simplify :



(C-1964) Fig. P. 5.2.2 : K-maps


Simplified expression for B0 :
B0 = G3'G2'G1'G0 + G3'G2'G1G0' + G3'G2G1'G0'
       + G3'G2G1G0 + G3G2'G1'G0' + G3G2'G1G0
       + G3G2G1'G0 + G3G2G1G0'

(C-6178)

 B0 = G1'G0' (G3 ⊕ G2) + G1'G0 (G3 ⊕ G2)'
        + G1G0 (G3 ⊕ G2) + G1G0' (G3 ⊕ G2)'

 B0 = (G3 ⊕ G2) (G1'G0' + G1G0) + (G3 ⊕ G2)' (G1'G0 + G1G0')

 B0 = (G3 ⊕ G2) (G1 ⊕ G0)' + (G3 ⊕ G2)' (G1 ⊕ G0)

(C-6179)

Let X = G3 ⊕ G2  and  Y = G1 ⊕ G0

 B0 = X Y' + X' Y = X ⊕ Y

 Substituting for X and Y we get,

B0 = (G3 ⊕ G2) ⊕ (G1 ⊕ G0) = G3 ⊕ G2 ⊕ G1 ⊕ G0

Soln. :
Step 1 : Write the truth table :
Step 2 : K-map and simplification :



(C-5103) Fig. P. 5.2.3(a)

Step 3 : Logic gate diagram :

(C-1965) Fig. P. 5.2.2(a) : Gray to 5 bit BCD code converter
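 The conclusion B0 = G3 ⊕ G2 ⊕ G1 ⊕ G0 can be confirmed numerically. The Python sketch below is illustrative only; it converts each 4-bit Gray word back to binary, forms the 5-bit BCD output, and checks its least significant bit against the EX-OR of all four Gray bits.

    def gray_to_binary(g):
        b = 0
        while g:                      # cumulative EX-OR from the MSB down
            b ^= g
            g >>= 1
        return b

    for g in range(16):
        n = gray_to_binary(g)
        bcd = ((n // 10) << 4) | (n % 10)       # tens bit followed by units digit
        g3, g2, g1, g0 = (g >> 3) & 1, (g >> 2) & 1, (g >> 1) & 1, g & 1
        assert (bcd & 1) == (g3 ^ g2 ^ g1 ^ g0)
    print("BCD LSB equals G3 xor G2 xor G1 xor G0 for all 16 Gray words")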


(C-5104) Fig. P. 5.2.3(b)
5.2.5 Excess 3 to BCD Converter :

Ex. 5.2.3 : Design a 3-bit excess 3 to 3-bit BCD code converter using logic gates.
                                                                    (May 15, 6 Marks)

5.3 Binary Adders and Subtractors :
 Addition of two binary digits is the most basic operation performed by the digital computers.


 The rules of binary addition are summarised in


Table 5.3.1(A). A and B are the single bit digits to be
added whereas C and S are the carry and sum outputs
respectively.

Table 5.3.1(A) : Rules of binary addition

(C-344) Fig. 5.3.1(b) : Truth table

Design using K-maps :

 K-maps for carry and sum outputs are as shown in

Figs. 5.3.2(a) and (b).

(C-2588) Fig. 5.3.1(A) : Block diagram of adder

5.3.1


at leTypes of Binary Adders :

In this section we are going to learn digital circuits


ic w
which are used to “add” two “binary” numbers.
(a) K – map for sum output (b) K – map for carry output
 The binary adders are of two types :
bl no

(C-345) Fig. 5.3.2


1. Half adder and 2. Full adder
 Boolean expressions for the sum (S) and carry (C) output
5.3.2 Half Adder : SPPU : May 07. are obtained from the K-maps as follows :
Pu K

University Questions.
…(5.3.1)
Q. 1 What do you mean by half adder ?
ch

(C-346)
(May 07, 2 Marks)
 The disadvantage of half adder is that addition of three
Definition and block diagram :
bits is not possible to perform.
 Half adder is a combinational logic circuit with two
Te

Logic diagram using gates :


inputs and two outputs, which carries out addition of
 The half adder circuit is as shown in Fig. 5.3.3.
two “single” bit numbers.

 This circuit has two outputs namely “carry” and “sum”.


The block diagram of half adder is as shown in
Fig. 5.3.1(a).

(C-346) Fig. 5.3.3 : Half adder circuit
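 In software terms the half adder is just one EX-OR and one AND (S = A ⊕ B, C = A·B). The sketch below, in Python and purely illustrative, prints its truth table:

    def half_adder(a, b):
        return a ^ b, a & b              # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"A={a} B={b} -> Sum={s} Carry={c}")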

Disadvantage of half adder :

 The principle of adding two 2-bit numbers A and B is as


(C-344) Fig. 5.3.1(a) : Block diagram shown in Fig. 5.3.4(a).

 The half adder circuit is supposed to add two single bit Let Number A = A1 A0
binary numbers A and B. And Number B = B1 B0

 Therefore the truth table of a half adder is as shown in  Then the addition should take place as shown in
Fig. 5.3.1(b). Fig. 5.3.4(a).




The K-maps :

 The K-maps for the sum (S) and carry out (Co) outputs
and the corresponding Boolean expressions are as

(C-347) Fig. 5.3.4(a) : Two bit binary addition shown in Fig. 5.3.6.

 A half adder can add A0 and B0 to produce S0 and C0.  Note that the K-maps have been written from the truth
But the addition of next bits requires the addition of A1,
table of Fig. 5.3.5(b).
B1 and C0.
For the sum output :
 The addition of three bits is not possible to perform by

ns e
using a half adder. Hence we cannot use a half adder in
practice.

io dg
5.3.3 Full Adder : SPPU : May 07.

University Questions.
Q. 1 What do you mean by full adder ?

at le
Definition :
(May 07, 6 Marks)
(C-351) Fig. 5.3.6(a) : K-map for sum output
ic w
 Full adder is a three input two output combinational Expression for sum output :
logic circuit which can add three single bits applied at
its input to produce Sum and Carry outputs.
bl no

 It can add two one-bit numbers A and B, and carry Cin.


The full adder is a three input and two output (C-6375)
combinational circuit.
–– –– –– –– ––
 To overcome the drawback of Half Adder circuit, a 3
Pu K

 S = Cin ( AB + A B ) + Cin (AB + A B)


single bit adder circuit called Full Adder is developed.
–– ––
 It can add two one-bit numbers A and B, and carry Cin. Let X = AB + A B
ch

Block diagram and Truth Table : –– –– –– ––


 S = Cin X + Cin X = Cin  X = Cin  (AB + A B)
 The block diagram of a full adder is as shown in
–– ––
Fig. 5.3.5(a) and its truth table is given in Fig. 5.3.5(b). But AB + A B = A  B
Te

 S = Cin  A  B …(5.3.2)

For carry output :

(C-350) Fig. 5.3.5(a) : Block diagram

(C-352) Fig. 5.3.6(b) : K-map for carry output

Expression for carry output :

Co = AB + ACin + BCin …(5.3.3)
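 Equations (5.3.2) and (5.3.3) can be exercised directly in a few lines of Python (illustrative only); the assertion checks that sum and carry together always equal the arithmetic total of the three input bits.

    def full_adder(a, b, cin):
        s  = a ^ b ^ cin                                  # Equation (5.3.2)
        co = (a & b) | (a & cin) | (b & cin)              # Equation (5.3.3)
        return s, co

    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, co = full_adder(a, b, cin)
                assert 2 * co + s == a + b + cin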

 We can use Equations (5.3.2) and (5.3.3) to draw the


(b) Truth table
logic diagram of a full adder as shown in Fig. 5.3.6(c).
(C-350) Fig. 5.3.5 : Full adder




S = (A  B)  Cin = A  B  Cin
Logic diagram for full adder :
 This expression is same as that obtained for the full
adder. Thus the sum output has been successfully
implemented by the circuit shown in Fig. 5.3.8.
 Now write the expression for carry output Co as,
Co = (A  B) Cin + AB
–– –– –– ––
Co = (AB + AB) Cin + AB = ABCin + A BCin + AB
–– ––
= ABCin + A BCin + AB (1 + Cin)

ns e
–– ––

(C-353) Fig. 5.3.6(c) : Full adder circuit = ABCin + A BCin + AB + ABCin

io dg
–– ––
5.3.4 Full Adder using Half Adder : = BCin (A + A) + ABCin + AB
––
SPPU : May 07.
= BCin + A BCin + AB
University Questions. ––

at le
Q. 1 How will you implement full adder using half
adder ? Explain with circuit diagram.
(May 07, 6 Marks)
= BCin + A BCin + AB (1 + Cin)
––
= BCin + A BCin + AB + ABCin
––
ic w
= BCin + AB + ACin ( B + B)
 The full adder circuit can be constructed using two half  Co = BCin + AB + ACin …Proved.
adders as shown in Fig. 5.3.7 and the detail circuit is
bl no

 This expression is same as that for a full adder. Thus we


shown in Fig. 5.3.8.
have proved that circuit shown in Fig. 5.3.8 really
behaves like a full adder.

5.3.5 Applications of Full Adder :


Pu K

 The full adder acts as the basic building block of the 4


ch

bit/8 bit binary/BCD adder ICs such as 7483.


5.3.6 Binary Subtractors :

(C-354) Fig. 5.3.7 : Full adder using half adders  The rules of binary subtraction are as follows :
Te

0–0 =0 0–1 = 1 with borrow 1


 A full adder can be implemented using two half adders 1–0 =1 1–1 = 0
and an OR gate as shown in Fig. 5.3.8.
 Note that in the second case (0 – 1) it is necessary to
borrow a 1.
Types of binary subtractors :

 The types of binary subtractors are :

1. Half subtractor, 2. Full subtractor.

5.3.7 Half Subtractor :

Definition :

(C-355) Fig. 5.3.8 : Full adder using two half adders  Half subtractor is a combinational circuit with two inputs
and two outputs (difference and borrow).
 Now let us prove that this circuit acts as a full adder.
Proof :  It produces the difference between the two binary bits

 Refer Fig. 5.3.8 and write the expression for sum output at the input and also produces an output (borrow) to
as, indicate if a 1 has been borrowed.




 But while performing the subtraction, it does not take

(C-3514) into account the borrow of the lower significant stage.

 In the subtraction (A – B), A is called as minuend bit 5.3.8 Full Subtractor :


and B is called as subtrahend bit.
Truth table : Definition and block diagram :

 Truth table showing the outputs of a half subtractor for  The full subtractor is a combinational circuit with three
all the possible combinations of input are shown in inputs A, B and Bin and two outputs D and Bo.
Table 5.3.1.

ns e
 A is the minuend, B is subtrahend, Bin is the borrow
(C-8058) Table 5.3.1 : Truth table for half subtractor

io dg
produced by the previous stage, D is the difference
output and Bo is the borrow output.

 The disadvantage of a half subtractor is overcome if we

at le 
use the full subtractor.

Fig. 5.3.10 shows the symbol of a full subtractor.


ic w
K-maps for difference and borrow outputs :
bl no

 The K-maps for the two outputs of a half subtractor are


as shown in Fig. 5.3.9.
Pu K

(C-2591) Fig. 5.3.10 : Symbol of a full subtractor


ch

Truth table :

 The truth table for full subtractor is shown in


(a) K-map and simplification (b) K-map and simplification
for difference output for the borrow output Table 5.3.2.
Te

(C-356) Fig. 5.3.9 (C-8059) Table 5.3.2 : Truth table for a full subtractor
–– ––
 Difference D = AB + A B = A  B
––
 Borrow Bo = AB
Logic diagram :

 The logic diagram using these two Boolean expressions


is as shown in Fig. 5.3.9(c).

(C-357) Fig. 5.3.9(c) : Half subtractor circuit


K-maps and simplifications :
Disadvantage of the half subtractor :

 Half subtractor can only perform the subtraction of two  K-maps for D and Bo outputs are shown in Figs. 5.3.11(a)

binary bits. and (b).




For difference output :

(C-360) Fig. 5.3.11(a) : K-map for D

ns e
–– –– –– –– –– ––
 D = A B Bin + ABBin + A B Bin + ABBin

io dg
Simplification for difference output : (C-361) Fig. 5.3.12 : Logic diagram for a full subtractor

 From Fig. 5.3.11(a), 5.3.9 Full Subtractor using Half Subtractors :


.SPPU : Dec. 08.

at le (C-6376)
University Questions
Q. 1 With the help of a circuit diagram explain the full
ic w
–– subtractor using half subtractor. (Dec. 08, 8 Marks)
 D = Bin ( A  B ) + Bin (A  B)
Logic diagram :
Let A  B = C,
bl no

 Fig. 5.3.13 shows the implementation of a full subtractor


–– ––
 D = Bin C + BinC = Bin  C using two half subtractors and an OR gate.

 D = Bin  A  B …(5.3.4)
Pu K

For borrow output :


ch
Te

(C-362) Fig. 5.3.13 : Full subtractor using half subtractors

 Let us prove that the circuit shown in Fig. 5.3.15 really

(C-360) Fig. 5.3.11(b) : K-map for Bo


operates as a full subtractor.

–– –– Proof :
 Bo = ABin+ AB + BBin
 Refer Fig. 5.3.13 to write the expression for difference
Simplification for borrow output : output D as,

 From Fig. 5.3.13(b), D = (A  B)  Bin = A  B  Bin


–– ––
Bo = ABin + AB + BBin …(5.3.5)  This is same as the expression for D output of a full
subtractor.
 No further simplification is possible.
 Now write the expression for Borrow output Bout.
Logic diagram for full subtractor : –– –– –– ––
Bout = ( A  B ) Bin + A B = ( AB + A B ) Bin + A B
 Logic diagram for the full subtractor is shown in –– –– –– –– –– ––

Fig. 5.3.12. This has been drawn by using the Boolean = (A B + AB) Bin + A B = A B Bin + ABBin + A B
–– –– ––
equations (5.3.4) and (5.3.5).



–– –– –– ––
Block diagram :
= A B Bin + ABBin + A B + A B Bin
–– –– –– ––  The block diagram of a four bit parallel adder using full
= A B Bin + BBin (A + A) + AB adders is shown in Fig. 5.4.2.
–– –– –– ––
= A B Bin + B Bin + A B …since (A + A) = 1
–– –– ––
= A B Bin + B Bin + A B (1 + Bin)
–– –– –– ––
= A B Bin + B Bin + A B + A B Bin
–– –– ––
= A Bin ( B + B) + BBin + A B

ns e
–– ––
= A Bin + BBin + A B …Proved. (C-386) Fig. 5.4.2 : Block diagram of a four-bit parallel adder

io dg
 Note that this expression is exactly same as that for Bo  Let the two four bit words that are to be added be A
of the full subtractor. and B and be denoted as follows :
 Thus the circuit shown in Fig. 5.3.13 acts as a full A = A3 A2 A1 A0,
subtractor.

5.4
The n-Bit Parallel Adder : 
B = B3 B2 B1 B0.
A0 and B0 represent the LSBs of the four bit words A and
B. Hence full adder-0 is the lowest stage. Hence its Cin
ic w
 The full adder is capable of adding only two single digit has been connected to 0 permanently.
binary numbers along with a carry input.
 The rest of connections are exactly same as those done
bl no

 But in practice we need to add binary numbers which for the n-bit parallel adder.
are much larger in size than just one bit.
 The four-bit parallel adder is a very common logic
 The two binary numbers to be added could be 4 bit, 8 circuit. It is normally shown by a block diagram as
bit, 16 bit long. shown in Fig. 5.4.3.
Pu K

 In general we assume that both the numbers are n bit


long.
ch

 To add two n-bit binary numbers we need to use the


n-bit parallel adder shown in Fig. 5.4.1.
 It uses a number of full adders which are connected in
cascade.
Te

 The carry output of the previous full adder is connected


to the carry input of the next full adder as shown in
Fig. 5.4.1.
(C-387) Fig. 5.4.3 : Block diagram of 4-bit parallel adder

 It has two 4 bit inputs A3....A0 and B3 ......B0, a carry input


and carry output and 4-bit sum output S3 S2 S1 S0.

5.4.2 Propagation Delay in Parallel Adder :


 In parallel adder carry out of the previous stage is
(C-385) Fig. 5.4.1 : Block diagram of n-bit parallel adder connected to carry in of the next stage.
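 The cascade of Fig. 5.4.1 is easy to mirror in software: each stage is the one-bit full adder and the carry simply ripples along the chain. The Python sketch below is only an illustration (bit lists are taken LSB first).

    def full_adder(a, b, cin):
        return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

    def ripple_carry_add(a_bits, b_bits):
        carry, sum_bits = 0, []
        for a, b in zip(a_bits, b_bits):       # stage 0 (LSB) first
            s, carry = full_adder(a, b, carry)
            sum_bits.append(s)
        return sum_bits, carry

    # 1011 (11) + 0110 (6) = 1 0001 (17)
    print(ripple_carry_add([1, 1, 0, 1], [0, 1, 1, 0]))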
5.4.1 A Four Bit Parallel Adder Using Full  Therefore the carry is said to be propagated like ripple
Adders : SPPU : May 06, Dec. 06.
from the LSB stage to the MSB stage. This phenomenon
University Questions. is called ripple carry propagation.
Q. 1 Draw and explain 4-bit full adder.  Due to this ripple carry propagation time delay is
(May 06, Dec. 06, 4 Marks) introduced in the addition process. This time delay is
called as the propagation delay.




 Consider the addition of the two 4-bit numbers. Block diagram :

 The problem of propagation delay can be eliminated by


using this addition technique.

(C-6475)
 The look-ahead-carry addition will therefore speed up
the addition process.
 From this addition it is evident that the addition of 1’s in
 The adder with look ahead carry needs additional
the second position produces a carry which is added to
hardware but the speed of that adder is independent of
the bits in the third position and the carry produced in
the number of bits.

ns e
the third position is added to the bits in the fourth
 Consider the block diagram of full adder Fig. 5.4.4(a)
position.

io dg
and its AND-OR-EXOR realization shown in Fig. 5.4.4(b).
 Therefore the sum bit produced in the MSB position of
the result depends on the carry generated due to
additions in the preceding stages.


at le
The real problem is created due to this.

If the propagation delay of each full adder is say 20 nS,


ic w
(a) Block diagram of ith full adder
then sum S3 will reach its correct value after
3  20 = 60 nS from the instant when LSB carry is
bl no

generated.

 Hence the total time required to perform the addition


will be 4  20 = 80 nS.
Pu K

 The problem of propagation delay becomes severe as


the number of bits increases.
(b) Realization using AND-OR-EXOR gates
ch

 For example if n = 16 then the total time required for (C-389) Fig. 5.4.4

performing the addition is 16  20 = 320 nS.  Refer Fig. 5.4.4(b) to write,


5.4.3 Look Ahead – Carry Adder : Pi = Ai  Bi …(5.4.1)
Te

SPPU : May 06, Dec. 07, May 14. and Gi = Ai Bi …(5.4.2)

University Questions. Also Si = Pi  Ci – 1 = Ai  Bi  Ci – 1 …(5.4.3)

Q. 1 Draw and explain 4-bit full adder. How will you and Ci = Gi + PiCi – 1 …(5.4.4)
generate look ahead carry for your circuit.  The carry output Gi of the first half adder is equal to 1 if
(May 06, 6 Marks) Ai = Bi = 1 and a carry is generated at the ith stage of
Q. 2 What do you mean by carry generate and carry the parallel adder. That means Ci = 1.
propagate ? Explain with suitable equations and  This variable Gi is known as Carry Generate and its
block diagram. Write an equation for C3 using carry
value does not depend on the input carry i.e. Ci – 1.
generate and carry propagate. Simplify this
 The variable Pi is called as Carry Propagate because
equation to get minimum gate delay. How many
gate delays are needed to generate C3 with this this term is associated with the propagation of carry

principle ? Explain your calculations. from Ci – 1 to Ci. Now consider Equation (5.4.4)
i.e. Ci = Gi + Pi Ci – 1.
(Dec. 07, 10 Marks)
Q. 3 Draw and explain the loop ahead carry generator.  Using this equation, we can write the expression for the

(May 14, 6 Marks) carry output of each stage in a 4 bit parallel adder as
follows :




Pi = Ai  Bi

Stage expression for carry output

0 C0 = G0 + P0 C– 1 …(5.4.5)

1 C1 = G1 + P1C0

= G1 + P1 (G0 + P0 C–1)

 C1 = G1 + P1G0 + P0P1C–1 …(5.4.6)

ns e
Stage expression for carry output

io dg
2 C2 = G2 + P2C1 = G2 + P2 ( G1 + P1G0 + P0P1C–1)

 C2 = G2 + P2G1 + P2P1G0 + P2P1P0C–1 …(5.4.7)

3 C3 = G3 + P3G2 + P3P2G1 + P3P2P1G0


+ P3P2P1P0C–1                                               ...(5.4.8)

In the expressions stated above, the variable involved


ic w
are, G0, G1, G2, G3, P0, P1, P2, P3 and C–1.
bl no

 Out of them the G variables are generated from the A

and B inputs using AND gates (as illustrated in

Equation (5.4.2).
(C-390) Fig. 5.4.5 : Logic diagram of the look
Pu K

 And the P variables are obtained again directly from A


ahead carry generator
and B inputs using EX-OR gates (as illustrated in
5.4.4 Four Bit Fast Adder with Look-Ahead
ch

Equation (5.4.1). Carry :


 If the G, P and C–1 are at a time available, then it is Block diagram :

possible to produce the carry outputs C0, C1, C2 and C3  The block diagram of a four bit parallel adder using the
Te

look-ahead carry generator is shown in Fig. 5.4.6.


by using 2-level realization (AND-OR or NAND-NAND)

etc.

 The advantage of generating the carry outputs using

this method is that the propagation delay in this process

corresponds to the propagation delay of only two gates.

 These carry outputs are then connected to the carry

inputs of the succeeding stages.

 This eliminates the problem of carry getting propagated

like ripples.

Logic diagram :

 The logic circuit of a look ahead carry generator is

shown in Fig. 5.4.5 and it is based on Equations (5.4.1)

through (5.4.8). (C-391) Fig. 5.4.6 : A 4-bit parallel adder
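 The expanded carry equations (5.4.5) to (5.4.8) can be written out literally, as in the illustrative Python sketch below; every carry is computed from the G and P variables and C–1 alone, i.e. in two gate levels, instead of waiting for the previous stage.

    def cla_carries(a_bits, b_bits, c_in):
        g = [a & b for a, b in zip(a_bits, b_bits)]    # Gi = Ai Bi
        p = [a ^ b for a, b in zip(a_bits, b_bits)]    # Pi = Ai xor Bi
        c0 = g[0] | (p[0] & c_in)
        c1 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c_in)
        c2 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
              | (p[2] & p[1] & p[0] & c_in))
        c3 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
              | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c_in))
        return [c0, c1, c2, c3]

    # Sum bits then follow as Si = Pi xor C(i-1), with C(-1) = c_in for stage 0.
    print(cla_carries([1, 0, 1, 1], [1, 1, 0, 1], 0))    # 13 + 11, LSB first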




 The Pi and Gi variables (i.e. P0, P1, ....., P3 and G0, G1 .....G3)  Cin 0 is the input carry and Cout 3 represents the output
are obtained from the inputs Ai and Bi (i.e. A3 A2 A1 A0 carry. S3, S2, S1, S0 represent the sum outputs with S3
and B3 B2 B1 B0) by using the EX-OR and AND gates MSB.
respectively.
Pin diagram :
 The carry outputs C0 through C3 are produced
simultaneously by the look ahead carry generator, as  The pin diagram of IC 7483 is shown in Fig. 5.4.8.
explained earlier.  This IC adds the two four bit words A and B and the bit
 These carry outputs and Pi variables are EX-ORed to at Cin 0 and produces a four bit sum output along with
produce the sum outputs S0 through S3. carry output at Cout 3.

ns e
 For example S3 is produced by Ex-ORing P3 and C2, then
S2 is produced by Ex-ORing P2 and C1 and so on.

io dg
5.4.5 MSI Binary Adder IC 74 LS 83 / 74 LS
283 : SPPU : Dec. 10, Dec. 11.

University Questions.

at le
Q. 1

Q. 2
Explain for IC 74LSXX various characteristics in
brief. (Dec. 10, 4 Marks)
What is the use of 7483 chip ? (Dec. 11, 4 Marks)
ic w
Block diagram :
 The most common binary parallel adder in the (C-392) Fig. 5.4.8 : Pin diagram of IC – 7483
bl no

integrated circuit form is IC 74 LS 83 / 74 LS 283. This is


an MSI( EDIUM Scale Integration) IC. 5.4.6 Four Bit Binary Adder using IC 7483 :
 It is a 4-Bit parallel adder, which consists of four Block diagram :
interconnected full adders alongwith the look-ahead
Pu K

 A four bit binary adder is shown in Fig. 5.4.9. Note that


carry circuit.
the carry input Cin0 has been connected to ground.
 The IC 7483 and 74283 are TTL MSI (Medium Scale
So at the outputs we get the addition of two four bit
ch

Integration) for 4-bit parallel adders and both of them 


have the same pin configuration. numbers A and B.

Functional symbol :

 Fig. 5.4.7 shows the functional symbol of IC 74 LS 283.


Te

A3 A2 A1 A0 is a four bit word A and B3 B2 B1 B0 is


another word B.

(C-393) Fig. 5.4.9 : 4 bit binary adder using IC 7483

5.4.7 Cascading of Adders :


 If we want to add two 8-bit numbers using the 4-bit
parallel adder 74283, then we have to cascade two such
(C-392) Fig. 5.4.7 : Functional symbol for 74 LS 283 four bit adders.

 Both these words are applied at the inputs of the I.C.  Fig. 5.4.10 shows an 8-bit adder using two 4-bit adders.
7483/74 LS 283. Similarly it is possible to connect a number of adders to
make an n-bit adder.




 A7 – A0 and B7 – B0 are the two eight bit numbers to be Description :


added.  The number to be subtracted (B) is first passed through
 Adder-1 in Fig. 5.4.10 adds the four LSB bits of the two inverters to obtain its 1’s complement.
numbers i.e. A3 – A0 and B3 – B0.
 One inverter per bit of word B is used so that all the bits
 The carry input of first adder has been connected to of B get inverted.
ground (logic 0).
 Then 1 is added to 1’s complement of B, by making
Cin = 1. Thus we obtain the 2’s complement of B.

 The 4-bit adder then adds A and 2’s complement of B to

ns e
produce the subtraction at its sum outputs S3 S2 S1 S0.

io dg
 The word S3 S2 S1 S0 represent the result of binary
subtraction (A – B) and carry output Cout represents the
polarity of the result.
(C-394) Fig. 5.4.10 : 8-bit addition using 4-bit adders
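 Functionally, the cascading of Fig. 5.4.10 is two 4-bit additions with the low-order carry passed along, as the illustrative Python sketch below shows.

    def add4(a, b, cin):
        total = a + b + cin
        return total & 0xF, total >> 4        # (4-bit sum, carry out)

    def add8(a, b):
        lo, c = add4(a & 0xF, b & 0xF, 0)     # adder-1 : four LSBs, Cin = 0
        hi, cout = add4(a >> 4, b >> 4, c)    # adder-2 : four MSBs plus carry
        return (hi << 4) | lo, cout

    print(add8(0b10110101, 0b01001110))       # 181 + 78 = 259 -> (3, carry 1)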
 If A > B then Cout = 0 and the result is in true binary

at le
The Cout 3 i.e. the carry output of adder-1 is connected to
Cin 0 input of adder-2. The second adder adds this carry
form but if A < B then Cout = 1 and the result is negative
and in the 2’s complement form.
ic w
and the four MSB bits of the two numbers.
5.5.2 4-Bit Binary Parallel Adder / Subtractor
 Cout 7 of adder-2 acts as the final output carry and the Using IC 7483 :
bl no

sum output is from S7 through S0.


Block diagram :
5.5 n-bit Parallel Subtractor :  The addition or subtraction of two 4-bit binary numbers

 The subtraction can be carried out by taking the 1’s or can be obtained using the same circuit shown in
Pu K

Fig. 5.5.2.
2’s complement of the number to be subtracted.
 It makes use of the MSI 4 bit adder IC 7483 and it is
 For example we can perform the subtraction (A – B) by
ch

called as an adder/subtractor circuit.


adding either 1’s complement or 2’s complement of B
 The operation performed by this circuit (addition or
to A. That means we can use a binary adder to perform
subtraction) depends on the state of the mode of select
the binary subtraction.
input i.e. M in Fig. 5.5.2.
Te

5.5.1 4 Bit Parallel Subtractor using IC7483 :


Block diagram :

 A 4-bit parallel subtractor using a 4-bit parallel adder IC


7483 is shown in Fig. 5.5.1.

(C-396) Fig. 5.5.2 : 4-bit binary parallel adder/subtractor

 Number B is applied to the adder through four EX-OR


gates. One input of each EX-OR gate is connected to
(C-395)Fig. 5.5.1 : 4-bit parallel binary
subtractor using 2’s complement the mode-select input (M).




Operation as Adder (M = 0) : Case 1 : Sum is equal to or less than 9 and carry is 0.


 For operation as an adder, the mode select (M) input is Case 2 : Sum is greater than 9 and carry is 0.
connected to ground,
Case 3 : Sum is less than or equal to 9 but carry is 1.
 M = 0.
 The BCD addition is to be carried out in an identical
 Since M = 0, the output of the EX-OR gates will be the
manner as normal binary addition.
same number B which was applied at their inputs. This is
because 0  0 = 0 and 0  1 = 1. Case 1 : Sum equal to or less than 9 with carry 0
 The addition of (2)10 and (6)10 in BCD is shown in
 Hence B3, B2, B1 and B0 will pass unchanged through the

ns e
EX-OR gates. The carry input Cin is connected to M, Fig. 5.6.1.

 Cin = 0.

io dg
 Therefore the adder adds A + B + Cin = A + B since
Cin = 0. Thus with M = 0 addition of A and B will take
place.


at le
Operation as Subtractor (M = 1) :

For operation as a subtractor, the mode select (M) input


ic w
is connected to VCC,

 M = 1.
bl no

 Since M = 1, one input of each EX-OR gate is now 1. (C-76) Fig. 5.6.1 : Illustration of case 1 in BCD addition
Hence each EX-OR gate acts as an inverter. This is
Case 2 : Sum greater than 9 but carry = 0
because 1  0 = 1 and 1  1 = 0.
 In this case the sum of the two BCD numbers is greater
Pu K

 Thus each bit of word B is inverted by the EX-OR


inverters. Thus we get the 1’s complement of number B than 9 that means it is an invalid BCD number. But the

at the output of EX-OR gates. final carry is 0.


ch

 The carry input terminal Cin is connected to M,  So we have to correct the sum by adding decimal 6 or
 Cin = 1 BCD 0110 to it. After doing this we get the correct BCD
Te

 The inverted number B adds with Cin = 1 to give the 2’s sum.
complement of B. Hence the adder will add A with the  Case 2 of BCD addition is illustrated in Fig. 5.6.2.
2’s complement of B and the result is actually the
subtraction A – B.

 If Cout = 0 then the subtraction (A – B) is positive and in


true form. But if Cout = 1 then the subtraction (A – B) is
negative and in the 2’s complement form.

 Thus with M = 1, this circuit works as a 2’s complement


subtractor.
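 The role of the mode input M can be summarised in a few lines of Python (an illustration only): the B word is EX-ORed bit-wise with M and M itself is fed to Cin, so M = 0 adds while M = 1 adds the 2's complement of B.

    def add_sub_4bit(a, b, m):
        b_xor = b ^ (0b1111 if m else 0b0000)     # controlled inversion of B
        total = a + b_xor + m                     # IC 7483 adds A + B(') + Cin
        return total & 0b1111, (total >> 4) & 1   # (result, carry out)

    print(add_sub_4bit(0b1001, 0b0011, 0))        # 9 + 3 -> (12, 0)
    print(add_sub_4bit(0b1001, 0b0011, 1))        # 9 - 3 -> (6, 1)  carry 1 = positive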

5.6 BCD Addition :


(C-77) Fig. 5.6.2 : Illustration of case 2 in BCD addition
 In BCD addition we have to deal with three different
Case 3 : Sum less than or equal to 9 but carry = 1
situations. Assume that two 4 bit BCD numbers A and B
are being added. Then the three cases to be considered  Here the sum of two BCD numbers is less than or equal
are, to 9 i.e. it is a valid BCD number, and final carry = 1.




Soln. :
 A nonzero carry indicates that the answer is wrong and
(79)BCD + (16)BCD :
needs correction.
 Add (6)10 or (0110) BCD to the sum to correct the

answer.

 Case 3 of BCD addition is illustrated in Fig. 5.6.3.

(C-5229)

ns e
Add (6)10 to invalid BCD.

io dg
(C-5230)
ic w
 (79)BCD + (16)BCD = (95)BCD ...Ans.

5.6.1 BCD Adder using MSI IC 7483 :


bl no

SPPU : Dec. 10, May 11, May 12, Dec. 12, May 18,

University Questions.

Q. 1 Draw and explain 4-bit BCD adder using IC 7483.


(C-78) Fig. 5.6.3 : Illustration of case 3 in BCD addition
Pu K

(Dec. 10, May 11, 4 Marks, May 18, 6 Marks)


Ex. 5.6.1 : Add (83)10 and (34)10 in BCD. Q. 2 Describe the working of BCD adder using 7483
ch

Soln. : with the help of diagram. (May 12, 8 Marks)


Q. 3 Draw and explain 4-bit BCD adder using IC7483.
Also explain with example addition of numbers with
Te

carry. (Dec. 12, 8 Marks)


 BCD adder adds two BCD digits and produces a BCD
digit. But remember that a BCD digit cannot be greater
than 9.
 The two given 4-bit BCD numbers are to be added using
the rules of binary addition.

 If sum is less than or equal to 9 and carry = 0, then no


correction is necessary. The sum obtained is correct and
in the true BCD form.

 But if sum is invalid BCD or carry = 1, then the result is


wrong and needs correction.
(C-6578)
 The wrong result can be corrected by adding six
 (83)10 + (34)10= (117)10 …Ans. i.e. (0110) to it.
Block Diagram of BCD Adder :
Ex. 5.6.2 : Perform addition in BCD format
(79)BCD + (16)BCD.  From the points stated above, we understand that the
4-bit, BCD adder should consist of the following blocks :




1. A 4-bit binary adder to add the given two 4-bit BCD Write K-map :
numbers A and B.
2. A combinational circuit to check if sum is greater than 9
or carry = 1.
3. Another 4-bit binary adder to add six (0110) to the
incorrect sum if sum > 9 or carry = 1.

 The block diagram of such a BCD adder is shown in


Fig. 5.6.4.

ns e
 So we have to design the combinational circuit that
finds out whether the sum is greater than 9 or carry = 1.

io dg
(C-398) Fig. 5.6.5 : K-map for Y output

 The boolean expression is,


Y = S 3S 2 + S 3S 1

 The complete BCD adder is shown in Fig. 5.6.6.


ic w
bl no
Pu K
ch

(C-397) Fig. 5.6.4 : Block diagram of BCD adder


Design of Combinational Circuit :
 The output of combinational circuit should be 1 if the
sum produced by adder 1 is greater than 9 i.e. 1001.
Te

 The truth table is as follows :


(C-6180) Table 5.6.1 : Truth table for combinational
circuit design

(C-399) Fig. 5.6.6 : 4-bit BCD adder
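 The correction rule implemented by Fig. 5.6.6 (add 0110 when the sum exceeds 9 or a carry appears) can be modelled in a couple of lines of Python, given purely as an illustration:

    def bcd_add_digit(a, b, carry_in=0):
        total = a + b + carry_in            # adder-1 : plain binary addition
        if total > 9:                       # invalid BCD sum or carry produced
            total += 6                      # adder-2 : add 0110 for correction
            return total & 0xF, 1
        return total, 0

    print(bcd_add_digit(8, 3))              # -> (1, 1)   i.e. 8 + 3 = 11
    print(bcd_add_digit(7, 9))              # -> (6, 1)   i.e. 7 + 9 = 16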

 The output of the combinational circuit should be 1 if


Cout of adder-1 is high or if the output of adder-1 is
greater than 9.
 Therefore Y is ORed with Cout of adder 1 as shown in
Fig. 5.6.6.
 The output of combinational circuit is connected to B1B2
inputs of adder-2 and B3 = B0 = 0 as they are connected
to ground permanently.
 This makes B3 B2 B1 B0 = 0 1 1 0 if Y = 1.

 The sum outputs of adder-1 are applied to A3A2A1A0 of


adder-2.




 The output of combinational circuit is to be used as final Step 1 : Obtain the 9’s complement of number B (i.e the
output carry and the carry output of adder-2 is to be number to be subtracted).
ignored. Step 2 : Add A and 9’s complement of B.
Operation : Step 3 : If a carry is generated in step 2 then add it to the

Case I : Sum  9 and carry = 0 sum to obtain the final result. The carry is called
as end around carry.
 The output of combinational circuit Y = 0. Hence
Step 4 : If carry is not produced then the result is negative
B3 B2 B1 B0 = 0000 for adder-2.
and in the 9’s complement form. So take 9’s
 Hence output of adder-2 is same as that of adder-1. complement of the result.

ns e
Case II : Sum > 9 and carry = 0 Ex. 5.7.1 : Subtract (3)10 from (7)10 in BCD.

io dg
 If S3 S2 S1 S0 of adder-1 is greater than 9, then output Y Soln. :
of combinational circuit becomes 1. Step 1 : Obtain 9’s complement of (3)10 :
 B3 B2 B1 B0 = 0110 (of adder-2) 9’s complement of (3)10 is 9 – 3 = 6.

 Hence six (0110) will be added to the sum output of Step 2 : Add 7 and nine’s complement of 3 : (C-6258)


at le
adder-1.
We get the corrected BCD result at the sum output of
ic w
adder-2.

Case III : Sum  9 but carry = 1


bl no

 As carry output of adder-1 is high, Y = 1.


5.7.2 4-Bit BCD Subtractor using 9’s
 B3 B2 B1 B0 = 0 1 1 0 (of adder-2) Complement Method :
 0 1 1 0 will be added to the sum output of adder-1.
SPPU : Dec. 09, Dec. 11
Pu K

 We get the corrected BCD result at the sum output of


University Questions.
adder-2.
Q. 1 Draw and explain 4-bit BCD subtractor using IC
 Thus the four bit BCD addition can be carried out using
ch

7483. (Dec. 09, 5 Marks)


the binary adder. Q. 2 What is the use of 7483 chip ? Draw and explain
nine’s complement used in BCD subtractor using
5.7 BCD Subtractor using MSI IC 7483 : 7483. (Dec. 11, 8 Marks)
Te

BCD subtraction can be performed using two methods : Operation :

1. Using 9’s complement 2. Using 10’s complement  The circuit diagram of a 4-bit BCD subtractor is shown
in Fig. 5.7.1. It consists of four binary parallel adders
5.7.1 BCD Subtraction using 9’s (IC 7483).
Complement :  Adder – 1 obtains the 9’s complement of number B.

 The 9’s complement of a BCD number can be obtained  Adders – 2 and 3 form the normal 4-bit BCD adder with
a facility to add (6)10 i.e (0110)2 for correction.
by subtracting it from 9.
 For example 9’s complement of 1 is 8. The 9’s  Adder – 2 adds number A with the 9’s complement of
complement of various digits are given in Table 5.7.1. number B. The combinational circuit associated with
adder – 3 will correct the sum by adding (6)10 or (0110)2
Table 5.7.1 : 9’s complement of various decimal digits if necessary.
Decimal digit 0 1 2 3 4 5 6 7 8 9  The output of this combinational circuit is used further
9’s complement 9 8 7 6 5 4 3 2 1 0 as a carry. At the output of adder – 3 we get the correct
BCD sum of A and 9’s complement of B.
Procedure for BCD subtraction :
 Adder – 4 is used to either add 1 to the output of adder
 The BCD subtraction using 9’s complement is performed – 3 or take the 9’s complement of the output of
as follows : adder – 3 depending on the status of carry as follows :




Adder 4 operation Step 1 : Obtain the 10’s complement of subtrahend


If carry = 1 : Add 1 to the sum output of adder 3. This (number to be subtracted).
happens because Cin = 1 for adder 4. Step 2 : Add the manuend to the 10’s complement of
If carry = 0 : Take 9’s complement of sum output of subtrahend.
adder 3. Step 3 : Discard carry. If carry is 1 then the answer is
5.7.3 BCD Subtraction using 10’s positive and in its true form.
Complement : Step 4 : If carry is not produced then the answer is
 The 10’s complement is obtained by adding 1 to the 9’s negative. So take 10’s complement to get the
answer.

ns e
complement. The 10’s complement can be used to
perform the BCD subtraction as follows :

io dg
at le
ic w
bl no
Pu K
ch
Te

(C-401) Fig. 5.7.1 : 4-bit BCD subtractor using 9’s complement method
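 Numerically, the circuit of Fig. 5.7.1 follows the four steps listed above. The Python sketch below (illustrative, single decimal digit) reproduces the 9's complement / end-around-carry procedure.

    def subtract_9s_complement(a, b, digits=1):
        nines = 10 ** digits - 1
        total = a + (nines - b)              # A + 9's complement of B
        if total > nines:                    # carry produced -> positive result
            return total - nines             # end-around carry adds 1
        return -(nines - total)              # no carry -> negative, re-complement

    print(subtract_9s_complement(7, 3))      # 4
    print(subtract_9s_complement(3, 7))      # -4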




Ex. 5.7.2 : Perform the subtraction (9)10 – (4)10 in BCD 5.7.4 4-bit BCD Subtraction using 10’s
using the 10’s complement. Complement Method :
Soln. : (C-6193)
SPPU : Dec. 09.
University Questions.
Q. 1 Draw and explain 4-bit BCD subtractor using IC
7483. (Dec. 09, 5 Marks)

Circuit diagram :

The circuit diagram of a 4-bit BCD subtractor using the

ns e

10’s complement method is shown in Fig. 5.7.2.

io dg
 This circuit consists of four, 4-bit binary adders
(IC 7483).

at le
ic w
bl no
Pu K
ch
Te

(C-402) Fig. 5.7.2 : BCD subtraction using 10’s complement method
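 The 10's complement procedure differs only in the handling of the carry, which is simply discarded. A short illustrative Python version:

    def subtract_10s_complement(a, b, digits=1):
        modulus = 10 ** digits
        total = a + (modulus - b)            # A + 10's complement of B
        if total >= modulus:                 # carry -> positive, discard carry
            return total - modulus
        return -(modulus - total)            # no carry -> negative, re-complement

    print(subtract_10s_complement(9, 4))     # 5
    print(subtract_10s_complement(4, 9))     # -5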




Operation : Block diagram :


 The operations to be performed by this circuit are as  The block diagram of an n bit digital comparator is
follows :
shown in Fig. 5.8.1.
1. Obtain the 10’s complement of number B (i.e the
number to be subtracted).  The comparator has three outputs namely A > B, A = B
2. Add number A and 10’s complement of B using and A < B.
BCD addition.
 Depending on the result of comparison, one of these
3. If carry is not generated then take the 10’s
outputs will go high.
complement of the result.

ns e
Adder - 1
 Adder – 1 performs the following operations.

io dg
 Adder – 1 produces the 10’s complement of number B.
 Number B is inverted using the EX-OR gates and then
Cin = 1 is added to it to obtain 2’s complement of B.
 A3 A2 A1 A0 = 1010 i.e. (10)10. Adder – 1 adds 1010 and


at le
2’s complement of number B. So actually it performs the
subtraction (10 – B) to obtain the 10’s complement of B.
Thus we obtain 10’s complement of B at the output of (C-403) Fig. 5.8.1 : Block diagram of an n-bit comparator
ic w
Adder – 1. 5.8.1 A 2-Bit Comparator : SPPU : Dec. 12.
Adders- 2 and 3
bl no

University Questions.
 Adders 2 and 3 together form the normal BCD adder
Q. 1 Design 2-bit magnitude comparator using logic
discussed in the earlier sections.
gates. Assume that A and B are 2-bit inputs. The
 Adder – 2 adds number A to 10’s complement of B.
outputs of comparator should be A > B,
 Adder – 3 adds six (0110) to the result of this addition, if
Pu K

the correction is necessary. A = B, A < B. (Dec. 12, 8 Marks)


 Thus at the output of adder – 3 we get the correct BCD Truth table :
equivalent of (A + 10’s complement of B).  For a 2-bit comparator, each input word A and B is 2 bit
ch

 The output of the combinational circuit is treated as long.


carry. It is passed to adder - 4.
 The truth table of a 2-bit comparator is shown in
Adder - 4 Table 5.8.1.
Te

 Depending on the status of carry obtained from the (C-406(a)) Table 5.8.1 : Truth table for a 2-bit comparator
combinational circuit, the adder behaves in the
following manner.
 If carry = 0, then due to inverter used A3 A2 A1 A0
= 1010 i.e. (10)10 and carry input Cin = 1. Also the EX-OR
gates will act as inverters.
 Hence adder 4 will add (10)10 and the 2’s complement
of adder 3 output. So what it actually does is, it takes
the 10’s complement (10 – result) of the result.
 But if carry = 1 then adder - 4 will pass the adder - 3
output unchanged.

5.8 Magnitude Comparators :


Definition :
 Digital(or magnitude) comparator is a combinational
circuit, which compares the two n-bit binary words A
and B applied at its input and produces three different
outputs : A  B, A = B and A  B.




K-maps : For A  B
For A < B :
 Simplified expression :
– – – –
A < B = A1 A0 B0 + A1 B1 + A0 B1 B0

For A > B

 Simplified expression :
– – – –
A > B = A0 B1 B0 + A1 A0 B0 + A1 B1

ns e
Simplification for output A = B :

Refer the K-map for output A = B shown in Fig. 5.8.2(b).

io dg

 The expression for A = B is given by,


– – – – – – – –
(C-406) Fig. 5.8.2(a) : K-map for output A < B (A = B) = A1 A0 B1 B0 + A1 A0 B1B0 + A1 A0 B1 B0 + A1A0 B1 B0

at le
For A = B : – – – – – –
= A0 B0 (A1 B1 + A1 B1) + A0 B0 ( A1 B1 + A1B1)
– – – –
= (A1B1 + A1 B1) (A0 B0 + A0 B0 )
ic w
 (A = B) = (A1  B1) (A0  B0) where  = EX-NOR
bl no

Logic diagram for a two bit comparator :

 The logic diagram for the 2-bit digital comparator is

shown in Fig. 5.8.3.


Pu K

 This diagram is drawn by referring to the simplified


(C-406) Fig. 5.8.2(b) : K-map for output A = B
ch

expressions of the outputs.


For A > B :
Te

(C-407) Fig. 5.8.2(c) : K-map for output A > B

 The K-maps for the three outputs and corresponding

simplified expressions are shown in Figs. 5.8.2(a), (b) and

(c).

 From these K-maps we get the simplified expressions


(C-408) Fig. 5.8.3 : Logic diagram for 2-bit comparator
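 The three simplified comparator expressions can be checked exhaustively, since there are only sixteen input pairs. The Python sketch below (illustrative only) compares them with ordinary integer comparison.

    for A in range(4):
        for B in range(4):
            A1, A0 = (A >> 1) & 1, A & 1
            B1, B0 = (B >> 1) & 1, B & 1
            lt = ((1 - A1) & (1 - A0) & B0) | ((1 - A1) & B1) | ((1 - A0) & B1 & B0)
            gt = (A0 & (1 - B1) & (1 - B0)) | (A1 & A0 & (1 - B0)) | (A1 & (1 - B1))
            eq = (1 - (A1 ^ B1)) & (1 - (A0 ^ B0))
            assert (lt, eq, gt) == (int(A < B), int(A == B), int(A > B))
    print("All 16 input pairs agree with the A<B, A=B and A>B expressions")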
for the three outputs of comparator are as follows :




5.9 Multiplexer (Data Selector) : Equivalent Circuit :


 The equivalent circuit of an n:1 multiplexer is as shown
Definition and block diagram :
in Fig. 5.9.1(b).
 A multiplexer is a combinational circuit with n data
 As shown in Fig. 5.9.1(b) the multiplexer acts like a
inputs one output and m select inputs, which selects
digitally controlled single pole, multiple way switch.
one of the n data inputs and routes (connects) it to the
output.

 The selection of one of the n inputs is done with the


help of the select inputs.

ns e
 Multiplexer is a special type of combinational circuit.

io dg
The block diagram of an n-to-1 multiplexer is shown in
Fig. 5.9.1(a).

(C-413) Fig. 5.9.1(b) : Equivalent circuit of a multiplexer


ic w
 The output gets connected to only one of the n data
inputs at given instant of time via the single pole
bl no

multiple throw rotary switch.

 The position of the rotary arm will be dependent on the


status of the m select inputs. It will connect the output
Pu K

to the selected input.

5.9.1 Necessity of Multiplexers :


(C-413) Fig. 5.9.1(a) : Block diagram of an n : 1 multiplexer
ch

 As shown, there are n-data inputs (D0, D1, .....Dn-1), one  In most of the electronic systems, the digital data is

output Y and m select inputs, S0, S1 .....Sm–1. available from more than one sources. It is necessary to
route this data over a single line.
 The relation between the number of data inputs (n) and
Te

number of select inputs (m) is as follows :  Under such circumstances we require a circuit which
m selects one of the many sources at a time.
n = 2
 This circuit is nothing else but a multiplexer, which has
 A multiplexer is a digital circuit which selects one of the
many inputs, one output and some select inputs.
n data inputs and routes (connects) it to the output.
 Multiplexer improves the reliability of the digital system
 The selection of one of the n inputs is done with the
because it reduces the number of external wired
help of the select inputs.
connections.
 To select n inputs we need m select lines such that
m
2 = n. 5.9.2 Advantages of Multiplexers :
 Depending on the digital code applied at the select 1. It reduces the number of wires, required to be used.
inputs, one out of n data sources is selected and
2. A multiplexer reduces the circuit complexity and cost.
transmitted to the single output Y.
3. We can implement many combinational circuits using
 E is called as a strobe or enable input which is useful for
MUX.
cascading.
4. It simplifies the logic design.
 It is generally an active low terminal, that means it will
perform the required operation when it is low. 5. It does not need the K maps for simplification.




5.10 Types of Multiplexers : Realization of a 2 : 1 MUX using gates :


 The realization using gates is shown in Fig. 5.10.2.
 The types of multiplexers are :

1. 2 : 1 multiplexer. 2. 4 : 1 multiplexer.

3. 8 : 1 multiplexer. 4. 16 : 1 multiplexer.

5. 32 : 1 multiplexer.

5.10.1 2 : 1 Multiplexer :

ns e
Block diagram :

 The block schematic of a 2 : 1 multiplexer is shown in

io dg
Fig. 5.10.1(a). It has two data inputs D0 and D1, one (C-415) Fig. 5.10.2 : Realization of 2 : 1 MUX using gates

select input S, an enable input and one output. 5.10.2 A 4 : 1 Multiplexer : SPPU : May 07.
 The truth table of this MUX is shown in Fig. 5.10.1(b).

University Questions.

Q. 1 Draw basic circuit diagram of 4 : 1 MUX and its


ic w
truth table. (May 07, 6 Marks)
Block diagram and truth table :
bl no

 Fig. 5.10.3 shows the block diagram of a 4 : 1

(a) Block diagram


multiplexer and Table 5.10.2 gives its truth table.
Pu K
ch

(b) Truth table


(C-414) (a) Fig. 5.10.1 : 2 : 1 multiplexer
Te

Truth table :
(C-416) Fig. 5.10.3 : 4 : 1 multiplexer
 Write a more elaborate truth table for 2 : 1 MUX as
(C-416)(a) Table 5.10.2 : Truth table
shown in Table 5.10.1.

(C-414(a)) Table 5.10.1 : Truth table of a 2 : 1 MUX

 Note that n = 4 hence number of select lines i.e. m = 2


 From the truth table it is clear that Y = 1 for only two m
so that 2 = n. The strobe terminal (G) has not been
conditions shown by the shaded boxes. included to reduce the complexity.

 We can write down the Boolean expression by taking  The truth table tells us that if S1 S0 = 00, the data bit D0
is selected and routed to output.
into consideration these two conditions as follows :
–– ––  Y = D0 …. when S1 S0 = 00
 Y = E S D0 + ESD1 = E ( S D0 + SD1)




 Similarly if S1 S0 = 01, then D1 is selected and routed to the output.
 Y = D1 …. when S1 S0 = 01
 Y = D2 for S1 S0 = 10 and
 Y = D3 for S1 S0 = 11
 The output will be high when the selected input (D0, D1 etc.) is 1. Hence the logical expression for output in the SOP form is as follows :
 Y = S̄1 S̄0 D0 + S̄1 S0 D1 + S1 S̄0 D2 + S1 S0 D3
Realization of a 4 : 1 MUX using gates :
 This expression can be realized using basic gates as shown in Fig. 5.10.4.
(C-417) Fig. 5.10.4 : Realization of 4 : 1 multiplexer using basic gates
5.10.3 8 : 1 Multiplexer :
Block diagram and truth table :
 The block diagram of an 8 : 1 MUX is shown in Fig. 5.10.5(a) and its truth table is shown in Fig. 5.10.5(b).
 It has eight data inputs, one enable input, three select inputs and one output.
(a) Block diagram (b) Truth table
(C-419(a)) Fig. 5.10.5 : 8 : 1 multiplexer
Operating principle :
 When the strobe or enable input is 0, the output of the multiplexer will be 0 irrespective of any input.
 With E = 1, we can select any one of the eight data inputs and connect it to the output.
 For example if S2 S1 S0 = 0 1 1 then the data input D3 is selected and output Y will follow the selected input D3.
5.10.4 Applications of a Multiplexer : SPPU : Dec. 06.
University Questions.
Q. 1 Explain any one application of multiplexer. (Dec. 06, 8 Marks)
 Some of the important applications of a multiplexer are as follows :
1. It is used as a data selector to select one out of many data inputs.
2. It is used for simplification of logic design.
2. It is used for simplification of logic design.


3. In the data acquisition system.
4. In designing the combinational circuits.
5. In the D/A converters.
6. To minimize the number of connections in a logic circuit.
5.11 Study of Different Multiplexer ICs :
 The multiplexer ICs available in the market are given in Table 5.11.1.
Table 5.11.1
IC Number   Description       Output
74157       Quad 2 : 1 Mux    Same as input (No inversion)
74158       Quad 2 : 1 Mux    Inverted output
74153       Dual 4 : 1 Mux    Same as input (No inversion)
74352       Dual 4 : 1 Mux    Inverted output
74151A      8 : 1 Mux         Complementary outputs
74152       8 : 1 Mux         Inverted output
74150       16 : 1 Mux        Inverted output
5.11.1 54LS 153/DM 54LS 153/DM 74LS 153 (Dual 4 : 1 Multiplexer) :
 The block diagram of these multiplexers is shown in Fig. 5.11.1(a).
(C-3578) Fig. 5.11.1(a) : Pin configuration of Dual 4 : 1 multiplexer IC 54LS 153/DM 54LS 153/DM 74LS 153
 These chips consist of two 4 : 1 multiplexers. Each multiplexer has a separate strobe (enable) input.
 There are two select inputs A and B which are common to both the 4 : 1 multiplexers inside the IC.
 The strobe lines 1G and 2G are the active low enable lines for the two 4 : 1 multiplexers inside the IC. The truth table for each 4 : 1 MUX is shown in Table 5.11.2.
(C-6185) Table 5.11.2 : Truth table of 4:1 MUX
 The functional diagram of IC 74153 is as shown in Fig. 5.11.1(b).
(C-3579) Fig. 5.11.1(b) : Functional diagram of IC-74153
 The function table is shown in Table 5.11.3.
(C-8133) Table 5.11.3 : Function table of IC-74153
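As a companion to the function table, the behaviour of one 4 : 1 multiplexer section with an active low strobe can be sketched in a few lines of Python. This is only an illustrative model written for this text (it is not taken from any data sheet); it evaluates the SOP equation Y = S̄1 S̄0 D0 + S̄1 S0 D1 + S1 S̄0 D2 + S1 S0 D3 gated by the strobe input :

```python
# Gate-level style model of one 4 : 1 MUX section with an active low
# strobe (enable) input, similar in spirit to one half of a 74153.

def mux4(d0, d1, d2, d3, s1, s0, g_bar):
    """Return Y for a 4 : 1 MUX section. g_bar = 0 enables the section."""
    if g_bar == 1:                      # strobe inactive -> output forced to 0
        return 0
    ns1, ns0 = 1 - s1, 1 - s0           # complemented select lines
    return (ns1 & ns0 & d0) | (ns1 & s0 & d1) | (s1 & ns0 & d2) | (s1 & s0 & d3)

if __name__ == "__main__":
    data = (1, 0, 1, 1)                 # D0..D3
    for s1 in (0, 1):
        for s0 in (0, 1):
            print("S1S0 =", s1, s0, " Y =", mux4(*data, s1, s0, g_bar=0))
```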


5.12 Multiplexer Tree/Cascading of Multiplexer :
 Multiplexers having a larger number of inputs can be obtained by cascading two or more multiplexers having a smaller number of inputs.
 This is called as a multiplexer tree. This concept will be clear after solving the following examples.
Ex. 5.12.1 : Implement a 16 : 1 multiplexer using 4 : 1 multiplexers.
             Dec. 05, 6 Marks, Dec. 07, 7 Marks, Dec. 08, May 11, 8 Marks
Soln. :
 Refer Fig. P. 5.12.1 for implementation.
Description :
 The select inputs S1 and S0 of the multiplexers 1, 2, 3 and 4 are connected together.
 The select inputs S3 and S2 are applied to the select inputs S1 and S0 of MUX-5.
 The outputs Y1, Y2, Y3, Y4 are applied to the data inputs D0, D1, D2 and D3 of MUX-5 as shown in Fig. P. 5.12.1.
 The enable inputs E of the first level 4 : 1 multiplexers are connected to logic 1 permanently in order to enable them.
(C-425) Fig. P. 5.12.1 : 16 : 1 multiplexer using 4 : 1 multiplexers
 The operation can be summarized using Table P. 5.12.1.
(C-6187) Table P. 5.12.1 : Summary of operation
Ex. 5.12.2 : Design 12 : 1 mux using 4 : 1 multiplexers (with enable inputs). Explain the truth table of your circuit in short.
             Dec. 09, 8 Marks, Dec. 17, 6 Marks.
Soln. :
 The 12:1 MUX using 4 : 1 MUXes is as shown in Fig. P. 5.12.2 and its truth table is as shown in Table P. 5.12.2.
(C-8134) Table P. 5.12.2 : Truth table
 The select lines S0 and S1 respectively of MUX 1, 2 and 3 are connected together as shown in Fig. P. 5.12.2.


(C-1708) Fig. P. 5.12.2 : 12:1 multiplexer using 4:1 multiplexers
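The cascading idea of Ex. 5.12.1 and Ex. 5.12.2 can be checked quickly in software. The sketch below is only an illustration (the function names are our own): it builds a 16 : 1 selector from five 4 : 1 multiplexer models, with four first level multiplexers driven by S1 S0 and a fifth multiplexer driven by S3 S2 picking one of their outputs, exactly as in the multiplexer tree of Ex. 5.12.1 :

```python
# Multiplexer tree : a 16 : 1 MUX built from five 4 : 1 MUXes
# (four first level MUXes + one second level MUX).

def mux4(d, s1, s0):
    """Basic 4 : 1 MUX model; d is a tuple of four bits."""
    return d[(s1 << 1) | s0]

def mux16(data, s3, s2, s1, s0):
    assert len(data) == 16
    # First level : each MUX handles a group of four consecutive inputs.
    y = [mux4(tuple(data[4 * k: 4 * k + 4]), s1, s0) for k in range(4)]
    # Second level : S3 S2 choose which group output reaches Y.
    return mux4(tuple(y), s3, s2)

if __name__ == "__main__":
    data = [(i * 7) % 2 for i in range(16)]      # arbitrary test pattern
    for m in range(16):
        s3, s2, s1, s0 = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        assert mux16(data, s3, s2, s1, s0) == data[m]
    print("16 : 1 tree follows the selected input for all 16 select codes")
```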

Ex. 5.12.3 : Design 14 : 1 mux using 4 : 1 mux (with



enable inputs). Explain the truth table of your


circuit in short. May 10, 8 Marks.

Soln. :
 The truth table of 14 : 1 MUX is as shown below :

(C-8135) Table P. 5.12.3 : Truth table of a 14 : 1 MUX



(C-1711) Fig. P. 5.12.3(a)

 The realization of 14 : 1 MUX using 4 : 1 MUX is shown

in Fig. P. 5.12.3(b).


(C-1712) Fig. P. 5.12.3(b) : 14 : 1 MUX using 4 : 1 multiplexers
5.13 Use of Multiplexers in Combinational Logic Design :
 Multiplexer ICs for 2 : 1, 4 : 1, 8 : 1 and 16 : 1 multiplexers are available.
 We can use them to implement the given Boolean expressions representing combinational circuits.
 Thus it is possible to design and implement many combinational circuits by using a multiplexer and a few logic gates.
Advantages :
 Advantages of using multiplexers for logic design are as follows :
1. Logic design is simplified.
2. It is not necessary to simplify the logic expression.
3. It minimizes the number of ICs required to be used.
5.13.1 Implementation of a Logical Expression in the Standard SOP Form :
Use of MUX for combinational circuit design :
1. A truth table or logic expression in standard SOP or POS form is given to us.
2. We have to follow the design procedure given below to use a MUX for implementing the given logical expression.
Design procedure :
Step 1 : Identify the decimal number corresponding to each minterm in the given expression as illustrated below :
(C-6264)
Step 2 : The input lines of the multiplexer corresponding to these numbers (0, 5 and 7) are connected to logic 1 level.
Step 3 : All the other input lines of the multiplexer are connected to logic 0 level.
Step 4 : The inputs (A, B, C) are to be connected to the select inputs.
 The following example will make this concept clear.
Ex. 5.13.1 : Realize the logic function of the truth table given in Table P. 5.13.1 using a multiplexer.
(C-7617) Table P. 5.13.1
Soln. :
Step 1 : Express the output Y in the standard SOP form
          Y = Σ m (1, 3, 4, 6)
Step 2 : Connect the data inputs 1, 3, 4 and 6 to logic 1 and the remaining to logic 0.
Step 3 : Connect the inputs A, B, C to the select lines S2, S1 and S0 respectively.
Step 4 : Draw the logic diagram as shown in Fig. P. 5.13.1.
(C-429) Fig. P. 5.13.1 : Logic diagram for Ex. 5.13.1
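The four-step procedure of Ex. 5.13.1 is easy to cross-check by brute force. The short Python sketch below is an illustration only (the helper names are our own); it hard-wires the data inputs of an 8 : 1 multiplexer to 0 or 1 exactly as in Steps 2 and 3 and then confirms that the multiplexer output reproduces Y = Σ m (1, 3, 4, 6) for every combination of A, B and C :

```python
# Realizing f(A, B, C) = sum of minterms (1, 3, 4, 6) with an 8 : 1 MUX :
# the data inputs carry constants, the variables drive the select lines.

from itertools import product

MINTERMS = {1, 3, 4, 6}
D = [1 if i in MINTERMS else 0 for i in range(8)]   # Steps 2 and 3

def mux8(data, s2, s1, s0):
    return data[(s2 << 2) | (s1 << 1) | s0]

for A, B, C in product((0, 1), repeat=3):
    y_mux = mux8(D, A, B, C)                        # A, B, C on S2, S1, S0
    y_truth = 1 if ((A << 2) | (B << 1) | C) in MINTERMS else 0
    assert y_mux == y_truth
print("8 : 1 MUX with hard-wired data inputs matches the truth table")
```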


Ex. 5.13.2 : Implement the following expression using a multiplexer.
             f (A, B, C) = Σ m (0, 2, 4, 6)
Soln. :
 Since there are three variables, a multiplexer having three select inputs should be used. Hence an 8 : 1 multiplexer as shown in Fig. P. 5.13.2 should be used.
Step 1 : Identify the decimal number corresponding to each minterm. The decimal numbers corresponding to the minterms are 0, 2, 4 and 6.
Step 2 : Connect the data input lines 0, 2, 4 and 6 to logic 1 and the remaining lines (1, 3, 5, 7) to logic 0 as shown in Fig. P. 5.13.2.
(C-430) Fig. P. 5.13.2 : Implementation of a logic expression using a multiplexer
Step 3 : Connect the variables A, B and C to the select inputs.
Note : We can use IC 74151, which is an 8 : 1 multiplexer, to implement Boolean equations using an 8 : 1 MUX.
5.13.2 Use of 4 : 1 MUX to Realize a 3 Variable Function :
 It is very easy to use the 4 : 1 MUX to implement a 3 variable function.
 The procedure to be followed for this has been illustrated through the following examples.
Ex. 5.13.3 : Implement the logic function f (A, B, C) = Σ m (1, 3, 4, 6) using a 4 : 1 multiplexer.
Soln. :
 The logic function to be implemented is
 f (A, B, C) = Σ m (1, 3, 4, 6)
Step 1 : Apply two variables B and C to the select inputs :
 As shown in Fig. P. 5.13.3(a), the two input variables B and C are applied to the select lines S1 and S0 respectively.
(C-431) Fig. P. 5.13.3(a) : Logic diagram
Step 2 : Write the design table :
 The design table is shown in Fig. P. 5.13.3(b).
(C-432) Fig. P. 5.13.3(b) : Design table
 The data inputs D0 to D3 have been written at the top of the table and the two possible values A and Ā of the unused variable A have been written.
 In the eight boxes we enter the decimal numbers corresponding to the minterms (0 to 7) serially.
 Encircle those minterms corresponding to which the output is 1 (minterms 1, 3, 4, 6).
Step 3 : Check each column in the design table :
 The columns of the design table are inspected using the following rules :
Rule 1 : If both the minterms in a column are not circled, then apply logic 0 to the corresponding data input. Note that there is no such column in our implementation table.
Rule 2 : If only the minterm in the second row is encircled (see columns 1 and 3), then “A” should be applied to that data input. Hence we should apply A to the D0 and D2 inputs.
Powered by TCPDF (www.tcpdf.org)


LD&CO (Sem. III / IT / SPPU) 5-37 Combinational Logic Design

Rule 3 : If only the minterm in the first row is encircled, (see Step 3 : Inspect each column in the design table :
columns 2 and 4), then A
– should be connected to that
Rule 1 : If both the minterms in a column are not circled,
data input. Hence we should apply A
– to the D1 and D3
then apply logic 0 to the corresponding data
inputs.
input. Note that there is no such column in our
Rule 4 : If both the minterms in a column are encircled, then
implementation table.
apply a logic 1 to the corresponding data input.
Rule 2 : If only the minterm in the second row is encircled
 Note that there is no such column in our
then “A” should be applied to that data input.
implementation table.
Hence we should apply A to the D6 input.

ns e
Step 4 : Draw the logic diagram :
Rule 3 : If only the minterm in the first row is encircled,
 The logic diagram is as shown in Fig. P. 5.13.3(a). –
then A should be connected to that data input

io dg
Note : We can use IC 74153 which is a dual 4 : 1

.Hence we should apply A to the D4, D5 and D7 inputs.
multiplexer in order to implement the Boolean
expressions using 4 : 1 MUX. Rule 4 : If both the minterms in a column are encircled,
then apply a logic 1 to the corresponding data 1

at le
5.13.3 Use of 8 : 1 MUX to Realize a 4 Variable
Function : input. Note that there is no such column in our
implementation table.
ic w
 It is very easy to use the 8 : 1 MUX to implement a 4
Step 4 : Draw the logic diagram :
variable function.
 The logic diagram is as shown in Fig. P. 5.13.4(b).
bl no

 The procedure to be followed for this has been


illustrated through the following examples.

Ex. 5.13.4 : Implement the following Boolean function


Pu K

using 8 : 1 multiplexer.
f(A, B, C, D) =  m (2, 4, 5, 7, 10, 14)
ch

Soln. :

Step 1 : Apply the variables B, C and D to the select


inputs :
As shown in Fig. P. 5.13.4(b), the three variables B, C and
Te

D are connected to the select lines S2 , S1 and S0 respectively.


(C-435) Fig. P. 5.13.4(b) : Logic diagram
Step 2 : Write the design table :

 The design table is as shown in Fig. P. 5.13.4(a). Ex. 5.13.5 : Implement the following function using 4 : 1

multiplexers with active low strobe input :

f (A, B, C, D) = m (2, 3, 5, 7, 8, 9, 12, 13, 14,


15).

Soln. :

Step 1 : Write the design table :


(C-434) Fig. P. 5.13.4(a) : Design table

 In the sixteen boxes we have entered the decimal


numbers corresponding to the minterms 0 to 15 serially.

 Encircle those minterms which correspond to an output


= 1 (minterms 2, 4, 5, 7, 10, 14).
(C-1329(a))


Step 2 : Implementation of logic circuit :
(C-1330) Fig. P. 5.13.5
5.13.4 Implementation of a Logical Expression in the Non-standard SOP Form :
 Till now we have seen the implementation of expressions in the standard SOP form.
 But if the given expression is not in the standard form, then we have to first bring it to the standard form and then proceed.
 Example 5.13.6 illustrates this concept.
Ex. 5.13.6 : Implement the following Boolean expression using 8 : 1 multiplexer.
             f (A, B, C, D) = Ā B̄ D̄ + A B C + B̄ C D + Ā C D
Soln. :
Step 1 : Convert the given expression to standard SOP form :
 f (A, B, C, D) = Ā B̄ D̄ (C + C̄) + A B C (D + D̄) + B̄ C D (A + Ā) + Ā C D (B + B̄)
              = Ā B̄ C D̄ + Ā B̄ C̄ D̄ + A B C D + A B C D̄ + A B̄ C D + Ā B̄ C D + Ā B C D + Ā B̄ C D
This is the expression in the standard SOP form.
 ∴ f (A, B, C, D) = Σ m (2, 0, 15, 14, 11, 3, 7, 3)
As X + X = X,
 Ā B̄ C D + Ā B̄ C D = Ā B̄ C D
 ∴ f (A, B, C, D) = Σ m (0, 2, 3, 7, 11, 14, 15)
Step 2 : Write the design table :
(C-437(a)) Table P. 5.13.6 : Design table
Step 3 : Implementation using 8 : 1 multiplexer :
 The implementation is as shown in Fig. P. 5.13.6.
(C-438) Fig. P. 5.13.6 : Implementation
5.13.5 Implementing a Standard POS Expression using Multiplexer :
 Let us consider the Boolean expression in the standard POS form and its implementation using a multiplexer.
Ex. 5.13.7 : Implement the following logic function using the 8 : 1 multiplexer.
             f (A, B, C) = Π M (0, 1, 3, 5, 7)
Soln. :
 In the given expression, the maxterms have been specified.
 Hence we should connect the corresponding data inputs (D0, D1, D3, D5 and D7) to logic 0.
 And connect the remaining data inputs to logic 1.
 Connect the input variables A, B, C to the select inputs S2, S1 and S0 respectively.
 The logic diagram is as shown in Fig. P. 5.13.7.
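The design table method of Section 5.13.3 (folding a 4 variable function onto an 8 : 1 MUX) lends itself to a small program. The sketch below is only an illustration written for this text; for each data input it reports 0, 1, A or Ā depending on which of the two minterms in that column belong to the function, and it reproduces the connections found in Ex. 5.13.4 :

```python
# Compute the 8 : 1 MUX data-input residues for f(A, B, C, D) when
# B, C, D drive the select lines and A is the folded variable.

def residues(minterms):
    plan = []
    for col in range(8):                  # column 'col' pairs minterm col with col + 8
        low = col in minterms             # row A = 0
        high = (col + 8) in minterms      # row A = 1
        plan.append({(False, False): "0",
                     (True,  True):  "1",
                     (False, True):  "A",
                     (True,  False): "A'"}[(low, high)])
    return plan

if __name__ == "__main__":
    plan = residues({2, 4, 5, 7, 10, 14})          # function of Ex. 5.13.4
    for i, r in enumerate(plan):
        print("D%d <- %s" % (i, r))
    # Expected : D0 = D1 = D3 = 0, D2 = 1, D4 = D5 = D7 = A', D6 = A
```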


Ex. 5.13.9 : Implement a full adder circuit using two 4 : 1


multiplexers.
Dec. 04, 3 Marks, Dec. 14, Dec. 19, 6 Marks.
Soln. :
Step 1 : Write the truth table of full adder :

(C-7621) Table P. 5.13.9 : Truth table of a full adder

ns e
(C-439) Fig. P. 5.13.7 : Implementation of

io dg
standard POS expression

Ex. 5.13.8 : Implement the following logic function using


4 : 1 multiplexer.
f (A, B, C) =  M (0, 1, 3, 5, 7)
Soln. :

at le
Connect B and C to the select inputs S1 and S0
ic w
respectively. Step 2 : Write the design tables for sum and carry
outputs :
 For the connections of data inputs (D0 to D3) and A
bl no

write down the design table as shown in


Fig. P. 5.13.8(a).
Pu K

Fig. P. 5.13.9(a) : Design tables for


ch

(C-443)

sum and carry outputs


Step 3 : Draw the logic diagram :

 The full adder using 4 : 1 multiplexer is shown in


Te

(C-440) Fig. P. 5.13.8(a) : Design table


Fig. P. 5.13.9(b).
 Note that as the given expression is in the POS form, we
should encircle those maxterms which are not included
in the given Boolean expression.

 The rules for deciding the input to the data lines


(D0 to D3) are however the same as those stated earlier
for the SOP form.

 The logic diagram is as shown in Fig. P. 5.13.8(b).

(C-444) Fig. P. 5.13.9(b) : Implementation of full adder using


two 4 : 1 multiplexers
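Ex. 5.13.9 can be cross-checked in the same spirit. The Python sketch below is only an illustrative model, not the book's circuit drawing: A and B drive the select lines of both 4 : 1 multiplexers, and the data inputs carry the residues in terms of Cin read from the design tables, so that one MUX produces the sum and the other the carry.

```python
# Full adder built from two 4 : 1 multiplexers (cf. Ex. 5.13.9).
# A and B drive the select lines; the data inputs are functions of Cin.

def mux4(d, s1, s0):
    return d[(s1 << 1) | s0]

def full_adder(a, b, cin):
    ncin = 1 - cin
    #             AB=00  AB=01  AB=10  AB=11
    s = mux4((cin,  ncin,  ncin,  cin), a, b)    # sum output MUX
    c = mux4((0,    cin,   cin,   1), a, b)      # carry output MUX
    return s, c

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, c = full_adder(a, b, cin)
                assert (a + b + cin) == 2 * c + s
    print("MUX based full adder matches a + b + cin for all inputs")
```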
5.13.6 Implementation of Boolean SOP
Expression with Don’t Care
Conditions :
(C-441) Fig. P. 5.13.8(b) : Implementation using 4 : 1  Sometimes the Boolean expressions are given with
multiplexer don’t care conditions.


 We know that the don’t care conditions can be treated Step 2 : Implementation using 4:1 MUX :
as logic 1’s or 0’s.

 Ex. 5.13.10 illustrates the use of multiplexer to


implement such equations.

Ex. 5.13.10 : Implement the following Boolean expression


using an 8 : 1 multiplexer.
f (A, B, C, D) =  m (1, 3, 5, 10, 11, 13, 14)
+ d(0, 2).

ns e
Soln. : (C-7936) Fig. P. 5.13.11(a) : Implementation using 4:1 MUX

io dg
 The don’t care conditions are assumed to be logic 1’s.
Ex. 5.13.12 : Implement the following expression using
The design table is shown in Fig. P. 5.13.10(a) and the
8 : 1 multiplexer : f(A, B, C, D) = m (2, 4, 6,
corresponding logic diagram is shown in 7, 9, 10, 11, 12, 15) (Dec. 12, 8 Marks)

at le
Fig. P. 5.13.10(b). Soln. :
f(A, B, C, D) = m (2, 4, 6, 7, 9, 10, 11, 12, 15)
ic w
Step 1 : Write the design table :

(C-3599)Table P. 5.13.12
bl no

(a) Design table


Pu K
ch

Step 2 : Implementation using 8 : 1 MUX :

 The implementation is as shown in Fig. P. 5.13.12.


Te

(b) Implementation
(C-445) Fig. P. 5.13.10

Ex. 5.13.11 : Implement the function f(A, B, C) = m


(1, 3, 7) using the same.
May 12, 8 Marks, May 19, 6 Marks
Soln. :

Step 1 : Write the design table :


 The design table is as shown in Fig. P. 5.13.11.
(C-3600) Fig. P. 5.13.12 : Implementation

Ex. 5.13.13 : Implement the following Boolean function


using single 8 : 1 multiplexer :
f (A, B, C, D) = m (1, 4, 6, 9, 13)

(C-7935) Fig. P. 5.13.11 May 17, 6 Marks


Soln. : Step 2 : Implementation :

Step 1 : Write the design table :

(C-5905) Table P. 5.13.13 : Design table

ns e
io dg
Step 2 : Implementation using 8 : 1 multiplexer :
(C-7439) Fig. P. 5.13.14

5.14 Demultiplexers :
at le 5.14.1 Demultiplexer Principle :
ic w
Definition :
 A de-multiplexer is a combinational circuit with one
data input, n outputs and m select inputs, which selects
bl no

one of the n data outputs and routes (connects) it to the


input.

 The selection of one of the n outputs is done with the


help of the select inputs.
Pu K

Block diagram :
 The block diagram of a 1 to n demultiplexer is shown in
ch

(C-5906) Fig. P. 5.13.13 : Implementation using 8 : 1 MUX


Fig. 5.14.1(a) and its equivalent circuit is as shown in
Ex. 5.13.14 : Implement given function using 8 : 1 MUX Fig. 5.14.1(b).
and logic gates :
Te

F(A, B, C, D) = m (0, 1, 3, 4, 8, 9, 15).

Dec. 18, 6 Marks

Soln. :

Given : F (A, B, C, D) =  m (0, 1, 3, 4, 8, 9, 15)

To do : Implementation using 8 : 1 MUX. (a) 1 : n demultiplexer (b) Equivalent circuit


(C-447) Fig. 5.14.1
Step 1 : Write the design table :
 It has one data input, n outputs, m select inputs and
(C-7438) Table P. 5.13.14
one enable input as shown.

 A demultiplexer performs the reverse operation of a


multiplexer i.e. it receives one input and distributes it
over several outputs.

 At a time only one output line is selected by the select


lines and the input is connected to the selected output
line.


 The enable input will enable the demultiplexer. If the  Din is connected to Y0 if S0 = 0 and E = 1. Similarly Din is
enable (E) input is not active, then the demultiplexer connected to Y1 if S0 = 1 and E = 1.
does not work.
 If E = 0, then both the outputs will be 0 irrespective of
 The relation between the n output lines and m select the inputs, because the DEMUX is disabled.
lines is as follows :
m
n = 2
Equivalent Circuit :

 A demultiplexer is equivalent to a single pole multiple

ns e
way switch as shown in Fig. 5.14.1(b).
(C-448) Fig. 5.15.1 : A 1:2 demultiplexer
 As shown in Fig. 5.14.1(b) a demultiplexer is equivalent

io dg
to a digitally controlled single pole, multiple way switch. Truth Table :

 The data input gets connected to only one of the n  The truth table of the 1 : 2 demultiplexer is as follows :
outputs at given instant of time via the single pole (C-7620) Table 5.15.1 : Truth table of demux 1 : 2


at le
multiple throw rotary switch as shown.

The position of the rotary arm will be dependent on the


ic w
status of the m select inputs. It will connect the input to
the selected output.
Need of Demultiplexers :
bl no

 In most of the electronic systems, the digital data from


more than one sources is added together and this 5.15.2 1 : 4 Demultiplexer :
common signal is transmitted over a common Block diagram and truth table :
Pu K

communication channel.
 The 1 : 4 demultiplexer is shown in Fig. 5.15.2 and its
 It is necessary to separate out this data at the receiving
truth table is given in Table 5.15.2.
ch

end.
 Under such circumstances we require a circuit which
separates the multiple signals from the common one.

 This circuit is nothing else but a demultiplexer, which


Te

has one input, many outputs, and some select inputs.

 Demultiplexer improves the reliability of the digital (C-451) Fig. 5.15.2 : Block diagram of 1 : 4 demultiplexer
system because it reduces the number of external wired
(C-6189) Table 5.15.2 : Truth table for 1 : 4 demultiplexer
connections.

5.15 Types of Demultiplexers :

 Similar to the multiplexers, the demultiplexers are


classified as follows :
1. 1 : 2 demultiplexer. 2. 1 : 4 demultiplexer.
3. 1 : 8 demultiplexer. 4. 1 : 16 demultiplexer.

5.15.1 1 : 2 Demultiplexer :
Block diagram :

 The block diagram of 1 : 2 demultiplexer is shown in  Din is connected to Y0 when S1S0 = 00, it is connected to
Fig. 5.15.1. It has one data input Din, one select input S0, Y1 when S1S0 = 01 and so on. The other outputs will
one enable (E) input and two outputs Y0 and Y1. remain zero.
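The routing behaviour described above is easy to model. The following Python sketch is only an illustration (the function name and argument order are our own); it implements a 1 : 4 demultiplexer with an active high enable: the single data input appears on the output selected by S1 S0 and every other output stays at 0.

```python
# Model of a 1 : 4 demultiplexer with enable input E.
# Returns the output list (Y0, Y1, Y2, Y3).

def demux4(din, s1, s0, e=1):
    y = [0, 0, 0, 0]
    if e:                              # E = 0 forces every output to 0
        y[(s1 << 1) | s0] = din        # route Din to the selected output
    return y

if __name__ == "__main__":
    for s1 in (0, 1):
        for s0 in (0, 1):
            print("S1S0 =", s1, s0, " outputs =", demux4(1, s1, s0))
    print("disabled :", demux4(1, 1, 1, e=0))
```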


 The enable input needs to be high in order to enable  A0 , A1 , A2 are the three address lines or the three input
the demux. If E = 0 then all the outputs will be low
lines. We have to apply the 3-bit binary data to these
irrespective of everything.
inputs.
5.15.3 1 : 8 Demultiplexer :
 But when this IC is to be used as a 1 : 8 DEMUX we have
Block diagram and truth table :
to use these lines as the three select lines.
 The block diagram of 1 : 8 demux is shown in
– –
Fig. 5.15.3(a).  O0 to O7 are the 8-output active low lines.

ns e
– –
 There are three enable inputs out of which E1 and E2 are

io dg
the active low enable inputs whereas E3 is an active high

enable input.
– –
 We have to make E1 = E2 = 0 and E3 = 1 in order to

at le enable the IC.


ic w
(C-454) Fig. 5.15.3(a) : 1 : 8 demux
bl no

 It has one data input, eight outputs, three select inputs


and an enable input E.

 The truth table is shown in Fig. 5.15.3(b).


Pu K
ch

(C-489) Fig. 5.15.4(a) : Pin configuration of IC 74138

Pin names Description


Te

A0 – A2 Address inputs (Select lines)


– –
E1 – E2 Enable inputs (Active Low)

E3 Enable Input (Active HIGH)


– –
O0 – O7 Outputs (Active Low)

(C-8103) Fig. 5.15.3(b) : Truth table for 1 : 8 demux Fig. 5.15.4(b) : Pin names and description

 Depending on the combination of the select inputs S2 S1 Features of IC 74138 :


S0, the data input Din is connected to one of the eight
 Schottky process for achieving a high speed.
outputs.
 For example if S2 S1 S0 = 1 1 0 then Din is connected to  It can work as a decoder or as a demultiplexer.
output Y6.  Multiple enable inputs ensure easy expansion.

5.15.4 IC 74138 as 1 : 8 DE-MUX :  Active low, mutually exclusive outputs.

 The pin configuration of the 3 : 8 decoder IC 74138 is  The truth table for IC 74138 is shown in Table 5.15.3.
shown in Fig. 5.15.4.


(C-6190) Table 5.15.3 : Truth table of 74138

ns e
(C-460) Fig. P. 5.16.1 : 1 : 8 demultiplexer using

io dg
two 1 : 4 demultiplexers
A0 = LSB, A2 = MSB, H = HIGH Voltage Level,
 The select lines S1 and S0 of the two 1 : 4 demultiplexers
L = LOW Voltage Level  = Don’t care condition. are connected in parallel with each other and S2 is used

at le
The connection diagram of IC 74138 as 1:8 DEMUX is as
shown in Fig. 5.15.5. 
for selecting one of the two 1 : 4 demultiplexers.

S2 is connected directly to enable (E) input of Demux – 2


ic w
whereas inverted S2 is connected to the enable input of
demux – 1.
bl no

 The truth table of this circuit is shown in Table P. 5.16.1.

(C-6192) Table P. 5.16.1 : Truth table of 1 : 8 demultiplexer


using two 1 : 4 demultiplexers
Pu K
ch

(C-3580) Fig. 5.15.5 : IC74138 as 1:8 DEMUX


Te

5.16 Demultiplexer Tree :


5.16.1 Use of DEMUX in Combinational Logic
 Similar to multiplexer we can construct the Design :
demultiplexer having more number of lines using
 Like multiplexers, we can use the demultiplexers for
demultiplexers having less number lines.
designing the combinational circuits.
 This is called as demultiplexer tree. It is also called as
cascading of demultiplexers.  Demultiplexers ICs for 1:2 , 1:4, 1:8 and 1:16

 This concept will be clear by solving the following demultiplexers are available.

examples.  We can use them to implement the given Boolean

Ex. 5.16.1 : Obtain a 1 : 8 line demultiplexer using two 1 : 4 expressions representing a combinational circuits.
line demultiplexers.
 Thus it is possible to design and implement many
Soln. :
combinational circuits by using demultiplexer and a few
 The 1 : 8 demultiplexer using two 1 : 4 demultiplexers is
logic gates.
shown in Fig. P. 5.16.1. This is similar to cascading of
multiplexers.


Advantages : Step 2 : Write expressions for sum and carry :

 Advantages of using demultiplexers for logic design are Sum =  m (1, 2, 4, 7)


as follows :  Outputs Y1 , Y2 , Y4 and Y7 should be Ored to get sum
1. Logic design is simplified. output.
2. It is not necessary to simplify the logic expression.
Carry =  m (3, 5, 6, 7)
3. It minimizes the number of ICs required to be used.
 Outputs Y3 , Y5 , Y6 and Y7 should be Ored to get the
Use of DEMUX for combinational circuit design :
carry output.
1. A truth table or logic expression is given to us.

ns e
Step 3 : Implementation using a 1 : 8 demultiplexer :
2. We have to follow the design procedure given below to
 Fig. P. 5.16.2 shows the implementation.

io dg
use DEMUX for implementing the given logical
expression.

Design procedure :

at le
Step 1 : Identify the decimal number corresponding to
each minterm
illustrated below.
in the given expression as
ic w
If Y = ABC + ABC + ABC
(C-6264)
0 5 7
bl no

Step 2 : The input data line of the demultiplexer is

connected to a logic 1. The outputs corresponding


(C-465) Fig. P. 5.16.2 : Full adder using a demultiplexer
to these numbers (0, 5 and 7) are connected as
Pu K

Note : We can use IC 74138 which is an 1 : 8


inputs to a suitable multi input OR gate.
demultiplexer to implement Boolean equations
ch

Step 3 : The inputs (A, B, C) of the combinational circuit using 1:8 demultiplexer..
being designed are connected to the select
Ex. 5.16.3 : Implement the full subtractor using a 1 : 8
inputs. demultiplexer.
Te

 The following example will demonstrate this concept. (Dec. 09, Dec. 10, 4 Marks, May 10, 3 Marks)
Soln. :
Ex. 5.16.2 : Implement a full adder using demultiplexer.
Step 1 : Write the truth table of the full subtractor :
Dec. 09, Dec. 10, 4 Marks, May 10, 3 Marks,.
 The truth table of a full subtractor is as follows :
Soln. :
Step 1 : Write the truth table of the full adder : (C-7461) Table P. 5.16.3 : Truth table of a full subtractor

(C-7621) Table P. 5.16.2 : Truth table of a full adder


Step 2 : Write expressions for sum and carry :

 From the truth table we can express the difference and

borrow out outputs in the standard SOP forms as,

D = f (A, B, Bin) =  m (1, 2, 4, 7)

and Bout = f (A, B, Bin) =  m (1, 2, 3, 7)

Step 3 : Implementation using a 1 : 8 demultiplexer :

ns e
 Connect Din to logic 1 permanently and connect A, B
and Bin to the select inputs S2 , S1 , S0 respectively as

io dg
shown in Fig. P. 5.16.3.
(C-1323) Fig. P. 5.16.4

Ex. 5.16.5 : Implement two bit comparator using 1 : 16

at le demultiplexer (active low output). Draw the

truth table of two bit comparator and explain


ic w
the design in steps. Dec. 11, 8 Marks.
Soln. :
bl no

 For a 2-bit comparator, each input word is 2 bit long.

 The truth table of a 2-bit comparator is shown in


Table P. 5.16.5.
(C-464) Fig. P. 5.16.3 : Full subtractor using 1 : 8 demux
Pu K

(C-406(a)) Table P. 5.16.5 : Truth table for a 2-bit comparator


 As Din is connected to 1, we get the required minterms
ch

at the Demux outputs.

 We have to OR the required minterms to obtain the D


and Bout outputs as shown in Fig. P. 5.16.3.
Te

Note : We can use IC 74138 as 1 : 8 DEMUX to implement

the Boolean expression with 1 : 8 DEMUX.
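The construction of Ex. 5.16.3 can also be checked in software. The sketch below is an illustration only: with Din tied to 1, a 1 : 8 demultiplexer generates all eight minterms of (A, B, Bin), and OR-ing the appropriate outputs gives the difference D = Σ m (1, 2, 4, 7) and the borrow Bout = Σ m (1, 2, 3, 7) of a full subtractor.

```python
# Full subtractor from a 1 : 8 DEMUX used as a minterm generator
# (Din = 1), as in Ex. 5.16.3.

def demux8(din, s2, s1, s0):
    y = [0] * 8
    y[(s2 << 2) | (s1 << 1) | s0] = din
    return y

def full_subtractor(a, b, bin_):
    y = demux8(1, a, b, bin_)                 # minterms of (A, B, Bin)
    diff = y[1] | y[2] | y[4] | y[7]          # D    = sum of minterms 1, 2, 4, 7
    bout = y[1] | y[2] | y[3] | y[7]          # Bout = sum of minterms 1, 2, 3, 7
    return diff, bout

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for bi in (0, 1):
                d, bo = full_subtractor(a, b, bi)
                assert d == (a - b - bi) % 2
                assert bo == (1 if (a - b - bi) < 0 else 0)
    print("DEMUX based full subtractor verified for all inputs")
```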

Ex. 5.16.4 : Implement the following functions using

demultiplexer :
f1(A, B, C) = m (0, 3, 7)

f2(A, B, C) = m (1, 2, 5)

Soln. :

Step 1 : Select the DEMUX :

 We have to use a 1 : 8 demux because there are 8

inputs.
 Implementation using 1 : 16 demultiplexer is as shown
Step 2 : Implement the circuit :
in Fig. P. 5.16.5.
 Fig. P. 5.16.4 shows the implementation.


4. Hexadecimal to binary encoder.

5.18 Priority Encoder : SPPU : Dec. 08.

University Questions.
Q. 1 What is priority encoder ? (Dec. 08, 2 Marks)

Definition :
 This is a special type of encoder with priorities assigned
to all its input lines. If two or more input lines are “1” at

ns e
the same time, then the input line with highest priority
will be considered.

io dg
Block diagram and truth table :
 The block diagram of a priority encoder is shown in
Fig. 5.18.1(a) and its truth table is shown in
(C-1969) Fig. P. 5.16.5 : 2 bit comparator using 1 : 16 DEMUX
Fig. 5.18.1(b).
5.17 at leEncoders :
ic w
Definition :
 Encoder is a combinational circuit which is designed to
perform the inverse operation of the decoder.
bl no

 An encoder has “n” number of input lines and “m”


number of output lines.
 An encoder produces an m bit binary code
corresponding to the n bit digital number, applied at its
Pu K

input.
(a) Block diagram of a priority encoder
Block diagram :
ch

 Block diagram of an encoder is shown in Fig. 5.17.1.


Te

(C-466) Fig. 5.17.1 : Block diagram of an encoder (b) Truth table of a priority encoder
(C-467) Fig. 5.18.1
 The encoder accepts an n input digital word and
converts it into an m bit another digital word.  Priorities are given to the input lines. If two or more

 For example a BCD number applied at the input can be input lines are “1” at the same time, then the input line
converted into a binary number at the output. with highest priority will be considered.
 The internal combinational circuit of the encoder is  There are four inputs, D0 through D3 and two outputs
designed accordingly. Y1 and Y0 . Out of the four inputs D3 has the highest
5.17.1 Types of Encoders : priority and D0 has the lowest priority.
 That means if D3 = 1 then Y1 Y0 = 11 irrespective of the
 The types of encoders which we are going to discuss are
other inputs. Similarly if D3 = 0 and D2 = 1 then
as follows :
Y1 Y0 = 10 irrespective of the other inputs.
1. Priority encoders.
 Carefully go through the truth table shown in
2. Decimal to BCD encoder.
Fig. 5.18.1(b) to get the feel of priority encoder
3. Octal to binary encoder.
operation.
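The 4-input priority encoder behaviour described above (D3 highest priority, D0 lowest) can be captured in a few lines. The Python sketch below is only an illustration written for this text; it scans the inputs from the highest priority downwards and returns the code Y1 Y0 of the highest active line. The all-inputs-zero case is simply returned as code 00 here; the book's truth table of Fig. 5.18.1(b) may treat that case separately.

```python
# Behavioural model of the 4-input priority encoder of Fig. 5.18.1 :
# the highest numbered active input decides the output code.

def priority_encoder(d0, d1, d2, d3):
    for index, bit in ((3, d3), (2, d2), (1, d1), (0, d0)):   # high to low priority
        if bit:
            return (index >> 1) & 1, index & 1                # (Y1, Y0)
    return 0, 0                                               # no input active

if __name__ == "__main__":
    print(priority_encoder(1, 0, 0, 1))   # D3 = 1 wins      -> (1, 1)
    print(priority_encoder(1, 1, 1, 0))   # D2 is highest    -> (1, 0)
    print(priority_encoder(1, 0, 0, 0))   # only D0 active   -> (0, 0)
```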


5.18.1 Priority Encoders in the IC Form :


 The priority encoders are available in the integrated
circuit form. The available encoders are :

74147 10 : 4 Priority encoder (Decimal to BCD)


74148 Octal to binary priority encoder

 74147 is basically a decimal to BCD priority encoder.


5.18.2 Decimal to BCD Encoder : (b) Pin configuration of 74147
 The block diagram of decimal to BCD encoder is shown

ns e
(C-472) Fig. 5.18.3
in Fig. 5.18.2.
 A1 to A9 are the active low inputs and A, B, C, D are the

io dg
 The truth table for a decimal to BCD encoder is as given
in Table 5.18.1. active low outputs. Therefore bubbles have been added
in Fig. 5.18.3(a).
(C-470(a))Table 5.18.1 : Truth table for
decimal to BCD encoder Truth table :

at le  Fig. 5.18.3(c) shows the truth table of decimal to BCD


encoder IC 74147.
ic w
 A1 to A9 are the inputs with A1 having the lowest priority
and A9 having the highest priority.
bl no

 From truth table we conclude that all nine inputs are


ACTIVE LOW representing decimal digit from 1 to 9. In
response to input, chip produces inverted BCD code
corresponding to highest numbered ACTIVE INPUT.
Pu K
ch
Te

(C-470) Fig. 5.18.2 : Block diagram of decimal to BCD encoder

5.18.3 Decimal to BCD Encoder MSI IC 74147 :


 IC 74147 is basically a 10:4 encoder or Decimal to BCD
encoder.

Pin configuration and logic symbol : (C-8309)Fig. 5.18.3(c) : Truth table for 74147

 Fig. 5.18.3(b) shows the pin configuration and – –– –


 When all inputs are held high, output D C B A = 1 1 1 1
Fig. 5.18.3(a) shows the logic symbol of IC 74147.
i.e. DCBA = (0000)2 = (0)10. Thus a decimal 0 is
represented.

 The truth table also shows the normal BCD output


which is actually the inversion of the output of IC.

 As A9 is the highest priority input, if A9 = 0 then the remaining input lines are treated as don’t care and the inverted BCD output is produced as D̄ C̄ B̄ Ā = 0 1 1 0. The same logic is applicable to the other inputs.
(a) Logic symbol of IC 74147
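Based only on the behaviour stated above (active low inputs A1 to A9, active low BCD outputs, highest numbered active input wins, all-inputs-high representing decimal 0), a behavioural sketch of a 74147 style encoder looks as follows. This is an illustration written for this text, not a reproduction of the manufacturer's logic diagram :

```python
# Behavioural model of a decimal to BCD priority encoder with active low
# inputs and active low outputs, in the style of IC 74147.

def encode_74147(a):
    """a is a dict {1..9 : 0 or 1} of active low inputs (0 = asserted)."""
    digit = 0
    for k in range(9, 0, -1):            # highest numbered active input wins
        if a[k] == 0:
            digit = k
            break
    bcd = [(digit >> i) & 1 for i in (3, 2, 1, 0)]     # D C B A
    return [1 - bit for bit in bcd]                    # outputs are inverted

if __name__ == "__main__":
    idle = {k: 1 for k in range(1, 10)}
    print(encode_74147(idle))              # all high -> [1, 1, 1, 1]  (decimal 0)
    pressed = {**idle, 9: 0, 4: 0}         # 9 and 4 active; 9 has priority
    print(encode_74147(pressed))           # -> [0, 1, 1, 0]  (inverted 1001)
```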


5.19 Decoder : (C-480(a)) Table 5.19.1 : Truth table of a 2 line to 4 line


decoder
Definition :

 A decoder is a combinational circuit with “n” inputs and


n
to a maximum 2 outputs that are related to each other
with a certain rule.
Block diagram :

 Fig. 5.19.1 shows the block diagram of a decoder. It has

ns e
n
“n” inputs and to a maximum 2 outputs.
 The Boolean expressions for the four outputs are,

io dg
–– –
D0 = A B D1 = A B

D2 = A B and D3 = A B

at le
(C-479) Fig. 5.19.1 : Block diagram of a decoder
5.19.2 Demultiplexer as Decoder :
 We can use a demultiplexer as a decoder.
ic w
 Decoder is identical to a demultiplexer without any data  Let us see how to operate a 1 : 4 demux as 2 : 4
input. It performs operations which are exactly opposite decoder.
bl no

to those of an encoder.
 Consider Fig. 5.19.3 which shows a 1 : 4 demultiplexer.
Typical applications :
 Din is the data input, S1 S0 are the select lines and Y3
1. Code converters.
through Y0 are the outputs.
Pu K

2. BCD to seven segment decoders.


 In order to operate it as 2 : 4 decoder, we have to use S1
3. Nixie tube decoders.
ch

S0 as inputs, keep Din open and use Y3 to Y0 as outputs


4. Relay actuators.
as shown in Fig. 5.19.3.
5.19.1 2 to 4 Line Decoder :
Te

Block diagram :

 The block diagram of a 2 to 4 line decoder is shown in


Fig. 5.19.2. A and B are the two inputs whereas D0
through D3 are the four outputs.

(C-483) Fig. 5.19.3 : 1 : 4 demux as 2 : 4 decoder
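A decoder's one-hot behaviour is conveniently expressed as code. The sketch below is only an illustration: it models a 2 : 4 decoder by evaluating the four product terms D0 = Ā B̄, D1 = Ā B, D2 = A B̄ and D3 = A B, which is also exactly what the 1 : 4 demultiplexer of Fig. 5.19.3 produces when its data input is tied to 1.

```python
# 2 : 4 decoder model : exactly one output is 1 for each input combination.

def decoder_2to4(a, b):
    na, nb = 1 - a, 1 - b
    return (na & nb,   # D0 = A'B'
            na & b,    # D1 = A'B
            a & nb,    # D2 = AB'
            a & b)     # D3 = AB

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            outs = decoder_2to4(a, b)
            assert sum(outs) == 1              # outputs are one-hot
            print("AB =", a, b, " D0..D3 =", outs)
```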

5.19.3 3 to 8 Line Decoder :

Block diagram :
(C-480) Fig. 5.19.2 : Block diagram of a 2 line to
4 line decoder
 The block diagram of a 3 to 8 line decoder is shown in
Truth table :
Fig. 5.19.4.
 Table 5.19.1 shows the truth table which explains the
 A, B and C are the three inputs whereas D0 through
operation of the decoder. It shows that each output is
D7 are the eight outputs.
“1” for only a specific combination of inputs.


(C-485(a)) Table 5.19.3 : Truth table

(C-7773) Fig. 5.19.4 : Block diagram of a 3 line to 8 line


decoder

Truth table :

ns e
 The truth table of 3 to 8 line decoder is shown in
Table 5.19.2.

io dg
(C-6477) Table 5.19.2 : Truth table for 3 to 8 line decoder 5.19.5 IC 74138 / IC 74238 as 3 : 8 Decoder :

 The pin configuration of the 3 : 8 decoder IC 74138 is

at le 
shown in Fig. 5.19.6.

We have already discussed this IC as a demultiplexer.


ic w
Let us now see how to use it as a decoder.

 A0 , A1 , A2 are the three address lines or the three input


bl no

lines. We have to apply the 3-bit binary data to these

inputs. So these lines act as the 3-data input lines of the

3:8 line decoder.


Pu K

– –
 O0 to O7 are the 8-output active low lines. These lines
Boolean expressions :
ch

act as 8-data output lines of the 3 : 8 line decoder.


 The Boolean expressions for the eight outputs in terms
– –
of the three inputs are as given in Table 5.19.2.  There are three enable inputs out of which E1 and E2 are

5.19.4 1 : 8 DEMUX as 3:8 Decoder : the active low enable inputs whereas E3 is an active high
Te

enable input.
 In order to operate 1 : 8 Demux as a 3:8 line decoder,
– –
the connections are to be made as shown in Fig. 5.19.5.  We have to make E1 = E2 = 0 and E3 = 1 in order to

 The data input Din is connected to logic 1 permanently. enable the IC.
The three select inputs S0, S1 and S2 will act as three
input lines of the decoder and Y0 to Y7 are the 8-output

lines.

(a) Pin configuration of IC 74138

Pin names Description


(C-485) Fig. 5.19.5 : Use of 1:8 demux as 3:8 decoder
A0 – A2 Address inputs (Select lines)
 The truth table of this circuit is as given in Table 5.19.3
– –
E1 – E2 Enable inputs (Active Low)
which shows that the circuit works as a 3 : 8 decoder.


Pin names Description Ex. 5.19.1 : Implement the following Boolean function
E3 Enable Input (Active HIGH) using a 3 : 8 decoder and external gates.
– – f (A, B, C) =  (2, 4, 5, 7)
O0 – O7 Outputs (Active Low)
Soln. :
(b) Pin names and description
 The decoder produces minterms. The outputs Y2 , Y4 , Y5
(C-489) Fig. 5.19.6
and Y7 are ORed to produce the required output.
 The truth table for IC 74138 is shown in Table 5.19.4.
 The logic implementation of the given logic function is
(C-6190) Table 5.19.4 : Truth table of 74138
shown in Fig. P. 5.19.1.

ns e
io dg
at le
ic w
(C-486) Fig. P. 5.19.1 : Implementation of Boolean
bl no

A0 = LSB, A2 = MSB, H = HIGH Voltage Level, equation using decoder and gate
L = LOW Voltage Level,  = Don’t care condition
Ex. 5.19.2 : Implement the following multiple output
 Fig. 5.19.7 shows the connection diagram for IC 74138
used as a 3 : 8 decoder. function using a suitable decoder.
Pu K

f1 (A, B, C) =  m (0, 4, 7) + d (2, 3),

f2 (A, B, C) =  m (1, 5, 6)
ch

f3 (A, B, C) =  m (0, 2, 4, 6).

Soln. :

 Let us use 3 line to 8 line decoder. f1 consists of don’t


Te

care conditions. So we will consider them to be logic 1.

 The implementation of the given three functions using a


3 : 8 decoder and a few OR gates is shown in
Fig. P. 5.19.2.
(C-3581) Fig. 5.19.7 : IC 74138 connected as 3:8 decoder

5.19.6 Combinational Logic Design Using


Decoders :

 In order to implement a logic function using a decoder,


the outputs corresponding to all the minterms will be
NANDed or bubbled ORed if the decoder outputs are
active low type (for example decoder IC 74138 has
active low outputs).

 Following examples demonstrate the use of decoder for


(C-487) Fig. P. 5.19.2 : Implementation of the given Boolean
logic circuit design. functions using a 3 : 8 decoder and OR gates


Ex. 5.19.3 : Design a full adder using 3 : 8 decoder IC Ex. 5.19.4 : Implement a 3-bit binary to gray code

74138. Dec. 13, Dec. 16, 6 Marks converter using decoder IC 74138.
Soln. : May 11, 8 Marks.
 The truth table of a full adder is shown in Soln. :
Table P. 5.19.3. Step 1 : Write the truth table relating the binary and
(C-7621)Table P. 5.19.3 : Truth table of full adder gray codes :

 The truth table is as follows :

ns e
(C-8138) TableP. 5.19.4 : Truth table relating the
binary and gray codes

io dg
at le
ic w
bl no

 The A, B and Cin inputs are applied to the A0, A1 and A2 inputs of the decoder.
 The outputs of 74138 are active low. Hence we should apply Ō1, Ō2, Ō4 and Ō7 to a NAND gate as shown in Fig. P. 5.19.3 to obtain the sum output.
 Similarly, outputs Ō3, Ō5, Ō6 and Ō7 are applied to another NAND gate to obtain the carry output.
 Implementation of the full adder is shown in Fig. P. 5.19.3.
 All the enable terminals are connected to their respective active levels to enable the IC.
(C-491) Fig. P. 5.19.3 : Full adder implemented using IC 74138
Step 2 : Obtain the expressions for G2, G1 and G0 :
 Refer to the shaded portions of Table P. 5.19.4. Normally the expressions for G2, G1 and G0 would have been written in the SOP form as,
 G2 = O4 + O5 + O6 + O7
 G1 = O2 + O3 + O4 + O5
 G0 = O1 + O2 + O5 + O6
 But the outputs of IC 74138 are active low, i.e. only the inverted outputs Ō0 to Ō7 are available. Hence we will have to convert these equations in terms of the inverted “O” outputs.
But by De Morgan's theorem, Ā + B̄ = (A · B)′
 ∴ G2 = (Ō4 · Ō5 · Ō6 · Ō7)′ …(1)
Similarly G1 = (Ō2 · Ō3 · Ō4 · Ō5)′ …(2)
And G0 = (Ō1 · Ō2 · Ō5 · Ō6)′ …(3)
 That is, each gray code output is obtained by NANDing the corresponding active low decoder outputs.
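Equations (1) to (3) can be verified exhaustively with a few lines of Python. The sketch below is only an illustration: it models the 3 : 8 decoder with active low outputs, forms each gray bit as the NAND of the listed outputs and compares the result with the usual binary to gray conversion G = B ⊕ (B >> 1).

```python
# Check of the binary to gray converter built from a 3 : 8 decoder with
# active low outputs (equations (1) to (3) above).

def decoder_3to8_active_low(b2, b1, b0):
    idx = (b2 << 2) | (b1 << 1) | b0
    return [0 if i == idx else 1 for i in range(8)]   # selected line goes low

def nand(*bits):
    return 0 if all(bits) else 1

for n in range(8):
    b2, b1, b0 = (n >> 2) & 1, (n >> 1) & 1, n & 1
    o = decoder_3to8_active_low(b2, b1, b0)
    g2 = nand(o[4], o[5], o[6], o[7])      # equation (1)
    g1 = nand(o[2], o[3], o[4], o[5])      # equation (2)
    g0 = nand(o[1], o[2], o[5], o[6])      # equation (3)
    gray = n ^ (n >> 1)
    assert (g2, g1, g0) == ((gray >> 2) & 1, (gray >> 1) & 1, gray & 1)
print("decoder + NAND gates reproduce the 3 bit gray code")
```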


(C-492) Fig. P. 5.19.4 : Binary to gray code converter using decoder 74138
 Equations (1), (2) and (3) are implemented as shown in Fig. P. 5.19.4. Note that each equation represents a 4-input NAND gate.
Note : We can use IC 74138 as a 3 : 8 decoder in order to implement the Boolean equations using a 3 : 8 decoder.
Ex. 5.19.5 : Implement full subtractor using decoder IC 74138 and 2-input NAND gate ICs 7400.
             Dec. 15, May 18, 6 Marks
Soln. :
Truth table :
 The truth table for full subtractor is as shown in Table P. 5.19.5.
(C-7461) Table P. 5.19.5 : Truth table for full subtractor
 The A, B, Bin inputs are applied to the A0, A1 and A2 inputs of the decoder.
 The outputs of IC 74138 are active low. Hence they should be applied to NAND gates as shown in Fig. P. 5.19.5 to obtain the difference and borrow outputs.
Logic diagram :
 Implementation of full subtractor is as shown in Fig. P. 5.19.5.
 All the enable terminals are connected to their respective active levels to enable the decoder IC.
(C-1317) Fig. P. 5.19.5
 Implementation is to be done using 2 input NAND gates. Hence the output equations for “Borrow” and “Difference” are modified as,
 Difference D = (Ō1 · Ō2 · Ō4 · Ō7)′ = (Ō1 · Ō2)′ + (Ō4 · Ō7)′
 Borrow Bout = (Ō1 · Ō2 · Ō3 · Ō7)′ = (Ō1 · Ō2)′ + (Ō3 · Ō7)′
 Each bracketed term is realized by one 2-input NAND gate and the final OR is again built from NAND gates.
Ex. 5.19.6 : Implement Gray to Binary code converter using suitable decoder.


Soln. : Solve it yourself.


 Their connection leads are brought out, and by applying
5.19.7 Advantage of Decoder Realization : a forward voltage to the segments to be turned on we
can display any number between 0 and 9.
 The decoder may not be as efficient as the MUX in

realizing a single Boolean function because a NAND  For example if we want to display the number 7 then
the segments a, b and c should be turned on and all
gate is required to be used on the output side.
other segments should be off.
 But the decoder can be preferred over a MUX in
 Similarly the other numbers can be displayed as shown
realizing multiple Boolean functions simultaneously.

ns e
in Fig. 5.20.2.
5.20 Case Study : Combinational Logic

io dg
Design of BCD to 7 Segment
Display Controller :
5.20.1 Seven Segment LED Display :


at le
All of us know what a seven segment display is.

Fig. 5.20.1 shows the construction of a seven segment


ic w
display.
bl no

(B-2343) Fig. 5.20.2 : Various digits displayed with seven


segment display

5.20.2 Types of Seven Segment Displays :


Pu K

There are two types of seven segment LED displays :

1. Common anode 2. Common cathode


ch

(B-2342) Fig. 5.20.1 : Standard form of seven segment display 5.20.3 Common Anode Display :
 We can display any number from 0 to 9 by turning on
 A common anode seven segment display is as shown in
various combinations of the segments as shown in
Fig. 5.20.3.
Te

Table 5.20.1.
 Here as the name indicates the anode terminals of all
(B-2343(a)) Table 5.20.1
LED segments are connected together.

(C-495) Fig. 5.20.3 : Common anode LED display

5.20.4 Common Cathode Display :


 Actually each segment (a to f) is nothing but an LED in a
 A common cathode seven segment LED display is as
segment form.
shown in Fig. 5.20.4.


 Now as the counter output is in the BCD (binary coded


decimal) form, which has only four lines, it cannot drive
the seven segment display directly.

 Therefore we have to use a BCD to seven segment


decoder / driver IC (integrated circuit) between the
counter output and seven segment display.

 The common anode type display is being used here.

ns e
(C-496) Fig. 5.20.4 : Common cathode LED display  The decoder accepts a four bit BCD count from a

io dg
 As the name indicates, the cathode terminal is made counter, converts it to a seven bit code suitable for the
– –
common and all the anode terminals are brought out seven segment display ( a to g ) and drives the display.

separately.  To turn on a segment the corresponding decoder


output goes low, and sinks current (for common anode

at le
A current limiting resistor is externally connected in

series with each segment.



display).

A current limiting resistor is connected in series with


ic w
 The common cathode point is connected to ground and
each segment.
the anodes of the segments to be illuminated are
bl no

connected to positive supply voltage VCC. 5.20.6 BCD to Seven Segment Display
Driver (Common Anode Display) :
5.20.5 Use of a Decoder for Driving the Seven
Segment Display :  Let us assume that the type of display is common
Pu K

anode.
 The use of a BCD to seven segment decoder / driver for
 Hence the output of the converter should be “0” if a
ch

driving the seven segment display is shown in


display segment is to be turned on.
Fig. 5.20.5.
Truth table :
Te

 The truth table is as shown in Table 5.20.2.

(C-499(a)) Table 5.20.2 : Truth table for BCD to 7 segment


decoder

(C-497) Fig. 5.20.5 : Driving the segment common anode


display

K-maps and simplifications :


 Many times the seven segment LED displays are
 The K maps for the seven outputs are as shown in
connected at the output of digital ICs such as counters.
Fig. 5.20.6.
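A lookup-table model is often the quickest way to sanity check a display driver design before drawing the K-maps. The sketch below is only an illustration and assumes the conventional segment patterns for the digits 0 to 9 (the exact patterns chosen in Table 5.20.2, in particular for 6 and 9, may differ slightly); because a common anode display is assumed, a segment is driven on by a logic 0 :

```python
# BCD to seven segment lookup for a common anode display :
# a 0 in the output turns the corresponding segment (a..g) ON.

SEGMENTS_ON = {            # conventional active segments per digit
    0: "abcdef", 1: "bc",     2: "abdeg",  3: "abcdg",   4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",    8: "abcdefg", 9: "abcdfg",
}

def bcd_to_7seg(d):
    assert 0 <= d <= 9, "input must be a valid BCD digit"
    return {seg: 0 if seg in SEGMENTS_ON[d] else 1 for seg in "abcdefg"}

if __name__ == "__main__":
    for digit in range(10):
        outs = bcd_to_7seg(digit)
        print(digit, "".join(str(outs[s]) for s in "abcdefg"))
```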



(C-499(b)) Fig. 5.20.6 : K maps for the outputs of BCD to seven


segment decoder

Realization :

 The logic diagram for BCD to seven segment decoder is

shown in Fig. 5.20.7.

(C-499) Fig. 5.20.6(Contd...)


Q. 6 With suitable block diagram explain the operation of

n-bit serial adder.

Q. 7 Write a note on a 4 bit parallel binary adder.

Q. 8 Draw pin diagram of IC 7483 and explain its

operation as 4-bit binary adder.

Q. 9 What is half subtractor ? Draw the logic diagram and

ns e
truth table of half subtractor.

io dg
Q. 10 What is meant by a full subtractor ? Draw a full

subtractor circuit.

Q. 11 Draw the circuit diagram of single digit BCD adder

at le Q. 12
using IC 7483.

Implement a full subtractor circuit using only NAND


ic w
gates.

Q. 13 In detail explain the working of 4-bit binary


bl no

adder/subtractor using IC 7483.

Q. 14 For one digit BCD adder :

1. Draw circuit diagram.


Pu K

2. Explain its working.


ch

3. Can (11)10 and (9)10 be added by this circuit ?

Justify.

Q. 15 Draw half subtractor circuit. Use NAND gates only.


Te

Explain its working.


(C-501) Fig. 5.20.7 : BCD to seven segment decoder
Q. 16 Draw 4 bit adder/subtractor circuit using IC 7483 and

IC 7486. Explain its working.


Review Questions
Q. 17 Draw the circuit of 4 bit binary parallel adder. Explain

Q. 1 Explain the working of a half adder ? Draw its logic its working.

diagram. Q. 18 What is meant by a multiplexer ? Explain with block


Q. 2 Draw the logic diagram of full adder and its truth diagram the principle of multiplexing.
table.
Q. 19 Explain with diagram and truth table the operation of
Q. 3 Give the circuit diagram of a full adder using NAND
4 : 1 Mux.
gate and explain. Give its truth table.
Q. 20 Explain briefly with pin-diagram the following ICs
Q. 4 What is similar between a half adder and a half
IC-74153,
subtractor ?
Q. 21 What is a ‘Multiplexer tree’ ?
Q. 5 Build a full adder from half adder circuits.


Q. 22 Give applications of multiplexers. Q. 26 Explain how demultiplexer can be used as a

Q. 23 Explain with diagram the working of 1 to 8 decoder.

demultiplexer. Q. 27 What is the necessity of multiplexer ?

Q. 24 Explain with diagram the working of 1 to 16 Q. 28 With a neat block diagram explain the function of an

demultiplexer. encoder.

Q. 25 Explain with pin-diagram the following ICs : Q. 29 What is meant by a priority encoder ? Give example.

ns e
IC74151, IC 74138. Q. 30 What do you mean by a ‘Decoder’ ? Give its

applications.

io dg



at le
ic w
bl no
Pu K
ch
Te



Unit 3

Chapter

6
Flip Flops

Syllabus
Introduction to sequential circuits : Difference between combinational circuits and sequential circuits;
Pu K

Memory element-latch & Flip-Flop.


Flip-Flops : Logic diagram, Truth table & excitation table of SR, JK, D, T flip flops; Conversion from one
ch

FF to another, Study of flip flops with regard to asynchronous and synchronous, Preset & clear, Master
slave configuration; Study of 7474, 7476 flip flop ICs.
Case study : Use of sequential logic design in a simple traffic light controller.
Te

Chapter Contents
6.1 Introduction 6.11 Master Slave (MS) JK Flip Flop
6.2 Triggering Methods 6.12 Preset and Clear Inputs
6.3 Gated Latches (Level Triggered SR Flip Flop) 6.13 Various Representations of Flip Flops
6.4 The Gated S-R Latch (Level Triggered S-R 6.14 Excitation Table of Flip-Flop
Flip Flop)
6.5 The Gated D Latch (Clocked D Flip Flop) 6.15 Conversion of Flip Flops
6.6 Gated JK Latch (Level Triggered JK Flip Flop) 6.16 Applications of Flip Flops
6.7 Edge Triggered Flip Flops 6.17 Study of Flip-Flop ICs
6.8 Edge Triggered D Flip Flop 6.18 Analysis of Clocked Sequential Circuits
6.9 Edge Triggered J-K Flip Flop 6.19 Design of Clocked Synchronous State
Machine using State Diagram
6.10 Toggle Flip Flop (T Flip Flop) 6.20 Case Study : Use of Sequential Logic Design
in a Simple Traffic Light Controller


6.1 Introduction :  Fig. 6.1.1 shows the block diagram of a sequential circuit
which includes the memory element in the feedback
Combinational circuits : path.
Definition :
Present state of sequential circuit :
 A combinational circuit is a logic circuit the output of
 The data stored by the memory element at any given
which depends only on the combination of the inputs.
instant of time is called as the present state of the
The output does not depend on the past value of inputs
sequential circuit.
or outputs.
Next state :
 Hence combinational circuits do not require any

ns e
 The combinational circuit shown in Fig. 6.1.1 operates
memory (to store the past values of inputs or outputs).
on the external inputs and the present state to produce
Till now we have discussed only the combinational

io dg

new outputs.
circuits.
 Some of these new outputs are stored in the memory
 The output of combinational circuit at any instant of
element and called as the next state of the sequential
time, depends only on the levels present at input circuit.

at le
terminals. It does not depend on the past status of
inputs.
 The most important part of the sequential circuit seems
to be the memory element. The memory element of
Fig. 6.1.1 is known as Flip Flop (FF). It is the basic
ic w
 The combinational circuits do not use any memory.
Therefore the previous states of input does not have memory element.
any effect on the present state of the circuit.
6.1.1 Clock Signal :
bl no

 Also the sequence in which the inputs are being applied


 The clock signal shown in Fig. 6.1.2 is a timing signal.
has no effect on the output of a combinational circuit.
Every sequential signal will have this timing signal
 We do not have to use any timing and synchronization
applied as an input signal as an input signal.
Pu K

signal such as clock signal in a combinational circuit.


 Clock is a rectangular signal as shown in Fig. 6.1.2, with
Sequential circuits : a duty cycle equal to 50%. That means its on time is
ch

Definition : equal to its off time.

 In the sequential circuit, the timing parameter also  The clock signal repeats itself after every T seconds.
needs to be taken into consideration. Hence the clock frequency is f = 1/T.
Te

 The output of a sequential circuit depends on the


present time inputs, the previous output (past) and the
sequence in which the inputs are applied.
 In order to provide the previous input or output a
memory element is required to be used. Thus a
sequential circuit needs to use a memory element as
shown in Fig. 6.1.1.
(C-561) Fig. 6.1.2 : Clock signal

6.1.2 Comparison of Combinational and


Sequential Circuits :
SPPU : Dec. 09, Dec. 13, Dec. 18, Dec. 19
.University Questions.
Q. 1 Explain the difference between combinational and
sequential circuits. (Dec. 09, 4 Marks)
Q. 2 Explain the difference between combinational and
sequential circuit. Design S-R flip-flop using J-K
(C-560) Fig. 6.1.1 : Block diagram of a sequential circuit flip-flop. (Dec. 13, 6 Marks)


Q. 3 Give comparison of combinational circuit with Operation :


sequential circuit. Draw and explain one-bit memory  Assume that output of gate-1 i.e. Q = 1. Hence B2 = 1.
cell using NAND gates. (Dec. 18, 6 Marks) –
 As B2 = 1, output of gate-2 i.e. Q = 0. This makes B1 = 0.
Q. 4 Compare combinational circuits with sequential
Hence Q continues to be equal to 1.
circuits. Convert JK Flip-Flop into SR flip-flop.
 Similarly we can demonstrate that if we start with Q = 0,
(Dec. 19, 6 Marks) –
then we end up obtaining Q = 0 and Q = 1.
Sr. No.   Parameter            Combinational circuits                     Sequential circuits
1.        Output depends on    Inputs present at that instant of time.    Present inputs and past inputs/outputs.
2.        Memory               Not necessary                              Necessary
3.        Clock input          Not necessary                              Necessary
4.        Examples             Adders, subtractors, code converters       Flip flops, shift registers, counters
Conclusions :
 From the above discussion we can draw the following conclusions :
 The outputs of the circuit (Q and Q̄) will always be complementary. That means if Q = 0 then Q̄ = 1 and vice versa. They will never be equal; Q = Q̄ = 0 or 1 is an

invalid state.
This circuit has two stable states.

ic w
 One of them corresponds to Q = 1, Q = 0 and it is
6.1.3 1-Bit Memory Cell called as 1 state or set state. Whereas the other state
(Basic Bistable Element) : SPPU : Dec. 18 –
bl no

corresponds to Q = 0, Q = 1 and it is called as 0 state or


.University Questions. reset state.

Q. 1 Give comparison of combinational circuit with –


 If the circuit is in the reset state (Q = 0, Q = 1), then it
sequential circuit. Draw and explain one-bit memory will continue to be in the reset state and if it is in the set
Pu K

cell using NAND gates. (Dec. 18, 6 Marks) –


state (Q = 1, Q = 0) then it will continue to remain in the
Definition :
set state.
ch

 Flip-flop is also known as the basic digital memory  This property of the circuit shows that it can store 1 bit
circuit. of digital information. Therefore it is called as a 1-bit
 It has two stable states namely logic 1 state and logic 0 memory cell.
state. We can design it either using NOR gates or NAND 6.1.4 Latch :
Te

gates.
 The cross coupled inverter of Fig. 6.1.3 is capable of
Circuit diagram :
locking or latching the information. Hence this circuit is
 A flip-flop can be designed by using the fundamental also called as a latch.
circuit shown in Fig. 6.1.3.  The disadvantage of the cross coupled inverter circuit is
 NAND gates 1 and 2 are basically acting as inverters. that we cannot enter the desired digital data into it.
Hence this circuit is called as a cross coupled inverter.  This disadvantage can be overcome by modifying the
 Output of gate 1 is connected to the input of gate-2 circuit as shown in Fig. 6.1.4.

and output of gate 2 is connected to input of gate-1 as  This modification will allow us to enter the desired
shown in Fig. 6.1.3. digital data into the circuit.

(C-562) Fig. 6.1.3 : A cross coupled inverter


as memory element (C-563) Fig. 6.1.4 : Modified memory cell


Operation :

Case I : S = R = 0 (No change) :
 The outputs of gates 1 and 2 will become 1.
 Let Q = 0 and Q̄ = 1 initially. Hence both the inputs to
gate 3 are 1 and the inputs to gate-4 are (01).
 So gate-3 output i.e. Q = 0 and gate-4 output i.e. Q̄ = 1.
 Thus with S = R = 0, there is no change in the state of
outputs.

Case II : S = 1, R = 0 (Set) :
 Since S = 1 and R = 0, one of the inputs to gate-3 will
be 0. This will force the Q output to 1.
 Hence both the inputs to gate-4 will be 1. This forces Q̄
to 0.
 Hence for S = 1, R = 0, the outputs are Q = 1 and Q̄ = 0.
This is the set state.

Case III : S = 0, R = 1 (Reset) :
 If S = 0, R = 1 then one of the inputs to gate-4 will be 0.
This will force the Q̄ output to 1.
 Hence both the inputs to gate 3 will be 1. This forces Q
to 0.
 Thus for S = 0, R = 1, the outputs are Q = 0, Q̄ = 1. This
is the reset state or clear state.

Case IV : S = R = 1 (Race : Prohibited) :
 If S = R = 1 then outputs of gates 1 and 2 will be zero.
 Hence one of the inputs to gates 3 and 4 will be 0.
 So both the outputs Q and Q̄ will try to become 1. It is
not allowed as Q and Q̄ should be complementary.
Hence the S = R = 1 condition is prohibited.

6.1.5 Symbol and Truth Table of S-R Latch :
 The symbol and truth table of S-R latch are as shown in
Figs. 6.1.5(a) and (b) respectively.
 In the truth table Qn and Q̄n represent the present states
of outputs i.e. these are the outputs before applying a
new set of inputs.
 Qn + 1 and Q̄n + 1 represent the next states of outputs i.e.
the outputs after applying a new set of inputs.

(C-569) (a) Symbol
Qn and Q̄n : Present states ; Qn + 1 and Q̄n + 1 : Next states
(C-8074) (b) Truth table of S-R latch
Fig. 6.1.5

Summary of operation of S-R latch :
The summary of operation of an S-R latch is as follows :
 For S = R = 0 the latch output does not change.
 S = 0, R = 1 is called as the “Reset” condition as Q = 0
and Q̄ = 1.
 S = 1, R = 0 is called as the “Set” condition as Q = 1
and Q̄ = 0.
 S = R = 1 is the prohibited state. The output is
unpredictable. This condition should therefore be
avoided.

Race condition :
 The condition S = R = 1 is called as the “Race” condition.
 When any one input to a NOR gate is 1, its output
becomes 0. Thus both the outputs will try to become 0.
This is called as the RACE condition.

6.1.6 Characteristic Equation :
 One way of explaining the behaviour of a latch or flip
flop is to use its truth table.
 The truth table is also called as the excitation table.
 Another way of doing it is to use a special type of
equations called characteristic equations.


 The characteristic equation of a flip-flop is the equation
which relates the next state of the flip flop or latch, Qn + 1
or Q̄n + 1, to the current state and inputs (Qn, S and R).
 The characteristic equation is actually obtained from the
truth table of the flip flop or latch, using the K-map.

Characteristic equation of SR latch :
 Refer to the truth table of SR latch and write the K-map
for the next state of output i.e. Qn + 1 as shown in
Fig. 6.1.6.

(C-570) Fig. 6.1.6 : K-map for next state Qn + 1 of SR latch

 After simplification, the characteristic equation of SR
latch is given by,
Qn + 1 = S + R̄·Qn …(6.1.1)
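 The characteristic equation can be cross-checked with a few lines of code. The following Python sketch (our own illustration, not part of the textbook material; the function name is ours) evaluates Qn + 1 = S + R̄·Qn for every input combination, treating S = R = 1 as the invalid (race) input.

```python
# Sketch (assumption) : evaluating the characteristic equation Qn+1 = S + R'.Qn
def sr_next_state(s, r, qn):
    """Next state of an S-R latch from its characteristic equation."""
    if s == 1 and r == 1:
        return None              # prohibited (race) input combination
    return s | ((1 - r) & qn)    # Qn+1 = S + R'.Qn

for s in (0, 1):
    for r in (0, 1):
        for qn in (0, 1):
            print(f"S={s} R={r} Qn={qn} -> Qn+1={sr_next_state(s, r, qn)}")
```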

6.1.7 NAND Latch [S-R Latch using NAND Gates] :

Logic diagram :
 We can construct an S-R latch with NAND gates as shown
in Fig. 6.1.7(a).
 Note that the outputs of the two NAND gates are cross
connected, in an identical manner as that in the NOR
latch.
 Fig. 6.1.7(a) shows the NAND latch and Fig. 6.1.7(b)
shows its truth table.

(a) NAND latch (b) Truth table
(C-571) Fig. 6.1.7

Operation :
 The operation of the S-R NAND latch is summarised in
Fig. 6.1.8.

Case I : S = 0, R = 0 : Race

(C-572) Fig. 6.1.8(a) : S = 0, R = 0

 When any one input of a NAND gate becomes 0, its
output is forced to 1.
 Here S = R = 0.
 Q and Q̄ both will be forced to be equal to 1.
 This is an indeterminate state and hence should be
avoided.
 This is also called as the Race condition.

Case II : S = 0, R = 1 : Reset

(C-573) Fig. 6.1.8(b)

 Since S = 0, it forces Q̄ to be 1.
 Hence both inputs to NAND-1 are 1.
 Hence Q = 0.
 Thus with S = 0 and R = 1 the outputs are Q = 0 and
Q̄ = 1.
 This is the reset condition.

Case III : S = 1, R = 0 : Set

(C-573(a)) Fig. 6.1.8(c)

 Since R = 0, Q is forced to 1.
 Hence both inputs to NAND-2 are 1.
 Hence Q̄ = 0.


 Thus with S = 1 and R = 0 the outputs are Q = 1 and
Q̄ = 0.
 This is the set condition.

1. Level triggered circuits.
2. Edge triggered circuits.

6.2.1 Concept of Level Triggering :


Definition :
 The latch or flip-flop circuits which respond to changes in
their inputs only if their enable input (E) is held at an
active level, which may be either a HIGH or a LOW level,
are called as level triggered latches or flip-flops.
 Thus these circuits do not respond at the rising or
falling edges of the clock. They only respond to the steady
HIGH or LOW levels of the clock signal.
 Fig. 6.2.1(a) shows the symbol of a level triggered SR flip
flop and Fig. 6.2.1(b) shows the clock signal applied at
its input.

Case IV : S = 1, R = 1 : No change

(C-574) Fig. 6.1.8(d)

Q̄n + 1 = (R · Qn)′ and Qn + 1 = (S · Q̄n)′
Using De-Morgan’s theorem,
Q̄n + 1 = R̄ + Q̄n and Qn + 1 = S̄ + Qn
 Substitute R̄ = 0 and S̄ = 0 to get,
Q̄n + 1 = 0 + Q̄n = Q̄n and Qn + 1 = 0 + Qn = Qn
 Thus there is no change in the outputs if S = R = 1.
 The symbol for S-R latch is shown in Fig. 6.1.9 alongwith
the summary of operation.

Circuit symbol and summary of operation of S-R latch :

 For S = 0, R = 0, both the outputs will be forced to
become 1. This is the RACE condition and should be
avoided.

(a) (b)
(C-576) Fig. 6.2.1 : Concept of level triggering
6.2.2 Types of Level Triggered Flip-flops :

 For S = 0, R = 1, the outputs are Q = 0, Q = 1 and it is
 There are two types of level triggered flip-flops :
Te

called as the Reset condition.


1. Positive level triggered.

 For S = 1, R = 0, the outputs are Q = 1, Q = 0 and it is 2. Negative level triggered.
called as the set condition.
Positive level triggered :
 For S = R = 1, there is no change in the output state.
 If the outputs of a flip-flop respond to the input
changes, only when its clock inputs at HIGH (1) level,
then it is called as the positive level triggered flip-flop.

 The block diagram shown in Fig. 6.2.1(a) is a positive


(C-575) Fig. 6.1.9 : Symbol of S-R latch level triggered S-R FF.

Negative level triggered FF :


6.2 Triggering Methods :
If the outputs of a flip-flop respond to the input
 In the latches and flip-flops, we use the additional signal
changes, only when its clock input is at LOW (0) level, then it
called clock signal.
is called as the negative level triggered flip-flop.
 Depending on which portion of the clock signal the
latch or flip-flop responds to, we can classify them into Note : The level triggering is not used practically, due to
two types : some of its disadvantages.


6.2.3 Concept of Edge Triggering : 6.3 Gated Latches (Level Triggered


SPPU : May 11. SR Flip Flop) :
.University Questions.
 We have discussed the RS latches using the NAND and
Q. 1 What is edge triggering ? (May 11, 2 Marks) NOR gates.
Definition :  Now 2 more NAND gates are added to the basic SR
 The flipflops which change their outputs only latch and one more input called enable (E) is added, in
corresponding to the positive (rising) or negative order to obtain a gated SR latch or a level triggered SR
(falling) edge of the clock input are called as edge flip-flop.

ns e
triggered flipflops.  A level voltage (0 or 1) or a clock signal can be applied
to the enable (E) input.

io dg
 These flip-flops are therefore said to be edge sensitive
or edge triggered rather and not level triggered.  These flipflops will respond to the inputs if and only if
we apply an active level at the enable input. This active
level can be either 0 or 1 depending on the type of

at le 
flip-flop.
Such flipflops are called as level triggered flipflops or
gated latches or clocked flipflops.
ic w
6.3.1 Types of Level Triggered (Clocked)
Flip Flops :
bl no

(C-577) Fig. 6.2.2  There are two types of level triggered latches :
1. Positive level triggered.
 The rectangular signal applied to the clock input of a
2. Negative level triggered.
Pu K

flip-flop is shown in Fig. 6.2.2.

 If the same signal is applied as the clock signal to an


6.4 The Gated S-R Latch
(Level Triggered S-R Flip Flop) :
ch

edge triggered flip flop, then its outputs will change


only at either rising (positive) edge or at the falling 6.4.1 Positive Level Triggered SR Flip-flop :
(negative) edge of the clock. Logic diagram :
Te

 The edge triggered flip-flops do not respond to the  The gated S-R latch is shown in Fig. 6.4.1. It is also called
steady state high or low level in the clock signal at all. as clocked SR flip flop.

6.2.4 Types of Edge Triggered Flip Flops :  It is basically the S-R latch using NAND gates with an
additional “enable” (E) input. It is also called as level
 There are two types of edge triggered flip flops :
triggered S-R FF.
1. Positive edge triggered flip flops.
 The outputs of basic S-R latch used to change instantly
2. Negative edge triggered flip flops. in response to any change made at the input. But this
 Positive edge triggered flip flops will allow its outputs to does not happen with the gated S-R latch.
change in response to its inputs only at the instants  For this circuit, the change in output will take place if
corresponding to the rising edges of clock (or positive and only if the enable input (E) is made active.
spikes).  This circuit being positive level triggered, will respond to
 Its outputs will not respond to change in inputs at any changes in input only if the enable input is held at logic

other instant of time. 1 level.

 Negative edge triggered flip flops will respond only to  In short this circuit will operate as an S-R latch if E = 1
(Enable input is active) but there is no change in the
the negative going edges (or spikes) of the clock.
outputs if E = 0 (Enable input is inactive).


 Hence the “Race” condition will occur in the basic NAND


latch. This operation should be avoided as both Q and

Q will try to become 1 at the same time as discussed
earlier.
Symbol and truth table :

 The symbol and truth table of the gates S-R latch are as
(C-578) Fig. 6.4.1 : Gated S - R latch
shown in Fig. 6.4.2(a) and Fig. 6.4.2(b) respectively.
Operation :

ns e
Case I : S = X, R = X, E = 0 (No change)

io dg
 Since enable E = 0, the outputs of NAND gates 3 and 4
will be forced to be 1 irrespective of the values of S and
(C-579) Fig. 6.4.2(a) : Symbol for S - R latch
R.
(C-7827) Fig. 6.4.2(b) : Truth table of gated S - R latch
 That means R = S = 1. These are the inputs of the basic


at le
S-R latch enclosed in the dotted box in Fig. 6.4.1.

Hence the outputs of NAND latch i.e. Q and Q will not
ic w
change. Thus if E = 0, then there is no change in the
output of the gated S-R latch.
bl no

Case II : S = R = 0 E = 1 : No change

 If S = R = 0 then outputs of NAND gates 3 and 4 are


forced to become 1.
6.4.2 Negative Level Triggered SR Flip Flop :
Pu K

 Hence R and S both will be equal to 1. Since S and R


are the inputs of the basic S-R latch using NAND gates, Logic diagram :
ch

there will be no change in the state of outputs.  Fig. 6.4.3(a) shows the circuit diagram of a negative level
 Thus for S = R = 0 the output state of this flip-flop triggered SR flip-flop.
remains unchanged.  It is the same circuit that we discussed in the previous
Te

Case III : S = 0, R = 1, E = 1 (Reset) section with only one additional inverter.

 Since S = 0, output of NAND-3 i.e. R = 1. And as R = 1  Due to the additional inverter connected to the enable

and E = 1 the output of NAND-4 i.e. S = 0. terminal, this circuit becomes sensitive to the low level

– (0) applied to the enable input. Hence it will enable the


 Hence Qn + 1
= 0 and Qn + 1
= 1. This is the reset outputs if E = 0.
condition.  If a square wave is applied to the enable input then the
Case IV : S = 1, R = 0, E = 1 (Set) latch output will respond to input changes only when

 Output of NAND 3 i.e. R = 0 and output of NAND 4 the square input is at its low (0) level.
i.e. S = 1.

 Hence the output of the S-R NAND latch is Qn + 1 = 1 and
Q̄n + 1 = 0. This is the set condition.

Case V : S = 1, R = 1, E = 1 (RACE)

 As S = 1, R = 1 and E = 1, the outputs of NAND gates 3


and 4 both are 0. i.e. S = R = 0. (C-580) Fig. 6.4.3(a) : Logic diagram of Negative level
triggered S - R latch
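 As a rough behavioural sketch (our own illustration, not the gate-level circuit of Fig. 6.4.1 or Fig. 6.4.3; the function name and the active_level parameter are assumptions), the level triggered S-R flip-flop can be modelled by simply ignoring S and R whenever the enable input is inactive :

```python
# Behavioural sketch (assumption) : level triggered S-R flip-flop.
# active_level = 1 models a positive level triggered latch,
# active_level = 0 models a negative level triggered latch.
def gated_sr(e, s, r, qn, active_level=1):
    if e != active_level:        # enable inactive : hold the previous state
        return qn
    if s == 1 and r == 1:        # race condition, outputs unpredictable
        return None
    return s | ((1 - r) & qn)    # otherwise behave as a normal S-R latch

print(gated_sr(0, 1, 0, 0))      # E inactive      -> holds 0
print(gated_sr(1, 1, 0, 0))      # E active, set   -> 1
print(gated_sr(1, 0, 1, 1))      # E active, reset -> 0
```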


 Fig. 6.4.3(b) shows the symbol of negative level


triggered S-R latch. Note that there is a bubble added
to the enable input which is active low input.

(b) Logic symbol of gated D latch

(C-3419) Fig. 6.5.1

Truth table :
 Truth table for the gated D latch is given in Table 6.5.1.

ns e
(C-8061) Table 6.5.1 : Truth table for the gated D latch
(b) Symbol

io dg
(C-580) Fig. 6.4.3 : Negative level triggered S - R latch

6.5 The Gated D Latch


(Clocked D Flip Flop) :

 at le
In some applications, the S and R inputs will always be
complementary. i.e. when S = 0, R = 1 and when S =1, Operation :
ic w

R = 0. That means S = R.  If E = 0 then the latch is disabled. Hence there is no

 For such applications we can use the gated D latch. change in output.
bl no

Logic diagram :  If E = 1 and D = 0 then S = 0 and R = 1. Hence


irrespective of the present state the, next state is
 The circuit diagram of the gated D latch is shown in

Fig. 6.5.1(a) and its logic symbol is shown in Fig. 6.5.1(b). Qn + 1 = 0 and Qn + 1 = 1. This is the reset condition.
Pu K

This is also called as level triggered D flip-flop or  On the other hand if E = 1 and D = 1, then S = 1 and
clocked D flip-flop. –
R = 0. This will set the latch and Qn + 1 = 1, Qn + 1 = 0
ch

 Note that D latch is the simple gated S-R latch with a irrespective of the present state.
small modification. A NAND inverter is connected From the truth table it is evident that Q output is same
between its S and R inputs as shown in Fig. 6.5.1(a). as D input, in otherwords Q output follows the D input.
Te

If D = 0 then Q = 0 and if D = 1 then Q = 1. However


 This latch has only one input denoted by D.
output follows input after some propagation delay
 Due to the NAND inverter, S and R inputs will always be hence the other name of the D flip-flop is delay
the complements of each other. Hence the input
flip-flop.
conditions such as S = R = 0 or S = R = 1, will never
Characteristic equation for D latch :
appear.
 This will avoid the problems associated with SR = 00  Refer to the truth table of D latch and write the K-map
for Qn + 1 as shown in Fig. 6.5.2.
and SR = 11 conditions of SR flip-flop.

(a) Gated D latch

(C-584) Fig. 6.5.2 : K-map for D latch

 After simplification we get the characteristic equation of
the D latch as,
Qn + 1 = E·D + Ē·Qn

(C-3420(a)) Table 6.6.1 : Truth table of positive level
triggered JK latch
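 A minimal sketch (our own, assuming the characteristic equation above; the function name is ours) shows that the gated D latch is transparent while E = 1 and holds its state while E = 0 :

```python
# Sketch (assumption) : gated D latch, Qn+1 = E.D + E'.Qn
def d_latch_next(e, d, qn):
    return (e & d) | ((1 - e) & qn)

print(d_latch_next(1, 1, 0))   # E = 1 : output follows D -> 1
print(d_latch_next(1, 0, 1))   # E = 1 : output follows D -> 0
print(d_latch_next(0, 1, 0))   # E = 0 : latch disabled, holds 0
```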

6.6 Gated JK Latch (Level Triggered JK


Flip Flop) : SPPU : May 19.

University Questions
Q. 1 Draw JK flip flop using gates and explain race
around condition with the help of timing diagram.

ns e
(May 19, 6 Marks) 6.6.1 Race Around Condition in JK Latch :
SPPU : May 07, Dec. 13, May 15, Dec. 15, May 19.

io dg
Logic diagram :
.University Questions.
 The JK latch using NAND gates is shown in Fig. 6.6.1(a). Q. 1 What is race-around condition ? How will you
It consists of the basic SR latch and an enable input. It is eliminate it ? Explain with the help of necessary
also called as level triggered JK flip-flop. circuit diagram and timing diagram.


at le –
Note the outputs Q and Q have been fed back and
connected to the inputs of NAND gates 4 and 3
Q. 2
(May 07, 8 Marks)
What is race around condition ? How it can be
avoided ? Convert D flip-flop to T flip-flop.
ic w
respectively. (Dec. 13, 6 Marks)
 The JK latch of Fig. 6.6.1 (a) responds, to the input Q. 3 What is race around condition ? Explain with the
changes if a positive level is applied at the enable (E) help of timing diagram. How is it removed in basic
bl no

flip-flop circuit ? (May 15, Dec. 15, 6 Marks)


input. Hence it is a positive level triggered latch.
Q. 4 Draw JK flip flop using gates and explain race
around condition with the help of timing diagram.
(May 19, 6 Marks)
Pu K

 The “Race Around Condition” that we are going to


explain occurs when J = K = 1 i.e. when the latch is in
the toggle mode.
ch

 Refer Fig. 6.6.2 which shows the waveforms for the


various modes, when a rectangular waveform is applied
to the “Enable” input.
Te

(a) A level triggered JK flip-flop

(b) Symbol of the gated JK latch


(C-3420) Fig. 6.6.1
Operation :
 The operation of NAND JK latch is exactly identical to
that of the positive edge triggered JK flip flop discussed
in section 6.9.1.
 The only difference between them is that, this circuit is
level triggered.
 The operation of NAND JK latch has been summarized
in Table 6.6.1. (C-586) Fig. 6.6.2 : Waveforms for various modes of a JK latch


Interval t0 - t1 :  In other words latch is a level triggered flip flop.

 During this interval J = 1, K = 0 and E = 0. Flip-flop :

 Hence the latch is disabled and there is no change in Q.  But flip flop is a sequential circuit which generally
Interval t1 - t2 : samples its inputs and changes its outputs only at
particular instants of time and not continuously.
 During this interval J = 1, K = 0 and E = 1.
 The flip-flops are therefore said to be edge sensitive or
 Hence this is a set condition and Q becomes 1.
edge triggered rather than being level triggered like
Interval t2 - t3 : Race around
latches.

ns e
 At instant t2, J = K = 1 and E = 1 Hence the JK latch is in
6.7 Edge Triggered Flip Flops :

io dg
the toggle mode and Q becomes low (0) and Q = 1.
 We have already discussed the concept of edge
 These changed outputs get applied at the inputs of
triggering.
NAND gates 3 and 4 of the JK latch. Thus the new
inputs to Gates 3 and 4 are :  For the edge triggered flip flops, it is necessary to apply

at le –
NAND - 3 : J = 1, E = 1, Q = 1,
NAND - 4 : K = 1, E = 1, Q = 0.
the clock signal (timing signal) in the form of sharp
positive and negative spikes instead of in the form of a
ic w
rectangular pulse train.
 Hence R will become 0 and S will become 1.
 Such sharp spikes are shown in Fig. 6.7.1. These spikes
 Therefore after a time period corresponding to the
bl no

can be derived from the rectangular clock pulses with



propagation delay, the Q and Q outputs will change to, the help of a passive differentiator circuit shown in

Q = 1 and Q = 0. Fig. 6.7.1.
Pu K

 These changed output again get applied to the inputs  Thus the passive differentiator acts as a pulse shaping
of NAND-3 and 4 and the outputs will toggle again. circuit.
 Thus as long as J = K = 1 and E = 1, the outputs will
ch

keep toggling indefinitely as shown in Fig. 6.6.2. This


multiple toggling in the J-K latch is called as Race
Around condition. It must be avoided.
Te

Interval t3 - t4 :

 During this interval J = 0, K = 1 and E = 1.


 Hence it is the reset condition. So Q becomes zero.

How to avoid race around condition ?


 The race around condition in JK latch can be avoided by,
1. Using the edge triggered JK flip flop.
2. Using the master slave JK flip flop.
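 The multiple toggling can also be illustrated with a small simulation (our own sketch, assuming a propagation delay of one time unit; names and numbers are illustrative only). While E = 1 and J = K = 1, the output inverts once per propagation delay, so a long enable pulse produces many unwanted toggles :

```python
# Sketch (assumption) : race around in a level triggered JK latch.
def race_around(q0, enable_high_units, tp=1):
    """tp = assumed propagation delay; enable stays high for enable_high_units."""
    q, toggles = q0, 0
    for _ in range(0, enable_high_units, tp):
        q = 1 - q            # J = K = 1 : output toggles after every tp
        toggles += 1
    return q, toggles

# enable held high for 8 propagation delays -> 8 toggles instead of one
print(race_around(q0=0, enable_high_units=8))
```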

6.6.2 Difference between Latch and


(C-587) Fig. 6.7.1 : Use of differentiator to
Flip-flop :
obtain sharp edges
Latches :
6.7.1 Positive Edge Triggered S-R Flip Flop :
 Latches and flip flops both are basically the bistable SPPU : Dec. 11.
elements.
.University Questions.
 As discussed earlier a latch has got an enable input. As
Q. 1 Draw and explain SR flip-flop using NAND gates.
long as it is active, the latch output will keep changing
(Dec. 11, 8 Marks)
according to the changes in its input.


Circuit diagram :  Due to these sharp positive spikes applied at point “A”,
 Fig. 6.7.2(a) shows the circuit diagram of a positive edge the gated S-R latch in the S-R flip flop will be enabled
triggered S-R flip flop and Fig. 6.7.2(b) shows the logic only for a short duration of time when the positive spike
symbol for it. is present at A. (at instants t1, t2 … in Fig. 6.7.3)

 At these instants, the flip-flop behaviour is exactly


identical to that of the enabled gated S-R latch (enabled
level triggered SR flip-flop).

 The truth table of positive edge triggered S-R flip flop

ns e
will also be identical to that of the gated S-R latch with

io dg
only one change.

(a) Positive edge triggered S-R flip flop  The “enable” input will now be replaced by clock input.
And the outputs will change only if a positive edge is
applied to the clock input.

at le  The positive clock edge is denoted by an arrow () in


Table 6.7.1.
ic w
(b) Logic symbol Truth Table :
(C-588) Fig. 6.7.2
 The truth table of a positive edge triggered SR flip flop
bl no

Operation :
is shown in Table 6.7.1.
 Note that the SR flip flop of Fig. 6.7.2(a) consists of a
(C-6260) Table 6.7.1 : Truth table of a positive
differentiator circuit and a gated S-R latch (level edge triggered SR flip flop
Pu K

triggered SR flip-flop) which we have already discussed.

 C - R1 acts as a differentiator and converts the


ch

rectangular clock pulses into positive and negative


spikes.

 The diode acts as rectifying diode and allows only the


positive spikes to pass through it while, blocking the
Te

negative spikes.

 Thus we get only the positive spikes, corresponding to


the leading edges of the clock signal, as shown in
Fig. 6.7.3 at point A which is enable input of the gated
S-R latch.
 Note that this flip-flop does not respond to the positive
or negative levels in the clock signal.
 Similarly it does not respond to the negative edges of
the clock.
 This flip flop will respond only to the positive edges of
clock signal.
 With positive edge of the clock, the SR flip flop behaves
in the following way.
S=R=0  No change in output

S = 0, R = 1  Qn + 1 = 0, Qn + 1 = 1 Reset condition
(C-589) Fig. 6.7.3 : Generation of positive triggering pulses


– 6.8.1 Positive Edge Triggered D Flip Flop :


S = 1, R = 0  Qn + 1 = 1, Qn + 1 = 0 Set condition
Logic diagram :
S=R=1  Race condition.
 Fig. 6.8.1(a) shows the positive edge triggered D flip
6.7.2 Negative Edge Triggered S - R Flip Flop : flop. It consists of a gated D latch and a differentiator
Symbol and Truth table : circuit. The symbol is as shown in Fig. 6.8.1(b).
 The internal circuit (with NAND gates) of the negative  The clock pulses are applied to the circuit through a
edge triggered S-R flip flop is exactly same as that for differentiator formed by R1C and a rectifier circuit
consisting of diode D and R2.
the positive edge triggered SR flip-flop discussed

ns e
earlier.  The NAND gates 1 through 5 form a D latch.

io dg
 The differentiator circuit is slightly modified in order to  The differentiator converts the clock pulses into positive
make this flip flop respond to the negative (falling) and negative spikes and the combination of D and R2
edges of the clock input. will allow only the positive spikes to pass through to
point “A”, by blocking the negative spikes.
 Fig. 6.7.4 shows the circuit symbol of the negative edge

at le
triggered S-R flip flop and Table 6.7.2 shows its truth
table.
ic w
bl no

(C-590(a)) Fig. 6.7.4 : Circuit symbol of negative (a) Positive edge triggered D-flip flop
edge triggered SR FF
Pu K

(C-6207) Table 6.7.2 : Truth table of negative


edge triggered S-R FF
ch

(b) Symbol
(C-592(a))Fig. 6.8.1
Operation :
Te

 The operation of edge triggered D FF is exactly same as


that of the gated D latch discussed earlier with only one
difference.
 The D latch had an enable terminal whereas the D flip
flop has a clock input.
 The gated D latch of Fig. 6.8.1 is enabled by the positive
spikes obtained at point A.
 Hence the outputs will change based on the inputs at
these particular instants only.
6.8 Edge Triggered D Flip Flop :  Thus the edge triggered D flip-flop responds only to the
positive (leading) edges of the clock pulses.
 The edge triggered D flip flops can be of two types :
 At any other instants of time, the D flip flop will not
1. Positive edge triggered D flip flop. respond to the changes in input.
2. Negative edge triggered D flip flop. Truth table :
 These flip flops can be derived from the level triggered  Table 6.8.1 shows the truth table of a positive edge
D latch which we have discussed in section 6.5. triggered D flip flop.
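 A behavioural sketch (our own illustration, not the internal circuit) of the positive edge triggered D flip-flop samples D only when the clock changes from 0 to 1 and holds Q at all other instants :

```python
# Sketch (assumption) : positive edge triggered D flip-flop.
def run_d_ff(clock, data, q0=0):
    """clock and data are equal-length lists of 0/1 samples."""
    q, prev_clk, outputs = q0, 0, []
    for clk, d in zip(clock, data):
        if prev_clk == 0 and clk == 1:   # rising edge : sample D
            q = d
        outputs.append(q)                # at all other instants Q is held
        prev_clk = clk
    return outputs

clk  = [0, 1, 1, 0, 1, 1, 0, 1]
data = [1, 1, 0, 0, 0, 1, 1, 1]
print(run_d_ff(clk, data))               # Q changes only on the rising edges
```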


(C-6208) Table 6.8.1 : Truth table of a positive 6.8.3 Applications of D Flipflop :


edge triggered D flip flop
1. As a delay element.
2. In the digital latches.

Transparent latch :

 In level triggered D FF (D latch) the Q output always


follows the changes in D input.

 Hence D latch is sometimes called as transparent latch.

ns e
6.9 Edge Triggered J-K Flip Flop :

io dg
 Edge triggered J-K flip flops are of two types :

From the truth table it is clear that the Q output of the flip 1. Positive edge triggered JK flip flop
flop follows the D input. 2. Negative edge triggered JK flip flop

at le
6.8.2 Negative Edge Triggered D Flip Flop :
Symbol :
6.9.1
Logic diagram :
Positive Edge Triggered JK Flip Flop :
ic w
 The symbol for negative edge triggered D flip flop is  The circuit diagram of the positive edge triggered JK flip
shown in Fig. 6.8.2.
flop is shown in Fig. 6.9.1(a) and its circuit symbol is
bl no

shown in Fig. 6.9.1(b).

Fig. 6.8.2 : Symbol of negative edge


Pu K

(C-593)

triggered D flip flop

This F/F responds only to the negative edges of the


ch


clock pulses. This is how it is different from the positive
edge triggered flip flop.
 Otherwise, the operation of the negative edge triggered
Te

D flip flop is exactly same as that of the positive edge


triggered D flip flop.
Truth Table : (C-596) Fig. 6.9.1(a) : Positive edge triggered JK flip flop

 The truth table of negative edge triggered D flip flop is


shown in Table 6.8.2.
(C-6209) Table 6.8.2 : Truth table of negative
edge triggered D flipflop

(C-596) Fig. 6.9.1(b) : Symbol

Operation :

 The clock signal is a train of rectangular or square


waves.
 It is passed through a differentiator (C – R1) and a
rectifier (D and R2), to obtain only the positive spikes at

point A as already discussed.

 The negative spikes are blocked by the diode.


– –
 NAND gates 1 and 2 form the basic S-R latch. The other  If the previous state of Q and Q is Q = 1 and Q = 0
two NAND gates (3 and 4) have three inputs each. The then

outputs Q and Q are fed back to the inputs of gates 4 S = 1 · 1 · 1 = 0 and R = 0 · 0 · 1 = 1
and 3 respectively.
 Therefore according to the operation of S-R latch if
 Referring to Fig. 6.9.1(a) we can write the mathematical –
S = 0, R = 1 then Q = 0, and Q = 1 ·
expressions for S and R as follows : –
–––––––––  Q = 0 and Q = 1

ns e
S = K · Q · CLK and R = J · Q · CLK  Thus with J = 0, K = 1 and positive going clock, the JK
flip flop will reset.
Let us now understand the operation step by step.

io dg


 If Q = 0 and Q = 1 before the application of clock
Case I : CLK = 0 or 1 i.e. level
pulse, then there will not be any change in their state.
 If CLK = 0 or 1 i.e. level and no pulse this flip flop is
Case V : CLK = , J = 1, K = 0 (Set)

at le
disabled, because the output of differentiator is zero for
any level input.


– –
If the previous state of Q and Q is Q = 0 and Q = 1
then ,
ic w
 Therefore Q and Q output do not change their state. S = 0 · 0 · 1 = 1 and R = 1 · 1 · 1 = 0

Case II : If clock edge is falling edge ()


 Hence according to the operation of S-R latch, if S = 1
bl no

 For the falling edge of the clock, the rectifier (D – R2) and R = 0 then,
will block the negative spike. Hence voltage at point A is –
Q = 1 and Q = 0…. i.e. the flip flop is set.
logic 0.

Pu K

 If Q and Q are already 1,0 then there will not be any


 So one input to both the NAND gates 3 and 4 is 0.
change in the output state.
 Hence both S and R will be forced to 1.
ch

Case VI : CLK = , J = 1, K = 1 Toggle mode


 For an SR latch using NAND gates, the outputs will
 We will discuss this case for two different previous
remain unchanged if S = R = 1. cases.
Te

 Corresponding to the falling edge of clock, the outputs



Q and Q will remain unchanged.

Case III : CLK = , J = 0, K = 0 (No change)

 Since J = 0, R = 1 and as K = 0, S = 1.

 As S = R = 1 the outputs Q and Q will not change
their state even though a positive edge of the clock
pulse is being applied.
– –
(C-6399)
 J = 0, K = 0, Qn + 1 = Q and Qn + 1 = Q.
 From the operation discussed above, we conclude that
 No change in output. when J = K = 1 and a positive clock edge is applied,
Case IV : CLK = , J = 0, K = 1 (Reset) – –
then Q and Q outputs are inverted i.e. Qn + 1 = Qn and
 Recall the expressions for S and R, –
Q n + 1 = Q n.

S = K · Q · CLK and R = J · Q · CLK  This is called as the TOGGLING mode. This is a very
important mode of operation.


Truth table of JK flip flop : 6.9.3 How does an Edge Triggered JK FF


(C-6837) Table 6.9.1 : Truth table for positive edge Avoid Race Around Condition ?
triggered JK FF SPPU : Dec. 07, Dec. 08, Dec. 09.

.University Questions.
Q. 1 Discuss methods to avoid race around condition in
JK flip-flop. (Dec. 07, 4 Marks)
Q. 2 How is race around condition avoided ?
(Dec. 08, Dec. 09, 3 Marks)

ns e
 For the race around to take place, it is necessary to have

io dg
the enable input high along with J = K = 1.

 As the enable input remains high for a long time in a JK


latch, the problem of multiple toggling arises which is

at le 
known as Race condition.

But in edge triggered JK flip flop, the positive clock


ic w
6.9.2 Characteristic Equation of JK Flip pulse is present only for a very short time because it is

Flop : in the form of a narrow differentiated spike.



bl no

 Refer to the truth table of JK flip flop (Table 6.9.2) and  Hence by the time the toggled outputs (Q and Q) return
write the K-map for Qn + 1 as shown in Fig. 6.9.2. back to the inputs of NAND gates 3 and 4, the positive
(C-8062) Table 6.9.2 : Truth table clock spike has died down to zero.
Pu K

 Hence the multiple toggling cannot take place.

 Thus the edge triggering avoids the race around


ch

condition.

6.9.4 Negative Edge Triggered JK


Flip-Flop :
Te

Symbol :

 The symbol for negative edge triggered JK flip-flop is


shown in Fig. 6.9.3.

(C-598) Fig. 6.9.3 : Symbol of negative edge triggered JK FF

 This FF responds only to the negative edges of the clock


(C-597) Fig. 6.9.2 : K-map for Qn + 1
pulses.
 The characteristic equation is,
Qn + 1 = J·Q̄n + K̄·Qn
 The rest of the operation of this FF is exactly same as
that of the positive edge triggered JK FF.
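 The behaviour summarised by this equation can be tabulated with a short sketch (our own; the equation is applied once per active clock edge, and the function name is ours) :

```python
# Sketch (assumption) : JK flip-flop characteristic equation Qn+1 = J.Qn' + K'.Qn
def jk_next(j, k, qn):
    return (j & (1 - qn)) | ((1 - k) & qn)

for j in (0, 1):
    for k in (0, 1):
        for qn in (0, 1):
            print(f"J={j} K={k} Qn={qn} -> Qn+1={jk_next(j, k, qn)}")
# J=K=0 holds, J=0 K=1 resets, J=1 K=0 sets, J=K=1 toggles.
```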


Truth table : Truth table :

 The truth table of negative edge triggered JK FF is as (C-6839) Table 6.10.1 : Truth table of a Toggle FF
(positive edge triggered)
shown in Table 6.9.3.

(C-6838)Table 6.9.3 : Truth table of negative edge


triggered JK FF

ns e
io dg
Operation :

at le 


When T = 0, J = K = 0. Hence the outputs Q and Q
won’t change.
But if T = 1, then J = K = 1 and the outputs will toggle
ic w
corresponding to every leading edge of clock signal.
6.10 Toggle Flip Flop (T Flip Flop) :
 This has been illustrated in Ex. 6.10.1.
bl no

6.10.1 Positive Edge Triggered T-FF :


Ex. 6.10.1 : Determine the output Q for the set up shown in

Symbol : Fig. P. 6.10.1(a). What is the relation between


the frequency of clock and that of Q output ?
 Toggle flip flop is basically a JK flip flop with J and K
Pu K

terminals permanently connected together. It has only


one input denoted by “T”, as shown in Fig. 6.10.1(a).
ch

 The symbol for positive edge triggered T flip flop is


shown in Fig. 6.10.1(b) and Table 6.10.1 shows its truth
table.
Te

(C-600) Fig. P. 6.10.1(a) : Given set up


Soln. :
 The flip flop of Fig. P. 6.10.1(a) is a toggle flip flop with
T = 1. It is a positive edge triggered flip flop.
 Hence the outputs will toggle in response to each rising
edge of the clock, as shown in Fig. P. 6.10.1(b).

(a) JK FF is converted into T flip flop

(b) Logic symbol of positive edge triggered T flip flop

(C-599) Fig. 6.10.1
(C-601) Fig. P. 6.10.1(b) : Waveforms for a T F/F
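 The same waveform can be reproduced with a small simulation (our own sketch, not part of the example) : with T = 1 the output inverts on every rising clock edge, so Q completes one cycle for every two clock cycles :

```python
# Sketch (assumption) : positive edge triggered T flip-flop with T = 1.
def t_ff_wave(clock, q0=0):
    q, prev, out = q0, 0, []
    for clk in clock:
        if prev == 0 and clk == 1:   # rising edge with T = 1 : toggle
            q = 1 - q
        out.append(q)
        prev = clk
    return out

clk = [0, 1] * 8                      # 8 clock cycles
print(t_ff_wave(clk))                 # Q toggles on every rising edge -> fo = fCLK / 2
```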


Relation between input and output frequency :
 Referring to Fig. P. 6.10.1(b),
1 cycle period of CLK signal = T
 Clock frequency fCLK = 1/T
1 cycle period of Q output = 2T
 Output frequency fo = 1/(2T)
But 1/T = fCLK
 Output frequency fo = fCLK / 2 …Ans.
The T flip flop divides the clock frequency by 2. Hence a
T flip flop can be used as a frequency divider.

6.11 Master Slave (MS) JK Flip Flop :
SPPU : Dec. 06, Dec. 10, May 14.
University Questions.
Q. 1 What do you mean by Master-Slave JK Flip-flop ?
Explain the advantage of this Flip-flop. Draw
suitable circuit diagram and timing diagram.
(Dec. 06, 10 Marks)
Q. 2 What is the advantage of M-S flip-flop ? Explain
working of MS J-K flip-flop in detail.
(Dec. 10, 8 Marks)
Q. 3 Draw and explain the behavior of M-S JK flip flop
with waveform. (May 14, 6 Marks)
Logic diagram :
ic w
6.10.2 Negative Edge Triggered T Flip Flop :  Fig. 6.11.1 shows the master slave JK flip flop.

Symbol :
bl no

 Fig. 6.10.2 shows the logic symbol of a negative edge


triggered toggle flip flop and Table 6.10.2 gives its truth
table.
Pu K
ch

(C-606) Fig. 6.11.1 : Master slave JK FF

 It is a combination of a clocked JK latch (level triggered

JK FF) and clocked SR latch (level triggered SRFF).


(C-602) Fig. 6.10.2 : Logic symbol of a negative edge
Te

triggered toggle flip flop  The clocked JK latch acts as the master and the clocked
Truth table : SR latch acts as the slave.
(C-6840) Table 6.10.2 : Truth table
 Master is positive level triggered. But due to the

presence of the inverter in the clock line, the slave will

respond to the negative level.

 Thus both master and slave circuits are level triggered

circuits.

 Hence when the clock = 1 (positive level) the master is

active and the slave is inactive. Whereas when clock = 0

(low level) the slave is active and the master is inactive.


6.10.3 Application of T F/F :
Truth table :
 The T F/F acts as the basic building block of a ripple  Table 6.11.1 gives truth table of master slave JK flip flop.
counter.


(C-6267) Table 6.11.1 : Truth table of master slave JK FF –


 Outputs of the master become Q1 = 0 and Q1 = 1. That

means S = 0 and R = 1.
 Clock = 0 : Slave active, master inactive

 Outputs of the slave become Q = 0 and Q = 1. This is
the RESET operation.
 Again if clock = 1 : Master active, slave inactive.

 Even with the changed outputs Q = 0 and Q = 1 fed

ns e

back to master, its outputs will be Q1 = 0 and Q1 = 1.

io dg
That means S = 0 and R = 1.
 Hence with clock = 0 and slave becoming active, the

Operation : outputs of slave will remain Q = 0 and Q = 1.


at le
We will discuss the operation of the master slave JK FF
with reference to its truth table.
 Thus we get a stable output from the Master slave.

(C-6395(b))
ic w
 We must always remember one important thing that in
 Clock = 1 : Master active, slave inactive.
the positive half cycle of the clock, the master is active –
 Outputs of master become Q1 = 1 and Q1 = 0
bl no

and in the negative half cycle, the slave is active.


i.e. S = 1, R = 0.
 This is shown in Fig. 6.11.2.
 Clock = 0 : Master inactive, slave active.

 Outputs of slave become Q = 1 and Q = 0.
Pu K

 Again if clock = 1 then it can be shown that the outputs



of the slave are stabilized to Q = 1 and Q = 0 .
ch

(C-607) Fig. 6.11.2

Case I : Clock = , J = K = 0 (No change) (C-6395(c))


Te

 Clock = 1 : Master active, slave inactive.


 For clock = 1, the master is active, slave inactive. As
–  Outputs of master will toggle. So S and R also will be
J = K = 0.  Outputs of master i.e. Q and Q will not
1 1 inverted.
change. Hence the S and R inputs to the slave will  Clock = 0 : Master inactive, slave active.
remain unchanged.  Outputs of the slave will toggle.
 As soon as clock = 0, the slave becomes active and  These changed output are returned back to the master
master is inactive. inputs.
 But since the S and R inputs have not changed, the  But since clock = 0, the master is still inactive. So it does
slave outputs will also remain unchanged. not respond to these changed outputs.

 The outputs will not change if J = K = 0.  This avoids the multiple toggling which leads to the race
around condition.
(C-6395)
 Thus the master slave flip flop will avoid the race around
This condition has been already discussed in case I.
condition.
(C-6395(a))
 The waveforms for the master slave flipflop are shown in
 Clock = 1 : Master active, slave inactive. Fig. 6.11.3.


6.12.1 S-R Flip-Flop with Preset and Clear


Inputs :
Logic diagram and symbol :
 The S-R flipflop with preset and clear is shown in
Fig. 6.12.1(a) and its symbol is shown in Fig. 6.12.1(b).

ns e
io dg
at le
(C-608) Fig. 6.11.3 : Waveforms of master slave JK flip flop

Observations from the waveforms :


(a) S-R FF with preset and clear
ic w
 We can make the following important observations
from the waveforms of the master slave JK FF :
bl no

1. The slave always follows the master, after a delay of half


clock cycle period.
2. The multiple toggling or the race around condition is
successfully avoided.
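 The two-phase action can be sketched in a few lines of code (our own behavioural model, not the gate-level circuit of Fig. 6.11.1; the function name is ours) : the master follows J and K while the clock is 1, and the slave copies the master only when the clock returns to 0, so only one toggle per clock cycle is possible :

```python
# Behavioural sketch (assumption) : master slave JK flip-flop.
def ms_jk(clock, j, k, q0=0):
    q_m = q_s = q0
    trace = []
    for i, clk in enumerate(clock):
        if clk == 1:                                   # master active, slave holds
            q_m = (j[i] & (1 - q_s)) | ((1 - k[i]) & q_s)
        else:                                          # slave active, copies master
            q_s = q_m
        trace.append(q_s)
    return trace

clk = [1, 0, 1, 0, 1, 0]
J   = [1, 1, 1, 1, 1, 1]
K   = [1, 1, 1, 1, 1, 1]
print(ms_jk(clk, J, K))    # with J = K = 1 the slave output toggles once per clock cycle
```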

(b) Symbol
6.12 Preset and Clear Inputs :
ch

(C-609) Fig. 6.12.1


 In the flip flops discussed so far when the power switch
–  Note that both these inputs are active low inputs. This is
is turned on, the state of outputs Q and Q can be
indicated by the bubbles on these inputs in

Te

anything that means either Q = 0, Q = 1 or Q = 1, Fig. 6.12.1(b).



Q = 0. It is not predictable. Operation :
 But this uncertainty cannot be tolerated in certain
Case I : PR = CLR = 1
applications.
If PR = CLR = 1 then both are inactive and the S-R FF
 In some applications it is necessary to initially set or
operates as per the truth table of the conventional SR
reset the flip flops precisely.
flip-flop.
 This can be practically achieved by adding two more
Case II : PR = 0 and CLR = 1 (Preset condition)
inputs to a flip flop, called preset (PR) and clear (CLR)
 As PR = 0, the output of gate 3 of Fig. 6.12.1(a) will be 1.
inputs.
 That means Q = 1.
 These inputs are called as direct or asynchronous
inputs because we can apply them any time between  Therefore all the three inputs to the gate 4 will be 1 and

clock pulses without thinking about their Q will become 0.
synchronization with the clock.  Thus making PR = 0 will set the flip flop. Note that with
 That means applying PR or CLR inputs is not related to PR = 0 and CLR = 1 the flipflop is set irrespective of the
the clock in any way. status of S, R or clock inputs.


(C-8075) Table 6.12.2 : Truth table for S-R FF with


 Therefore it is said that the PR input has higher priority
synchronous preset and clear
than S, R or CLK inputs.

Case III : PR = 1, CLR = 0 (Clear or Reset condition)

 If PR = 1 and CLR = 0, then the output of gate-4



i.e. Q = 1.

 Therefore all the three inputs to gate-3 will be 1 and


hence Q = 0.

ns e
 Thus making CLR = 0 will reset the flip flop. Note that

io dg
with CLR = 0 and PR = 1, the FF is reset irrespective of
the status of S, R or clock inputs.

 Therefore it is said that CLR has higher priority than S, R


6.12.3 JK Flip Flop with Preset and Clear
at le
or CLK inputs.

Case IV : PR = 0, CLR = 0

Inputs :

A JK flip flop with preset and clear inputs is shown in


ic w
 This condition should never be used, because it leads to Fig. 6.12.2. These are the synchronous preset and clear
an uncertain state. terminals.
bl no

 Both these inputs are active low and have higher priority
 The operation of SR flip flop with preset and clear inputs
than all the other inputs.
is summarized in Table 6.12.1.

 The don’t care conditions () marked in the CLK column


Pu K

indicate that PR and CLR inputs have higher priority


than clock input.
ch

(C-609(a))Table 6.12.1 : Summary of operation of SR FF

(C-610) (a) Symbol of a JK FF with preset and clear inputs


Te

6.12.2 Synchronous Preset and Clear


Inputs : (C-610(a))(b) Summary of operation of JK FF

Fig. 6.12.2
 Preset and clear inputs can be of two types :

1. Asynchronous with clock. 6.12.4 Applications of JK Flip Flop :


2. Synchronous with clock. 1. Shift register.

 We have already discussed the asynchronous preset and 2. Counters.


clear inputs. 6.13 Various Representations of
 The operation of synchronous preset and clear inputs Flip Flops :
can be understood from the truth table given in
 There are different ways in which a flip flop can be
Table 6.12.2. represented.


 Each representation is suitable for a different (C-6571) Table 6.14.1 : Truth table of SR flip flop

application.

 Various representations of flip flops are :

1. Characteristic equations.

2. Flip flop as Finite State Machine (FSM).

3. Flip flop excitation tables.

ns e
6.13.1 Characteristic Equations : Condition 1 :
Sn and Rn = 0  Refer first row of Table 6.14.1.

io dg
 We have already introduced the concept of
characteristic equations earlier in this chapter and we Condition 2 :

have written the characteristic equations for various flip Sn = 0 Rn = 1  Refer second row of Table 6.14.1.
flops.

at le
6.14 Excitation Table of Flip-Flop :
 From the two conditions mentioned above we conclude
that Sn input should be equal to 0 and Rn input can be 0

or 1 (don’t care) if we want to maintain Q = 0 and


ic w

 Till now we have written the truth tables of various flip Q = 1 before and after clock.
flops.  Similarly we can find the input conditions (Values of S
bl no

 The truth tables are also known as the characteristic and R inputs) for all the possible situations that may

tables. exist on the output side.

 But while designing the sequential circuits, sometimes  The table containing all these output situations and the
Pu K

the present and next state of a circuit are given and we corresponding input conditions is called as the

are expected to find the corresponding input condition. “excitation table” of a flip flop.
ch

 We need to use the excitation tables of flipflops to do  The excitation table of SR flip flop is shown in

this. These tables are different from the characteristic Table 6.14.2.

C-6581) Table 6.14.2 : Excitation table of SR flip flop


Te

tables.

 Present state is the state prior to application of clock


pulse and the next state means the state after
application of clock pulse.

6.14.1 Excitation Table of SR Flip Flops :

 For example the outputs of an S-R FF before clock pulse Description of excitation table of SR FF :

are Qn = 0 and Qn = 1 and it is expected that these We have already discussed case I.

outputs should remain unchanged after application of Case II : Q should change from 0 to 1
clock. Then what must be the values of inputs Sn and Rn
 This is nothing but the set condition.
to achieve this ?
 Hence Sn = 1 and Rn = 0 should be the inputs.
 Refer to the truth table of SR FF to answer this question.
Case III : Q should change from 1 to 0
 The answer is, for the following two conditions the
–  This is nothing but the reset condition.
outputs remain unchanged at Q = 0 and Q = 1.
 Hence Sn should be 0 and Rn should be 1.


Case IV : Q should be 1. No change 6.14.4 Excitation Table of T Flip Flop :


 To satisfy this requirement, we have two possible input
 Excitation table of T flip flop is shown in Table 6.14.5.
conditions :
(C-7829) Table 6.14.5 : Excitation table of T flip-flop
Condition 1 : Sn = Rn = 0 i.e. No change in output.

Condition 2 : Sn = 1 and Rn = 0.

 From these conditions we conclude that Sn can be either


0 or 1 i.e. don’t care and Rn = 0.

ns e
 Hence the inputs corresponding to this case is Sn = 
and Rn = 0.
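 The excitation requirements can also be generated programmatically (our own sketch, with "X" standing for a don't care; the function name is ours). This form is convenient when the excitation table is used later for counter design :

```python
# Sketch (assumption) : deriving the S-R excitation table from the required transition.
def sr_excitation(qn, qn1):
    """Return (S, R) needed to move the latch from Qn to Qn+1."""
    if qn == 0 and qn1 == 0: return (0, "X")   # stay reset
    if qn == 0 and qn1 == 1: return (1, 0)     # set
    if qn == 1 and qn1 == 0: return (0, 1)     # reset
    return ("X", 0)                            # stay set

for qn in (0, 1):
    for qn1 in (0, 1):
        print(f"Qn={qn} -> Qn+1={qn1} : S,R = {sr_excitation(qn, qn1)}")
```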

io dg
 Similarly we can write the excitation tables for the other 6.15 Conversion of Flip Flops :
flip flops.
Concept :
6.14.2 Excitation Table of D Flip Flop :  The conversion from one type of flip flop to the other

 at le
Excitation table of a D flip flop is given by Table 6.14.3.

(C-7828) Table 6.14.3 : Excitation table of a D flip flop


(say SR FF to JKFF), needs a systematic approach using
the excitation tables and K map simplifications.
ic w
 Fig. 6.15.1 shows a generalized model for conversion
from one flip flop to the other.
bl no
Pu K

6.14.3 Excitation Table of JK Flip Flop :


 Excitation table of a JK flip flop is given by Table 6.14.4.
ch

(C-6211) Table 6.14.4 : Excitation table of JK flip flop


Te

(C-611) Fig. 6.15.1 : General model used to convert


one type of FF to the other

 As shown in Fig. 6.15.1 the required flip flop is actually a


combination of the given flip flip and a combinational
logic circuit using gates. (Such a combinational circuit is
 Refer to Case II of Table 6.14.4. For the change in Q called as the flip flop conversion logic).
output from 0 to 1 we have the following two input
 The inputs to FF conversion logic are, the flip flop data
conditions :
inputs and the outputs of given flip-flop i.e. the given FF
Condition I :
and the desired FF.
Jn = 1 and Kn = 0 i.e. set condition.
 The conversion logic is designed by combining the
Condition II :
excitation tables of both the flip flops.
Jn = 1 and Kn = 1 i.e. toggle.
 The truth table of the conversion logic has data inputs
 Hence Jn = 1 and Kn =  (0 or 1) corresponding to –
and Q and Q outputs of the given FF as inputs whereas
case II.
the inputs of the given FF are the outputs of the truth
 Similarly for case III also, the don’t care condition
table.
corresponds to toggle mode.


 Then we draw the K map for each output and obtain the Logic diagram :
simplified expressions. The conversion logic is then  The logic diagram for SR FF to D FF is shown in
implemented using gates. Fig. 6.15.4.

6.15.1 Conversion from S-R Flip Flop to D


Flip Flop : SPPU : May 12.

.University Questions.
Q. 1 Convert the basic SR-flip-flop (SR-FF) into D FF.
(May 12, 2 Marks)

ns e
 Refer Fig. 6.15.2. Here the given FF is SR FF and the

io dg
required FF is D FF.
 The truth table for the conversion logic is shown in (C-614) Fig. 6.15.4 : SR flip flop to D flip flop
Table 6.15.1. The inputs are D and Q whereas outputs
are S and R. 6.15.2 Conversion of JK FF to T FF :

at le
The truth table is prepared by combining the excitation
tables of D F/F and SR FF. University Questions.
SPPU : Dec. 09, May 17.
ic w
Q. 1 Convert J-K flip-flop into T-FF. Show the truth
table. (Dec. 09, 4 Marks)
bl no

Q. 2 Design flip-flop conversion logic to convert JK


flip-flop to T flip-flop. (May 17, 6 Marks)

Step 1 : Write the truth table for conversion :

 The required truth table is obtained from the excitation


Pu K

tables of JK and T flip flops as follows :


(C-612) Fig. 6.15.2
(C-6388) Table 6.15.2 : Truth table for JK to T FF conversion
ch

(C-6387) Table 6.15.1 : Truth table for SR to D FF conversion


Te

 Now write the K maps for the S and R outputs as shown Step 2 : K maps and simplification :
in Figs. 6.15.3(a) and (b).  The K maps for outputs J and K are shown in Fig. 6.15.5.

(a) K map for S (b) K map for R
(C-613) Fig. 6.15.3
(a) K map for J output (b) K map for K output
(C-615) Fig. 6.15.5
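 Such a conversion can be cross-checked exhaustively in code. The sketch below (our own) substitutes the standard conversion results (S = D and R = D̄ for the SR-to-D conversion of section 6.15.1; J = K = T for the JK-to-T conversion of section 6.15.2) into the characteristic equation of the given flip-flop and compares the result with the required flip-flop for every input combination :

```python
# Sketch (assumption) : verifying two flip-flop conversions by exhaustive check.
def sr_next(s, r, q):  return s | ((1 - r) & q)             # S-R characteristic equation
def jk_next(j, k, q):  return (j & (1 - q)) | ((1 - k) & q) # JK characteristic equation
def d_next(d, q):      return d                             # D flip-flop : Q follows D
def t_next(t, q):      return q ^ t                         # T flip-flop : toggle if T = 1

# S-R flip-flop converted to D : conversion logic S = D, R = D'
assert all(sr_next(d, 1 - d, q) == d_next(d, q) for d in (0, 1) for q in (0, 1))

# JK flip-flop converted to T : conversion logic J = K = T
assert all(jk_next(t, t, q) == t_next(t, q) for t in (0, 1) for q in (0, 1))

print("both conversions reproduce the required flip-flop behaviour")
```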


Step 3 : Draw the logic diagram : Step 3 : Draw the logic diagram :
 The logic diagram is shown in Fig. 6.15.6.  The logic diagram is shown in Fig. 6.15.8.

ns e
io dg
(C-616) Fig. 6.15.6 : Logic diagram for conversion (C-618) Fig. 6.15.8 : Conversion from SR flip flop to T flip flop
of JK FF to T FF
6.15.4 SR Flip Flop to JK Flip Flop :
6.15.3
at le SR Flip Flop to T Flip Flop :

.University Questions.
SPPU : May 12, Dec. 16, Dec. 18.

University Questions.
SPPU : May 12, Dec. 14, May 15.
ic w
Q. 1 Convert the basic SR-flip-flop (SR-FF) into T-FF Q. 1 Convert the basic SR-flip-flop (SR-FF) into JK-FF.
(May 12, 2 Marks)
bl no

(May 12, 2 Marks)


Q. 2 Explain the difference between asynchronous and
synchronous counter and convert SR flip-flop into T Q. 2 Design JK flip-flop using SR flip-flop.

flip-flop. Show the design. (Dec, 16, 6 Marks) (Dec. 14, 6 Marks)
Q. 3 Design and implement T flip-flop using SR flip-flop.
Pu K

Q. 3 How will you convert the basic SR-flip-flop (SR-FF)


(Dec. 18, 6 Marks)
into JK flip-flop ? (May 15, 6 Marks)
 The stepwise conversion process is as follows :
ch

Step 1 : Write the truth table for SR to JK :


Step 1 : Write the truth table :
(C-8069) Table 6.15.3 : Truth table for SR FF to T FF  The truth table for SR to JK flip flop conversion is shown
in Table 6.15.4.
Te

(C-6389) Table 6.15.4 : Truth table for SR to JK FF conversion

Step 2 : Write the K maps and obtain the expressions


for S and R :

Step 2 : K maps and simplification :


(a) K map for S (b) K map for R
 K maps for S and R outputs are shown in Fig. 6.15.9.
(C-617) Fig. 6.15.7


(a) K-map for S output (b) K-map for R output

(C-619) Fig. 6.15.9 (a) K map and simplification

ns e
Step 3 : Logic diagram :

io dg
 The logic diagram of SR to JK flip flop is given in
Fig. 6.15.10.

at le
ic w
(b) Logic diagram
bl no

(C-621) Fig. 6.15.11 : D flip flop to T flip flop

6.15.6 T Flip Flop to D Flip Flop Conversion :

Step 1 : Write the truth table :


Pu K

(C-620) Fig. 6.15.10 : SR to JK flip flop conversion


 The truth table is as given in Table 6.15.6.
6.15.5 Conversion of D Flip Flop to T
ch

(C-7831) Table 6.15.6 : Truth table for T FF to D FF conversion


Flip Flop : SPPU : Dec. 13.

University Questions
Q. 1 What is race around condition ? How it can be
Te

avoided ? Convert D flip-flop to T flip-flop.


(Dec. 13, 6 Marks)

Step 1 : Write the truth table for D to T FF conversion :


Step 2 : K maps simplification and logic diagram :
 The truth table is as follows :
 The K map is shown in Fig. 6.15.12(a) and the logic
(C-8044) Table 6.15.5 : Truth table for D to T FF conversion
diagram is given in Fig. 6.15.12(b).

Step 2 : K map and simplification and logic diagram :


 K map is shown in Fig. 6.15.11(a) and logic diagram is
shown in Fig. 6.15.11(b). (C-622) Fig. 6.15.12(a) : K map for D output


(C-624) Fig. 6.15.14 : Logic diagram for conversion

ns e
(C-622) Fig. 6.15.12(b) : Logic diagram of T to D flip flop
from JK FF to D FF
conversion

io dg
6.15.8 JK Flip Flop to SR Flip Flop
6.15.7 JK Flip Flop to D Flip Flop Conversion :
Conversion : SPPU : May 10, Dec. 13, Dec. 19
SPPU : Dec. 09, May 14.
University Questions.
University Questions

at le
Q. 1 Convert J-K flip-flop into D-FF. Show the truth
table. (Dec. 09, 4 Marks)
Q. 1

Q. 2
Design SR flip-flop using JK flip-flop.
(May 10, 4 Marks)
Explain the difference between combinational and
ic w
Q. 2 Explain the difference between asynchronous and sequential circuit. Design S-R flip-flop using
synchronous counter and convert J-K flip flop into J-K flip-flop. (Dec. 13, 6 Marks)
D-FF. Show the design. (May 14, 6 Marks)
bl no

Q. 3 Compare combinational circuits with sequential


Step 1 : Write the truth table for JK to D conversion : circuits. Convert JK Flip-Flop into SR flip-flop.

 The truth table is as follows : (Dec. 19, 6 Marks)

(C-6390) Table 6.15.7 : Truth table for JK to Step 1 : Write the truth table for JK to SR :
Pu K

D flip flop conversion


 The truth table is shown in Table 6.15.8.

Table 6.15.8 : Truth table for JK to SR conversion


ch

(C-8045)
Te

Step 2 : K maps and simplification :

Step 2 : K-maps and simplifications :

(a) K map for J output (b) K map for K output

(C-623) Fig. 6.15.13

Step 3 : Draw the logic diagram :


 The logic diagram for JK flip flop to D flip flop is shown (a) (b)
in Fig. 6.15.14. (C-625) Fig. 6.15.15


Step 3 : Logic diagram : 6.15.10 T FF to SR FF Conversion :


 The logic diagram for JK to SR FF is shown in Step 1 : Write the truth table for T to S-R conversion :
Fig. 6.15.15(c). (C-8046) Table 6.15.10 : Truth table for T to S-R conversion

ns e
(C-626) Fig. 6.15.15(c) : JK to SR FF conversion

io dg
6.15.9 D FF to SR FF Conversion : Step 2 : K maps and simplification for T output :

Step 1 : Write the truth table for D to S-R Conversion :

at le
(C-7832) Table 6.15.9 : Truth table for D to S-R conversion
ic w
(C-629) Fig. 6.15.18 : K-map and simplification
bl no

–– –– ––
 T = S R Qn + S R Qn

Step 3 : Logic diagram :


Pu K

Step 2 : K maps and simplification :


ch
Te

(C-630) Fig. 6.15.19 : Logic diagram for T to S-R FF conversion

(C-627) Fig. 6.15.16 : K-map and simplification 6.15.11 Conversion from D FF to JK FF :

Step 3 : Logic diagram : Step 1 : Write the truth table :


 Table 6.15.11 shows the truth table obtained from the
excitation tables of D and JK FFs.

(C-8047) Table 6.15.11 : Truth table for D to JK conversion

(C-628) Fig. 6.15.17 : Logic diagram for D to S-R FF conversion


Step 2 : K maps and simplification :
(C-1342(a)) Fig. 6.15.20

Step 3 : Logic diagram :
 The logic diagram for D FF to JK FF is shown in Fig. 6.15.21.
(C-1342(b)) Fig. 6.15.21 : D FF to JK FF

6.16 Applications of Flip Flops :
 Some of the important applications of the flip flops are :
1. Elimination of keyboard debounce.
2. As a memory element.
3. In various types of registers.
4. In counters/timers.
5. As a delay element.

6.17 Study of Flip-Flop ICs :
6.17.1 SN74LS74A : Dual D-Type Positive Edge-triggered Flip-Flop Low Power Schottky :
Description :
 The SN54LS/74LS74A is a dual edge-triggered flip-flop which utilizes Schottky TTL circuitry to produce high speed D-type flip-flops. Each flip-flop has individual clear and set inputs, as well as complementary Q and Q̄ outputs.

Mode select – truth table
Operating Mode | Inputs S̄D C̄D D | Outputs Q Q̄
Set | L H X | H L
Reset (Clear) | H L X | L H
*Undetermined | L L X | H H
Load “1” (Set) | H H h | H L
Load “0” (Reset) | H H l | L H

H, h = HIGH Voltage Level
L, l = LOW Voltage Level
X = Don’t care
l, h (q) = Lower case letters indicate the state of the referenced input (or output) one set-up time prior to the LOW to HIGH clock transition.

(C-1722) Fig. 6.17.1
 Information applied at input D is transferred to the Q output on the positive-going edge of the clock pulse. Clock triggering occurs at a voltage level of the clock pulse and is not directly related to the transition time of the positive-going pulse. When the clock input is at either the HIGH or the LOW level, the D input signal does not have any effect on the Q and Q̄ outputs.
 Now consider the mode select truth table. It indicates that both outputs will be HIGH while both S̄D and C̄D are LOW, but the output states are unpredictable if S̄D and C̄D go HIGH simultaneously. If the levels at the set and clear are near VIL maximum then we cannot guarantee to meet the minimum level for VOH.
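The behaviour summarised in the mode-select table can be captured in a few lines. The Python sketch below is only an illustrative behavioural model (an assumption, not the vendor's description): the active-low set and clear inputs override the clock, and D is sampled on the LOW-to-HIGH clock transition.

# Hedged behavioural sketch of a 74LS74-style D flip-flop.
def ls74_next(q, qbar, d, sd_n, cd_n, rising_edge):
    if sd_n == 0 and cd_n == 0:
        return 1, 1            # *Undetermined: both outputs forced HIGH
    if sd_n == 0:
        return 1, 0            # Set
    if cd_n == 0:
        return 0, 1            # Reset (Clear)
    if rising_edge:
        return d, 1 - d        # load D on the positive clock edge
    return q, qbar             # otherwise hold

# Example: clear, then clock in a 1, then a 0.
q, qbar = 0, 1
for d, sd_n, cd_n, edge in [(0, 1, 0, False), (1, 1, 1, True), (0, 1, 1, True)]:
    q, qbar = ls74_next(q, qbar, d, sd_n, cd_n, edge)
    print(f"D={d} SDn={sd_n} CDn={cd_n} edge={edge} -> Q={q} Qbar={qbar}")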


(C-3624) Fig. 6.17.2

Guaranteed operating ranges :
Symbol | Parameter | Min | Typ | Max | Unit
VCC | Supply Voltage 54 | 4.5 | 5.0 | 5.5 | V
VCC | Supply Voltage 74 | 4.75 | 5.0 | 5.25 | V
TA | Operating Ambient Temperature Range 54 | –55 | 25 | 125 | °C
TA | Operating Ambient Temperature Range 74 | 0 | 25 | 70 | °C
IOH | Output Current – High 54, 74 | | | –0.4 | mA
IOL | Output Current – Low 54 | | | 4.0 | mA
IOL | Output Current – Low 74 | | | 8.0 | mA

Pin configuration :
 Fig. 6.17.3 shows the pin configuration of IC 7474.
(C-3625) Fig. 6.17.3 : Pin configuration of IC 7474

6.17.2 SN74LS76A : Dual JK Flip-Flop with Set and Clear Low Power Schottky :
Description :
 The SN54LS/74LS76A offers individual J, K, Clock Pulse, Direct Set and Direct Clear inputs. These dual flip-flops are designed so that when the clock goes HIGH, the inputs are enabled and data will be accepted.
 The logic level of the J and K inputs will perform according to the truth table as long as minimum set-up times are observed. Input data is transferred to the outputs on the HIGH-to-LOW clock transitions.

Mode Select – Truth table
Operating Mode | Inputs S̄D C̄D J K | Outputs Q Q̄
Set | L H X X | H L
Reset (Clear) | H L X X | L H
*Undetermined | L L X X | H H
Toggle | H H h h | q̄ q
Load “0” (Reset) | H H l h | L H
Load “1” (Set) | H H h l | H L
Hold | H H l l | q q̄

(C-1723) Fig. 6.17.4
 Both outputs will be HIGH while both S̄D and C̄D are LOW, but the output states are unpredictable if S̄D and C̄D go HIGH simultaneously.
H, h = HIGH Voltage Level
L, l = LOW Voltage Level
X = immaterial
l, h (q) = Lower case letters indicate the state of the referenced input (or output) one set-up time prior to the HIGH-to-LOW clock transition.

Guaranteed operating ranges :
Symbol | Parameter | Min | Typ | Max | Unit
VCC | Supply Voltage 54 | 4.5 | 5.0 | 5.5 | V
VCC | Supply Voltage 74 | 4.75 | 5.0 | 5.25 | V
TA | Operating Ambient Temperature Range 54 | –55 | 25 | 125 | °C
TA | Operating Ambient Temperature Range 74 | 0 | 25 | 70 | °C
IOH | Output Current – High 54, 74 | | | –0.4 | mA
IOL | Output Current – Low 54 | | | 4.0 | mA
IOL | Output Current – Low 74 | | | 8.0 | mA
Low 74 8.0
outputs on the HIGH-to-LOW clock transitions.


 Logic diagram and pin configuration of IC 7476 are as shown in Fig. 6.17.5.
(a) Logic diagram
(b) Pin configuration
(C-3626) Fig. 6.17.5

6.18 Analysis of Clocked Sequential Circuits :
 The behaviour of a clocked sequential circuit is dependent on the inputs, the outputs and the state of its flip-flops.
 The outputs as well as the next state are both functions of the inputs and the present state.
 The analysis of a given clocked sequential circuit includes writing the state table and drawing the state diagram for the given circuit.
 In this section, we will introduce the concepts such as state diagram, state table, state equation and input equations and the step by step analysis procedure.

6.18.1 State Table :
 We use the truth table for explaining the relation between the inputs and outputs of a combinational circuit.
 But it is not suitable to use the truth table to describe the input output relation of a sequential circuit.
 Instead we use the state table to describe the input output relation of the sequential circuits.
 The basic building block of a sequential circuit is a flip flop. The outputs QA, QB, etc. of such flip flops are used as state inputs to a sequential circuit. They are also called state variables.
 In addition to this, “x” represents an external input and Y represents the output of the sequential circuit.
 Y will be dependent on the state variables (QA, QB, ...) and the external input x.
 The general format of a state table is shown in Table 6.18.1.
(C-7833) Table 6.18.1 : General format of state table

Definition :
 The state table is defined as the table that tells us about the relation between the present state, next state, external inputs and output of a sequential circuit.

6.18.2 State Diagram :
Definition :
 A state diagram is the graphical representation of a sequential circuit, which consists of states, state transitions and actions.
 State diagrams depict the permitted states and transitions as well as the events that affect these transitions.
 The information available in the state table is represented graphically using the state diagram.
 The state diagram is drawn by using the state table as a reference. Such a state diagram is shown in Fig. 6.18.1.
(C-653) Fig. 6.18.1 : State diagram


 The circle represents the present state. The arrows between the circles define the state transition, say from 00 to 01 or 01 to 11.
 If there is a directed line connecting the same circle then it means that the next state is same as the present state.
 The lines joining the circles are labeled with a pair of binary numbers with a “/” in between. For example the line joining 00 and 01 is labeled as 1/0.
 Note that the 00 to 01 transition takes place when x = 1 and Y = 0 (see row-1 of the state table). Hence 1 in 1/0 corresponds to x and 0 corresponds to Y.

Don’t care condition in the state diagram :
 Sometimes the same next state is reached for more than one present state.
 This is called as don’t care condition in the state diagram, as shown in Fig. 6.18.2.
(C-654) Fig. 6.18.2 : Don’t care condition in state diagram

6.18.3 State Equation :
Definition :
 State equation is an algebraic equation. The left side of this equation represents the next state of the flip flops.
 And the right hand side of this equation specifies the present state conditions which make the next state equal to 1.
 For example consider the following state expression.
(C-6235)
 State equation is also called as application equation.
 We can derive the state equation from the state table using the K maps. The state equation is the Boolean function with time included in it.
 If the right hand side part is zero and we apply a clock pulse, then the next state will be equal to zero.

Ex. 6.18.1 : For the clocked D FF write the state table, draw the state diagram and write the state equation.
Soln. :
Step 1 : Write the truth table :
 Table P. 6.18.1(a) represents the truth table for a clocked D flip flop.
(C-6212) Table P. 6.18.1(a) : Truth table of a clocked D FF

Step 2 : Write the state table :
 Table P. 6.18.1(b) represents the state table for the clocked D FF.
(C-7834) Table P. 6.18.1(b) : State table of a clocked D FF

Step 3 : State diagram :
 State diagram of clocked D FF is shown in Fig. P. 6.18.1.
(C-659(a)) Fig. P. 6.18.1 : State diagram of a clocked D FF

Step 4 : Write excitation table :
 The excitation table for D FF is given in Table P. 6.18.1(c).
(C-7835) Table P. 6.18.1(c) : Excitation table for D FF
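The state table written in Step 2 can also be cross-checked with a short script. This is only an illustrative sketch (assuming the usual clocked D flip-flop behaviour of Table P. 6.18.1(a)); it tabulates the next state Qn+1 for every combination of present state Qn and input D, which is exactly what the state equation of Step 5 summarises.

# Hedged sketch: tabulate the state table of a clocked D flip-flop.
def d_ff_next(qn, d):
    return d                      # characteristic behaviour of a D FF

print("Qn  D | Qn+1")
for qn in (0, 1):
    for d in (0, 1):
        print(f" {qn}  {d} |  {d_ff_next(qn, d)}")
# The printed table confirms the state equation derived by K-map: Qn+1 = D.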


Step 5 : Write the state equation :
 The K-map for output Qn+1 is shown in Fig. P. 6.18.1(a).
(C-656) Fig. P. 6.18.1(a) : K-map and state equation

Ex. 6.18.2 : For the clocked JK FF write the state table, draw the state diagram and write the state equation.
Soln. :
Step 1 : Write the truth table :
 Table P. 6.18.2(a) represents the truth table for a clocked JK flip flop.
(C-6213) Table P. 6.18.2(a) : Truth table of JK FF

Step 2 : Write the state table :
 Table P. 6.18.2(b) represents the state table for a clocked JK FF.
(C-8070) Table P. 6.18.2(b) : State table of clocked JK FF

Step 3 : Draw the state diagram :
 State diagram is as shown in Fig. P. 6.18.2(a).
(C-657) Fig. P. 6.18.2(a) : State diagram of a JK flip flop

Step 4 : Write the excitation table :
 The excitation table of JK FF is as shown in Table P. 6.18.2(c).
(C-7840) Table P. 6.18.2(c) : Excitation table for JK FF

Step 5 : Write the state equation :
 The K-map for output Qn+1 is shown in Fig. P. 6.18.2(b).
(C-658) Fig. P. 6.18.2(b) : K-map and state equation

Ex. 6.18.3 : For a toggle FF write the state table, draw the state diagram and write the state equation.
Soln. :
Step 1 : Write the truth table :
 Table P. 6.18.3(a) gives the truth table for a T FF.
(C-8071) Table P. 6.18.3(a) : Truth table for a T FF

Step 2 : Write the state table :
 The state table is shown in Table P. 6.18.3(b).
(C-8072) Table P. 6.18.3(b) : State table for a T FF

Step 3 : Draw the state diagram :
 The state diagram is shown in Fig. P. 6.18.3(a).


(C-659) Fig. P. 6.18.3(a) : State diagram of a T FF

Step 4 : Write the excitation table :
 Table P. 6.18.3(c) represents the excitation table of T FF.
(C-7842) Table P. 6.18.3(c) : Excitation table for T FF

Step 5 : Write the state equation :
 The K-map for the Qn+1 output is shown in Fig. P. 6.18.3(b). The state equation can be obtained from this K-map.
(C-660) Fig. P. 6.18.3(b)

6.19 Design of Clocked Synchronous State Machine using State Diagram :
 Design of a clocked sequential circuit will start from the set of given specifications and it will end with drawing a logic diagram.
 The stepwise design procedure is as follows :
Step 1 : A state diagram or timing diagram or some other information is given, which describes the behaviour of the circuit that is to be designed.
Step 2 : Draw the state table.
Step 3 : The number of states can be reduced by state reduction methods.
Step 4 : Assign binary values to each state for the states in steps 2 and 3 using state assignment technique.
Step 5 : Determine the number of flip-flops required and assign a letter symbol to each one.
Step 6 : Decide the type of FF to be used.
Step 7 : Derive the circuit excitation table and output table from the state table.
Step 8 : Obtain the expressions for the circuit output and flip-flop inputs.
Step 9 : Draw the logic diagram.
 The design procedure of clocked sequential circuits will be clear by solving the following example.

Ex. 6.19.1 : For the state diagram given in Fig. P. 6.19.1(a) draw the clocked sequential circuit using T flip-flops.
(C-685) Fig. P. 6.19.1(a) : Given state diagram
Soln. :
Step 1 : Write the state table :
 The state table is written from the given state diagram as shown in Table P. 6.19.1(a).
(C-7837) Table P. 6.19.1(a) : State table

Step 2 : Number of flip-flops :
 As seen from the state table, there are no equivalent states. So there won’t be any reduction in the state diagram.
 The circuit goes through four states. Hence we need to use 2 flip-flops.

Step 3 : Write the circuit excitation table :
 The circuit excitation table is shown in Table P. 6.19.1(b). The type of FF used is T type FF.
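The mechanical part of this step, turning a state table into T flip-flop inputs, can be illustrated with a small script. This is only a sketch under an assumption: the transitions used here are a plain 2-bit up count (00, 01, 10, 11, 00); substitute the actual transitions of Fig. P. 6.19.1(a) if they differ. The key idea is that a T input must be 1 exactly when that flip-flop has to toggle, i.e. T = present state XOR next state.

# Hedged sketch: derive T inputs from a (assumed) state table.
transitions = {(0, 0): (0, 1), (0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 0)}

print("QB QA | QB+ QA+ | TB TA")
for (qb, qa), (qb_n, qa_n) in transitions.items():
    tb = qb ^ qb_n        # T is 1 exactly when that bit must toggle
    ta = qa ^ qa_n
    print(f" {qb}  {qa} |  {qb_n}   {qa_n}  |  {tb}  {ta}")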

(C-7760) Table P. 6.19.1(b) : Circuit excitation table
 The shaded portion of the circuit excitation table corresponds to the excitation table of a T FF.

Step 4 : K maps and simplifications :
 Fig. P. 6.19.1(b) shows the K-maps and corresponding simplified expressions for TA, TB i.e. the inputs of the two T flip-flops, and the Y output.
(C-686) Fig. P. 6.19.1(b) : K-maps and simplifications

Step 5 : Draw the logic diagram :
 The logic diagram for the required clocked sequential circuit is shown in Fig. P. 6.19.1(c).
(C-687) Fig. P. 6.19.1(c)

6.20 Case Study : Use of Sequential Logic Design in a Simple Traffic Light Controller :
Traffic Light Controller :
 A traffic light controller is to be designed which controls the traffic lights LA and LB shown in Fig. 6.20.1(a), located at the intersection of two roads, on the basis of the status of the traffic sensors TA and TB.
(C-6097) Fig. 6.20.1(a) : Traffic lights to be designed
 Thus the inputs to the traffic light controller are the outputs of the traffic sensors, CLOCK and RESET as shown in Fig. 6.20.1(b).
 The outputs of the traffic light controller are the driving signals for the lights LA and LB as shown in Fig. 6.20.1(b).
(C-6098) Fig. 6.20.1(b) : Traffic light controller block diagram

Operation and state diagram :
Initially (State A) :
 Initially after resetting the controller, the status of outputs is as follows :
(C-6099)


 If TA = 1, which indicates traffic on road A, then the state remains the same i.e. state A as shown in Fig. 6.20.1(c).
 But if TA = 0 (no traffic on road A), irrespective of the value of TB, the controller outputs will get modified as follows and the controller moves to state B.
(C-6100)
 This is as shown in Fig. 6.20.1(c).

State B :
 After some time in state B, the controller will move to the next state i.e. state C which is defined as follows :
(C-6101)

State C :
 The controller will continue to be in state C as long as TB = 1 which indicates that there is traffic on road B.
 But if TB = 0 (no traffic on road B), irrespective of the value of TA, the controller outputs will change in the following manner and it will move to the next state D as shown in Fig. 6.20.1(c).
(C-6102)

State D :
 After some time in state D, the controller will automatically move to the initial state A as shown in Fig. 6.20.1(c).
(C-8310)

(C-6103) Fig. 6.20.1(c) : State diagram of a traffic light controller

State table :
 We can derive the state table from the state diagram.
 The four states require 2 flip-flops (D-type). The state table of this controller is as shown in Table 6.20.1(a).
(C-8311) Table 6.20.1(a) : State table

Circuit excitation table :
(C-8312) Table 6.20.1(b) : Circuit excitation table

K-Maps and simplification :
(C-6104) Fig. 6.20.2 : K-maps (Contd..)



(C-6104) Fig. 6.20.2 : K-maps

Realization :
 The traffic light controller is as shown in Fig. 6.20.3.
(C-6105) Fig. 6.20.3 : Realization of the traffic controller
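Before moving on, the state behaviour described above can be double-checked against the state table with a short behavioural sketch. This is an assumption-laden illustration, not the book's realization: it treats the controller as a Moore machine in which state A is held while TA = 1, state C is held while TB = 1, states B and D each last one clock period, and the light outputs per state follow the usual green/yellow/red pattern implied by Figs. 6.20.1(a) to (c).

# Hedged sketch of the traffic light controller as a Moore machine.
OUTPUTS = {                      # (LA, LB) per state - assumed encoding
    "A": ("green", "red"),
    "B": ("yellow", "red"),
    "C": ("red", "green"),
    "D": ("red", "yellow"),
}

def next_state(state, ta, tb):
    if state == "A":
        return "A" if ta else "B"
    if state == "B":
        return "C"
    if state == "C":
        return "C" if tb else "D"
    return "A"                   # state D always returns to A

state = "A"                      # RESET puts the controller in state A
for ta, tb in [(1, 0), (1, 1), (0, 1), (0, 1), (1, 1), (1, 0), (0, 0)]:
    la, lb = OUTPUTS[state]
    print(f"state={state} TA={ta} TB={tb} LA={la} LB={lb}")
    state = next_state(state, ta, tb)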

Review Questions
Q. 1 Mention the types of digital systems. Explain with block diagram their operating principle.
Q. 2 What is a flip-flop ?
Q. 3 State and explain the triggering methods used for flip-flops.
Q. 4 What is the function of preset and clear inputs in flip-flop ?
Q. 5 Explain with truth table the working of clocked RS flip-flop.
Q. 6 State the disadvantages of RS flip-flop. How can they be avoided ?
Q. 7 Explain with diagram the working of D type flip-flop. Give its truth table.


Q. 8 Give reason why D flip-flop is called as data latch ?
Q. 9 Draw the circuit using logic gates of a T-type flip-flop. Draw its symbol and write its truth table.
Q. 10 Explain S-R flip flop using NOR gates.
Q. 11 Describe how two cross-coupled NAND gates form a R-S flip-flop ? Write its truth table.
Q. 12 What are the various types of flip-flops ?
Q. 13 Draw the circuit of SR flip-flop using NAND gate.
Q. 14 Draw the schematic diagram of JK flip-flop and describe its working. Write down its truth table.
Q. 15 What is race around condition ?
Q. 16 Draw the circuit of J-K flip-flop using NAND gate.
Q. 17 Draw a neat diagram of master slave J-K flip-flop. Explain how race around condition is avoided using master slave J-K flip-flop ?
Q. 18 Explain the working of the master slave JK flip-flop.
Q. 19 Can a flip flop be used as a memory ? If so how many bits can be stored by R-S flip flop ?
Q. 20 Explain T flip-flop.
Q. 21 Explain the following flip-flops :
1. Clocked SR.
2. JK with preset and clear.
3. Master Slave JK.
4. D type and T type.
Q. 22 Design a conversion logic to convert a JK flip-flop to a D flip-flop.
Q. 23 Write a short note on race around condition in JK flip-flop.
Q. 24 Draw neat circuit diagram of clocked JK flip-flop using NAND gates. Give its truth table and explain race around condition.
Q. 25 What is race around condition ? How does it get eliminated in master slave JK FF ? Explain.
Q. 26 Explain how JK FF is converted into :
1. D FF
2. T FF
Q. 27 Carry out the following flip flop conversions (Conversion tables and K-maps expected) :
1. S-R to D
2. D to S-R
3. J-K to S-R
Q. 28 If Q̄ output of a D-type Flip-Flop is connected to D input, it acts as a toggle switch. State whether true or false ? Justify your answer.
Q. 29 What is the basic difference between pulse-triggered and edge-triggered flip-flops ?





Unit 3

Chapter

7
Counters

Syllabus
Application of flip-flops : Counters - Asynchronous, Synchronous and modulo n counters, Study of 7490 modulus n counter ICs & their applications to implement mod counters.

Chapter Contents
7.1 Introduction
7.2 Asynchronous / Ripple Up Counters
7.3 Asynchronous Down Counters
7.4 UP / DOWN Counters
7.5 Modulus of the Counter (MOD-N Counter)
7.6 Ripple Counter IC 7490 (Decade Counter)
7.7 Problems Faced by Ripple Counters
7.8 Synchronous Counters
7.9 Modulo – N Synchronous Counters
7.10 UP / DOWN Synchronous Counter
7.11 Lock Out Condition
7.12 Bush Diagram
7.13 Applications of Counters


7.1 Introduction :
Definition :
 The digital circuit used for counting pulses is known as counter. It is a sequential circuit.
 Counters are the most widely used application of flip-flops. A counter is a group of flip-flops with a clock signal applied.
 Counters count the number of clock pulses. Therefore with some modifications we can use them for measuring frequency or time period.

7.1.1 Types of Counters : SPPU : May 08.
University Questions.
Q. 1 What do you mean by binary ripple counter ? (May 08, 2 Marks)
 Counters are basically of two types :
1. Asynchronous or ripple counters.
2. Synchronous counters.
1. Asynchronous or ripple counters :
 For these counters the external clock signal is applied to one flip flop and then the output of the preceding flip-flop is connected to the clock of the next flip-flop.
2. Synchronous counters :
 In synchronous counters the external clock pulse is applied to all the flip-flops simultaneously.
 Ring counter and Johnson counter are the examples of synchronous counters.

7.1.2 Classification of Counters :
 Depending on the way in which the counter outputs change, the synchronous or asynchronous counters are classified as follows :
(C-6214)
 Up counters are the counters that count from small to big count. Their output goes on increasing as they receive clock pulses.
 For example, the output of an up counter will be 0-1-2-3…
 Down counters are the counters that count from large to small count. Their output goes on decreasing as they receive clock pulses.
 For example, the output of a down counter will be 7-6-5-4-3…
 Up/Down counter is the combination of up counter and down counter.

7.2 Asynchronous / Ripple Up Counters :
Logic diagram :
 Fig. 7.2.1 shows the logic diagram of a 2-bit ripple up counter.
(C-771) Fig. 7.2.1 : A two bit asynchronous binary up counter
 The number of flip-flops used is 2. Note that the number of bits will always be equal to the number of flip-flops. Thus a 4 bit counter will use four flip-flops.
 The toggle (T) flip-flops are being used. But we can use the JK flip-flops also with J and K connected permanently to logic 1.
 External clock is applied to the clock input of flip-flop A which is the LSB flip-flop and the QA output is applied to the clock input of the next flip-flop i.e. FF-B.

Operation of the counter :
 Initially let both the flip-flops be in reset condition.
 QB QA = 00
On the first negative going clock edge :
 As soon as the first falling edge of the clock hits FF-A, it will toggle as TA = 1. Hence QA will become equal to 1.
 QA is connected to the clock input of FF-B. Since QA has changed from 0 to 1, it is treated as a positive clock edge by FF-B.
receive clock pulses.


 There is no change in the status of QB because FF-B is a negative edge triggered FF.
 Therefore after the first clock pulse the counter outputs are
QB QA = 01 ….. After the first CLK pulse

At the second falling edge of clock :
 On the arrival of the second falling clock edge, FF-A toggles again and QA changes from 1 to 0.
 QA = 0 ….. Corresponding to the 2nd negative clock edge
 This change in QA (from 1 to 0) acts as a negative clock edge for FF-B. So it will also toggle, and QB will change from 0 to 1.
 QB = 1
 Hence after the second clock pulse the counter outputs are
QB QA = 10 ….. After the second CLK pulse
 Note that both the outputs are changing their state.
 But both the changes do not take place simultaneously. QA will change first from 1 to 0 and then QB will change from 0 to 1.
 This is due to the propagation delay of FF-A. So both flip-flops will never trigger at the same instant.
 Therefore the counter is called as an asynchronous counter. This is shown in Fig. 7.2.2.
(C-772) Fig. 7.2.2 : FFs do not change their state simultaneously

At the third falling edge of clock :
 On arrival of the third falling edge, FF-A toggles again and QA becomes 1 from 0.
 Since this is a positive going change, FF-B does not respond to it and remains inactive. So QB does not change and continues to be equal to 1.
 QB QA = 11 ….. After the third CLK pulse

At the 4th negative clock edge :
 On the 4th falling clock edge, FF-A toggles and QA changes from 1 to 0.
 This negative going change in QA acts as a negative clock pulse for FF-B. Hence it toggles to change QB from 1 to 0.
 QB QA = 00 ….. After the fourth CLK pulse
 So the counter has reached the original state. The counter operation will now repeat.
 Table 7.2.1 summarizes the operation of the counter and Fig. 7.2.3 shows the timing waveforms.
(C-6843) Table 7.2.1 : Summary of operation of a 2-bit binary ripple up counter
(C-773) Fig. 7.2.3 : Timing diagram for a 2 bit ripple up counter

Why is it called counter ?
 See Fig. 7.2.3. The decimal count corresponds to the number of clock pulses which the counter has received.
 Thus this circuit counts the clock pulses. Hence it is called as counter.
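The count sequence of Table 7.2.1 can be reproduced with a few lines of Python. This is a minimal sketch (not from the text): FF-A toggles on every external falling clock edge, and FF-B toggles whenever it sees a falling edge on QA, which is exactly the ripple connection of Fig. 7.2.1.

# Hedged sketch of the 2-bit ripple up counter of Fig. 7.2.1.
qa = qb = 0
print("CLK pulse | QB QA | decimal count")
for pulse in range(1, 9):
    old_qa = qa
    qa ^= 1                      # FF-A toggles on every falling clock edge
    if old_qa == 1 and qa == 0:  # falling edge of QA clocks FF-B
        qb ^= 1
    print(f"    {pulse}     |  {qb}  {qa} | {2 * qb + qa}")

Running it prints the repeating sequence 1, 2, 3, 0, ..., matching the table and timing diagram above.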


Number of states :
 As seen from Table 7.2.1, this counter has four distinct states of output namely 00, 01, 10 and 11. In general the number of states = 2ⁿ where n is equal to the number of flip-flops.

Maximum count :
 As seen from Table 7.2.1, the maximum count is 3 (decimal) i.e. 11 binary.
Maximum count = 3 = 2² – 1
 In general the maximum count = (2ⁿ – 1), where n = Number of flip-flops.

7.2.1 Two Bit Asynchronous Up Counter using JK Flip-Flops :
Logic diagram :
 A 2 bit asynchronous up counter using JK flip-flops is shown in Fig. 7.2.4.
(C-774) Fig. 7.2.4 : Two bit ripple up counter using JK flip-flops
 Note that the J and K inputs of both the flip-flops are connected to logic 1, so actually the JK flip-flops are converted into T flip-flops.
 The operation of this circuit is exactly same as that of the counter using the T flip-flops.

7.2.2 3 Bit Asynchronous Up Counter : SPPU : May 07, Dec. 09, Dec. 12
University Questions.
Q. 1 Draw and explain 3-bit asynchronous up-counter. Also draw the necessary timing diagram. (May 07, Dec. 12, 6 Marks)
Q. 2 Draw 3-bit asynchronous counter. Explain timing diagram for the same. (Dec. 09, 8 Marks)

Logic diagram :
 We can apply all the basic concepts which were introduced for the 2-bit ripple up counter to the 3-bit ripple up counter.
 Fig. 7.2.5 shows the logic diagram of a 3-bit ripple up counter. Since it is a 3-bit counter, we need to use 3 flip-flops.
(C-775) Fig. 7.2.5 : 3-bit ripple up counter
 Operation of the 3-bit ripple up counter takes place in exactly similar manner as that of a 2-bit counter.

Truth table :
 Table 7.2.2 summarizes the operation of the 3-bit asynchronous up counter.
(C-776(a)) Table 7.2.2 : Summary of operation of a 3-bit ripple up counter
 Note that the asynchronous preset and clear terminals are also being used.
 Both of them are active low inputs. Hence for the normal operation of the counter the preset and clear terminals should be connected to logic 1.

Number of states :
 Number of states = 2ⁿ = 2³ = 8.


 The 3 bit ripple up counter can have 8 distinct states i.e. QC QB QA can take up values from 000, 001, 010, …, 110, 111.

Maximum count :
 Maximum count = 2ⁿ – 1 = 8 – 1 = 7. Refer Table 7.2.2. The maximum count is QC QB QA = 1 1 1 i.e. decimal 7. Note that QC is treated as MSB and QA as LSB.

Timing diagram :
 The timing diagram of a 3-bit ripple up counter is as shown in Fig. 7.2.6.
(C-776) Fig. 7.2.6 : Timing diagram for a 3-bit ripple up counter

7.2.3 4 Bit Asynchronous up Counter : SPPU : May 08, May 11.
University Questions.
Q. 1 Draw and explain 4-bit binary up counting with this concept. Also draw the necessary timing diagram. Is there any frequency division concept in it ? Comment on frequency generated at the output of each flip-flop. (May 08, 4 Marks)
Q. 2 Draw 4-bit asynchronous counter. Also explain timing diagram for the same. (May 11, 8 Marks)

Logic diagram :
 Fig. 7.2.7(a) shows the circuit diagram of a 4 bit asynchronous counter using the T flip flops.
(C-777) Fig. 7.2.7(a) : 4 bit asynchronous up counter
 Since it is a 4 bit ripple up counter, we need to use four flip flops.
 Initially all the flip flops have zero output.
 QD QC QB QA = 0000.
 All the flip flops are negative edge triggered. The CLK signal is applied to the clock input of FF-A whereas the Q output of every FF is applied to the clock input of the next FF.
 For example QA to CLK of FF-B, QB to CLK of FF-C and so on.

Truth table :
 Table 7.2.3 shows the truth table for the 4 bit asynchronous up counter. Its output passes through 16 states from 0000 i.e. (0)₁₀ to 1111 i.e. (15)₁₀.
(C-6844) Table 7.2.3 : Truth table for a 4-bit asynchronous up counter
 After 1111, the output again becomes 0000 and the operation repeats itself.
 Fig. 7.2.7(b) shows the timing diagram for the 4-bit asynchronous up counter.
 QD acts as MSB of the output whereas QA acts as the LSB.

Number of states :
 The number of states through which the output of a 4 bit up counter passes is 16 (from 0 to 15).

Maximum count :
 The maximum count is 15 i.e. 1111. Thus a 4-bit ripple up counter will count from 0000 to 1111.


Timing diagram :
(C-778) Fig. 7.2.7(b) : Waveforms of a 4 bit asynchronous up counter

7.2.4 State Diagram of a Counter :
 The state diagram of a counter represents the states of a counter graphically.
 For example for a 2-bit up counter the state diagram is shown in Fig. 7.2.8(a) and for a 2-bit down counter the state diagram is shown in Fig. 7.2.8(b).
 The number written inside a circle represents the state number, whereas the arrow shows the direction of the counter (up or down).
 Note that in the counters only the state is important. Hence in the state diagram we have not shown any input or output conditions.
(a) For a 2-bit up counter (b) For a 2-bit down counter
(C-779) Fig. 7.2.8 : State diagram

7.3 Asynchronous Down Counters :
7.3.1 3-Bit Asynchronous Down Counter :
Truth table and state diagram :
 All the counters discussed so far have counted upwards from zero. So they can be called as up counters.
 But the counters which can count in the downward direction i.e. from the maximum count to zero are called down counters.
 The countdown sequence for a 3-bit asynchronous down counter is as follows :
(C-6970) Table 7.3.1 : Truth table of 3 bit down counter
(C-2762) Fig. 7.3.1 : State diagram
 Thus counting takes place as follows :
QC QB QA = 111, 110, 101, 100, 011, 010, 001, 000.
 From this sequence it is evident that FF-A should toggle at every negative going clock edge but FF-B should change its state only at those instants when QA changes from LOW (0) to HIGH (1) and QC should change only when QB changes from LOW to HIGH.

 Thus in a down counter, each FF except the first one (FF-A) should toggle when the output of its preceding flip-flop changes from LOW to HIGH.
 If all the FFs are negative edge triggered i.e. responding to the negative CLK edge, then we can place an inverter in front of every CLK input or we can drive the CLK input of the next FF from the Q̄ output of the preceding FF and not from the Q output, as shown in Fig. 7.3.2.

Logic diagram :
 A 3-bit asynchronous down counter is shown in Fig. 7.3.2. The clock input is applied directly to the clock input of FF-A. But Q̄A is connected to the clock of FF-B, Q̄B to the clock of FF-C and so on.
(C-781) Fig. 7.3.2 : A 3-bit asynchronous down counter

Operation :
 Initially let all the flip-flops be in the reset condition.
 QC QB QA = 0 0 0
 As soon as the first falling clock pulse arrives, FF-A toggles. So QA becomes 1 and Q̄A changes from 1 to 0.
 The negative going change in Q̄A acts as a clock to FF-B. Hence FF-B will change its state. So QB becomes 1 and Q̄B changes from 1 to 0.
 This negative going change in Q̄B acts as a clock to FF-C. Hence FF-C will change its state. So QC becomes 1 and Q̄C becomes 0.
 Thus after the first clock pulse the outputs of the counter are,
QC QB QA = 1 1 1 ….. After the 1st CLK pulse
 Corresponding to the second falling clock edge, FF-A toggles. QA becomes 0 and Q̄A becomes 1. This positive going change in Q̄A does not alter the state of FF-B. So QB remains 1 and Q̄B remains 0. So there is no change in the state of FF-C. Hence after the second clock pulse the counter outputs are,
QC QB QA = 1 1 0 ….. After the 2nd CLK pulse
 The down counting will thus take place. Similarly the counter will count down to pass through the states 101, 100, 011, 010, 001 and 000. The operation repeats itself thereafter.

7.4 UP / DOWN Counters :
 We have designed the up counters and the down counters separately.
 But in practice both these modes are generally combined together and an UP/DOWN counter is formed.
 A mode control (M) input is also provided to select either up count or down count mode of operation.
 A combinational circuit is required to be designed and used between each pair of flip-flops in order to achieve the up/down operation.

Types of up/down counters :
 The up/down counters are of two types :
1. UP/DOWN ripple counters.
2. UP/DOWN synchronous counters.

7.4.1 UP/DOWN Ripple Counters :
 In the up/down ripple counter all the FFs operate in the toggle mode. So either T flip-flops or JK flip-flops are to be used.
 The LSB flip-flop receives the clock directly. But the clock to every other FF is obtained from the Q or Q̄ output of the previous FF.

UP counting mode (M = 0) :
 The CLK signal is applied directly to the clock input of the LSB flip-flop.
 For the remaining flip-flops, the Q output of the preceding FF is connected to the clock of the next stage if up counting is to be achieved. For this mode, the mode select input M is at logic 0 (M = 0).

DOWN counting mode (M = 1) :
 The clock signal is applied directly to the clock input of the LSB flip-flop. For the remaining flip-flops, the Q̄ output of the preceding FF is connected to the clock of the next FF.
counter outputs are, the next FF.


 This will operate the counter in the down counting mode. For the down counting mode the mode select input M is kept at logic 1 (M = 1).

7.5 Modulus of the Counter (MOD-N Counter) : SPPU : Dec. 09, Dec. 10.
University Questions.
Q. 1 What is MOD counter ? (Dec. 09, 2 Marks)
Q. 2 What is the advantage of MOD counter ? (Dec. 10, 2 Marks)

Definition :
 Modulus (MOD) of a counter represents the number of states through which the counter progresses during its operation. It is denoted by N. Thus MOD-N counter means the counter progresses through N states.
 A MOD-4 counter will have 4 states. A MOD-6 counter will have 6 states etc.
 Thus a 3 bit counter which has 8 states is a MOD 8 counter, and a 4 bit counter is a MOD-16 counter.
 In general m number of flip-flops are required to construct a MOD-N counter, where N ≤ 2ᵐ.
 We can design a modulo counter with the help of the basic ripple counter structure and a combinational logic called reset logic.
 The 2-bit ripple counter is called as MOD-4 counter and the 3-bit ripple counter is called as MOD-8 counter. So in general, an n-bit ripple counter is called as modulo-N counter where
MOD number = 2ⁿ
 So we can conclude that modulus of a counter represents the number of states through which the counter progresses during its operation.
 Can we have a modulo-5 counter using a 3-bit ripple counter ? That means can we restrict the number of states to only 5 instead of 8 under normal conditions ?
 The answer is yes. We can design such modulo counters with the help of the basic ripple counter structure and a combinational logic called reset logic.
 Table 7.5.1 shows the relation between 2, 3 and 4 bit counters and their modulus.
(C-8049) Table 7.5.1

7.5.1 Design of Asynchronous MOD Counters :
Ex. 7.5.1 : Design MOD-5 asynchronous counter, and also draw the waveforms.
Soln. :
Step 1 : Draw the state diagram :
 The state diagram of MOD-5 ripple counter is as shown in Fig. P. 7.5.1(a).
(C-794) Fig. P. 7.5.1(a) : State diagram of a MOD-5 ripple counter

Step 2 : Write truth table for the reset logic :
 Table P. 7.5.1 shows the truth table for the reset logic.
(C-6216) Table P. 7.5.1 : Truth table for the reset logic

Step 3 : K-map :
(C-795) Fig. P. 7.5.1(b) : K-map and simplification
 The K map is as shown in Fig. P. 7.5.1(b).


 The states 0 through 4 are valid states and the output of the reset logic (Y) is inactive (1) for them.
 The states 5, 6 and 7 are invalid states. If the counter enters into any one of these states then Y = 0 (active) and it will reset all the flip-flops.

Step 4 : Logic diagram :
 The logic diagram of a MOD-5 ripple counter is shown in Fig. P. 7.5.1(c).
(C-796) Fig. P. 7.5.1(c) : Logic diagram of MOD-5 ripple counter

Step 5 : Timing diagram :
 The timing diagram is as shown in Fig. P. 7.5.1(d).
(C-797) Fig. P. 7.5.1(d) : Timing diagram of MOD-5 ripple counter

Ex. 7.5.2 : For MOD-11 asynchronous up counter :
1. Draw circuit diagram. Use T flip-flop.
2. Write truth table.
3. Draw timing diagram.
4. If the output frequency is 11 kHz what is the clock input ?
Soln. :
Step 1 : Write the truth table :
(C-6268) Table P. 7.5.2

Step 2 : Draw the K map :
 The K map is shown in Fig. P. 7.5.2(a).
(C-801) Fig. P. 7.5.2(a) : K map
 The simplified equation for the Y output of the reset logic is as follows :
Y = Q̄D + Q̄C (Q̄A + Q̄B)

Step 3 : Draw the circuit diagram :
(C-802) Fig. P. 7.5.2(b) : MOD 11 counter

Step 4 : Clock input frequency :
 The MSB output of a MOD-11 counter runs at fCLK / 11. Hence for an output frequency of 11 kHz the required clock input is fCLK = 11 × 11 kHz = 121 kHz.
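The reset expression obtained from the K-map can be checked by brute force. The Python sketch below is not part of the original solution; it assumes the simplified result Y = Q̄D + Q̄C(Q̄A + Q̄B) and confirms that Y stays inactive (1) for the valid counts 0 to 10 and goes active (0) for 11 to 15, which is what is needed to clear the flip-flops.

# Hedged check of the MOD-11 reset logic expression.
def reset_y(qd, qc, qb, qa):
    nqd, nqc, nqb, nqa = 1 - qd, 1 - qc, 1 - qb, 1 - qa
    return nqd | (nqc & (nqa | nqb))

for count in range(16):
    qd, qc, qb, qa = (count >> 3) & 1, (count >> 2) & 1, (count >> 1) & 1, count & 1
    assert reset_y(qd, qc, qb, qa) == (1 if count <= 10 else 0)
print("Y is 1 for counts 0-10 and 0 for counts 11-15, as required.")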


7.5.2 Frequency Division Taking Place in Asynchronous Counters : SPPU : May 05, May 08, Dec. 11
University Questions.
Q. 1 Assume 16 MHz clock source in a system. How will you divide this frequency by a factor 8 ? Explain your logic with suitable circuit diagram. (May 05, Dec. 11, 8 Marks)
Q. 2 What do you mean by binary ripple counter ? Draw and explain 4-bit binary up counting with this concept. Also draw the necessary timing diagram. Is there any frequency division concept in it ? Comment on frequency generated at the output of each flip-flop. (May 08, 10 Marks)

 In the chapter on flip-flops, we have seen that a flip-flop in toggle mode divides the clock frequency by 2.
 That means the frequency of the Q or Q̄ output waveform of a toggle flip-flop is exactly half of the clock frequency.
 The concept of frequency division is applicable to the counters as well because we use flip-flops in the toggle mode for counters.
 Refer to the timing waveforms of a 3-bit asynchronous up counter and observe the frequencies of the QA, QB and QC waveforms with respect to the clock frequency.
(C-807) Fig. 7.5.1 : Timing waveforms of a 3-bit asynchronous up counter

Conclusions :
 Let one cycle period of the clock signal be T sec. Hence the clock frequency fCLK = (1/T) Hz.
 Output of the least significant flip-flop (i.e. FF-A) has a one cycle period of (2 T) as shown in Fig. 7.5.1. Hence,
Frequency of QA output = fA = 1/(2T) = fCLK/2
 The one cycle period of QB is 4 T. Hence the frequency of QB is given by,
fB = 1/(4T) = fCLK/4 ….. since 1/T = fCLK
 Similarly the frequency of the QC output is given by
fC = 1/(8T) = fCLK/8

Note : In any counter, the signal at the output of the last FF (i.e. MSB) will have a frequency equal to the input clock frequency divided by the MOD number of the counter.

7.5.3 Disadvantages of Ripple Counters :
 Every flip-flop has its own propagation delay. In a ripple counter the output of the previous FF is used as clock for the next FF.
 Hence the propagation delay goes on accumulating. For a 3-bit ripple counter the propagation delay of the first FF gets added to that of the second FF to decide the transition time for the third stage.
 This accumulated time delay is the main problem with the ripple counters, because the propagation delay goes on increasing with increase in the number of flip-flops.
 This will put a limitation on the maximum clock frequency.

7.6 Ripple Counter IC 7490 (Decade Counter) :
Internal structure :
 IC 7490 is a TTL MSI decade counter. It contains four master slave flip-flops and a few logic gates to provide a divide-by-two counter and a three stage binary counter which provides a divide by 5 counter (MOD-5), as shown in Fig. 7.6.1.
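The frequency-division rule can be illustrated numerically. The short sketch below is an illustration only: each toggle stage halves the frequency, so the n-th output runs at fCLK / 2ⁿ, and the 16 MHz divide-by-8 case of Q. 1 therefore needs three toggle flip-flops.

# Sketch of the divide-by-2-per-stage rule for ripple counters.
f_clk = 16e6                        # 16 MHz clock, as in Q. 1
for n in range(1, 4):
    print(f"output of stage {n}: {f_clk / 2**n / 1e6:.1f} MHz")
# stage 3 gives 2.0 MHz, i.e. 16 MHz divided by 8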


ns e
(C-1374) Fig. 7.6.1 : The basic internal structure of IC 7490
 IC 7490 is a MOD-10 or decade counter. It is a 14 pin IC and its pin configuration is as shown in Fig. 7.6.2.
(C-1375) Fig. 7.6.2 : Pin configuration of IC 7490

Description :
 Table 7.6.1 explains the pin configuration of IC 7490.
Table 7.6.1 : Pin name and description of IC 7490
Pin name | Description
Input A | Clock input to FF-A which is negative edge triggered.
Input B | This is the clock input to the internal MOD-5 ripple counter, which is negative edge triggered.
R0(1), R0(2) | Gated zero reset inputs.
R9(1), R9(2) | These are gated set to nine inputs.
QA | Output of internal MOD-2 counter or FF-A.
QD, QC, QB | Outputs of internal MOD-5 counter with QD as MSB.
VCC | + 5 V DC
GND | Logic ground. Acts as a reference.

7.6.1 The Internal Diagram of IC 7490 :
 A simplified internal diagram of IC 7490 is shown in Fig. 7.6.3.
(C-1376) Fig. 7.6.3 : A simplified internal diagram of IC 7490

Function table :
 The reset/count function table of IC 7490 is shown in Table 7.6.2.
(C-6217) Table 7.6.2 : Reset/count truth table

Conclusion :
1. If both the reset inputs R0(1) and R0(2) are at logic 1 then all the flip-flops will be reset and the output is given by
QD QC QB QA = 0 0 0 0
2. If both the preset inputs R9(1) and R9(2) are at logic 1 then the counter output is set to decimal 9.


 QD QC QB QA = 1 0 0 1
3. If any one pin of R0(1), R0(2) and one of R9(1), R9(2) is low, then the counter will be in the count mode.

Ex. 7.6.1 : Demonstrate the use of IC 7490 as a decade counter and explain its operation. Dec. 04, 3 Marks.
Soln. :
 The connection diagram for IC 7490 as decade counter is shown in Fig. P. 7.6.1.
(C-1377) Fig. P. 7.6.1 : 7490 as decade counter
1. Note that the QA output of FF-A is externally connected to the input B which is the clock input of the internal MOD-5 ripple counter.
2. Hence QA will toggle on every falling edge of the clock input whereas the outputs QD QC QB of the MOD-5 counter will increment from 000 to 100 on the low going change of the QA output.
3. Table P. 7.6.1 summarizes the operation of the 7490 as decade counter. Due to cascading of MOD-2 and MOD-5 counters, the overall configuration becomes a MOD-10 i.e. decade counter configuration.
4. The reset inputs R0(1), R0(2) and the preset inputs R9(1), R9(2) are connected to ground so as to make them inactive.
5. As shown in Table P. 7.6.1 the counter counts from 0000 to 1001 i.e. from 0 to 9. After 1001 the MOD-5 counter resets to 000 and QA changes to 0. Hence the next count after 1001 is 0000 and recycling begins.
(C-6218) Table P. 7.6.1 : Summary of operation of IC 7490 decade counter

7.6.2 Other Applications of IC 7490 :
 Some of the other applications of IC 7490 are as follows :
1. Symmetrical Bi-quinary divide by ten counter.
2. Divide by two (MOD-2) and divide by 5 (MOD-5) counter.

7.6.3 Symmetrical Bi-quinary Divide by Ten Counter :
 In this application the clock input is applied to the B input and the QD output is connected to the A input as shown in Fig. 7.6.4.
 The output is obtained at the QA output. It is a perfect square wave with a 50% duty cycle at a frequency equal to fCLK / 10.
(C-1378) Fig. 7.6.4 : Symmetrical bi-quinary divide by ten counter
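The decade connection of Fig. P. 7.6.1 is easy to model behaviourally. The Python sketch below is an assumption-based illustration, not a gate-level model of the IC: a MOD-2 stage (QA) driven by the input clock is cascaded into a MOD-5 stage (QD QC QB) that advances on the falling edge of QA, which reproduces the count sequence of Table P. 7.6.1.

# Hedged behavioural sketch of IC 7490 wired as a decade counter.
qa, mod5 = 0, 0                      # mod5 holds QD QC QB as a value 0..4
print("pulse | QD QC QB QA | decimal")
for pulse in range(1, 12):
    old_qa = qa
    qa ^= 1                          # MOD-2 stage toggles on every pulse
    if old_qa == 1 and qa == 0:      # falling edge of QA clocks the MOD-5 stage
        mod5 = (mod5 + 1) % 5
    qd, qc, qb = (mod5 >> 2) & 1, (mod5 >> 1) & 1, mod5 & 1
    print(f"  {pulse:2d}  |  {qd}  {qc}  {qb}  {qa} | {2 * mod5 + qa}")

The printed decimal column steps 1, 2, ..., 9, 0, 1, confirming the recycling behaviour described in point 5 above.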


Operation :
 The operation of this counter has been summarized in Table 7.6.3.
(C-6219) Table 7.6.3 : Summary of operation of symmetrical bi-quinary divide by 10 counter

Output waveform :
 The waveform at the QA output is shown in Fig. 7.6.5. It shows that the QA output is a perfect square wave.
(C-1379) Fig. 7.6.5 : Waveforms showing a perfect square wave output

Ex. 7.6.2 : Draw MOD 6 counter using IC 7490. Write truth table. Dec. 04, 3 Marks.
Soln. :
Step 1 : Truth table :
 Table P. 7.6.2 shows the truth table for a MOD 6 counter.
(C-7764) Table P. 7.6.2

Step 2 : Draw the circuit :
 The total number of states is 6.
 Note that QC and QB are connected to R01 and R02 respectively.
 So when the counter output is (6)₁₀ i.e. QC QB QA = 110 then the counter will reset to 000.
(C-1380) Fig. P. 7.6.2 : MOD 6 counter using IC 7490

Ex. 7.6.3 : Design a MOD-7 counter using 7490.
Soln. :
 MOD-7 counter counts through the 7 states as shown in Fig. P. 7.6.3(a).
(C-1381) Fig. P. 7.6.3(a) : State diagram for MOD-7 counter
 From all the invalid states (7, 8, 9) and from 6 the counter should reset.

Design of reset logic :
 The output of the reset logic should be 1 corresponding to all the invalid states. The reset logic output is applied to R0(1) and R0(2).
 The truth table for the reset logic is shown in Table P. 7.6.3 and the K-map is shown in Fig. P. 7.6.3(b).

Table P. 7.6.3 and the K-map is shown in Fig. P. 7.6.3(b).


(C-6220) Table P. 7.6.3 : Truth table of the reset logic
(C-1382) Fig. P. 7.6.3(b) : K-map for the output of reset logic
 Note that for all the states beyond 1001, we have entered a logic 1 in the K-map treating all those states as invalid states.

Expression for Y :
 From the K-map, the simplified expression for Y is given by,
Y = QC QB QA + QD

Logic diagram :
 The logic diagram of the MOD-7 counter is shown in Fig. P. 7.6.3(c).
(C-1383) Fig. P. 7.6.3(c) : MOD-7 counter using IC 7490

Ex. 7.6.4 : Draw basic internal architecture of IC 7490. Design a divide by 20 counter using same. Dec. 11, 8 Marks.
Soln. : Solve it yourself.

Ex. 7.6.5 : Explain the internal diagram of IC 7490. Design MOD 7 and MOD 98 counter using 7490. (May 12, 8 Marks)
Soln. :
 Refer section 7.6.1 for internal diagram of IC 7490. Refer Ex. 7.6.3 for MOD-7 counter.

MOD 98 counter :
Step 1 : Number of ICs required :
 Up to MOD-100, two IC 7490s will be sufficient. Hence to implement a divide by 98 counter, we have to use two decade counter ICs.

Step 2 : Design of reset logic :
 To design the reset logic, earlier we used to draw the K-maps.
 The 3, 4 or at the most 5 variable K-maps are practically possible to handle.
 But here, there will be 8 FFs and so there will be 8 variables. So use of K-maps is practically impossible.
 Hence we will simplify the reset logic design as follows :
 A divide by 98 counter counts through 98 states from 0 to 97 and the counter should reset as soon as the count becomes 98.
 So in order to reset the counter at 98, connect the Q outputs which are equal to 1 in the count of 98 to an AND gate as shown in Fig. P. 7.6.5 and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-6005) Fig. P. 7.6.5

Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-98 counter is shown in Fig. P. 7.6.5(a).
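The "detect the count and clear" rule used here can be checked numerically. The sketch below is an illustration under the stated assumption that the two decade counters hold the BCD digits of the count: at 98 the tens digit is 9 (1001) and the units digit is 8 (1000), so the lines that are 1 are QD and QA of the tens IC and QD of the units IC, and those three feed the AND gate of Fig. P. 7.6.5.

# Hedged sketch of the MOD-98 reset detection on two cascaded decades.
def q_bits(bcd_digit):
    return {name: (bcd_digit >> shift) & 1
            for name, shift in (("QD", 3), ("QC", 2), ("QB", 1), ("QA", 0))}

tens, units = q_bits(9), q_bits(8)
reset = tens["QD"] & tens["QA"] & units["QD"]      # detect count 98
print("tens:", tens, "units:", units, "-> reset =", reset)

# No smaller BCD count activates the detector, so 0-97 are counted normally.
for count in range(98):
    t, u = q_bits(count // 10), q_bits(count % 10)
    assert not (t["QD"] & t["QA"] & u["QD"])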


ns e
(C-3602) Fig. P. 7.6.5(a)

Ex. 7.6.6 : Design the following using IC7490 :
1. MOD 97 counter
2. MOD 45 counter. (Dec. 12, 8 Marks, May 17, 6 Marks)
Soln. :

1. MOD 97 counter :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 97, connect the Q outputs which are equal to 1 in the count of 97 to an AND gate as shown in Fig. P. 7.6.6(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-6006) Fig. P. 7.6.6(a) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-97 counter is shown in Fig. P. 7.6.6(b).
(C-3604) Fig. P. 7.6.6(b)

2. MOD 45 counter :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 45, connect the Q outputs which are equal to 1 in the count of 45 to an AND gate as shown in Fig. P. 7.6.6(c) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-6007) Fig. P. 7.6.6(c) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-45 counter is shown in Fig. P. 7.6.6(d).
(C-3606) Fig. P. 7.6.6(d)

Ex. 7.6.7 : Design and draw logic diagram of Mod-82 counter using IC7490. Dec. 14, 6 Marks
in Fig. P. 7.6.6(b). counter using IC7490. Dec. 14, 6 Marks)

Soln. :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 82, connect the Q outputs which are equal to 1 in the count of 82 to an AND gate as shown in Fig. P. 7.6.7(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-4939) Fig. P. 7.6.7(a) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-82 counter is shown in Fig. P. 7.6.7(b).
(C-4940) Fig. P. 7.6.7(b) : MOD-82 using IC 7490

Ex. 7.6.8 : Design a MOD-11 counter using IC7490. Show states with the help of timing diagram. May 15, 7 Marks
Soln. :
MOD-11 counter using IC7490 :
Step 1 : To design a divide by 11 (MOD-11) counter we have to use two IC 7490 counter ICs.
Step 2 : Design of reset logic. Both ICs should reset as soon as the count is equal to 11 decimal.
 So in order to reset the counter at 11, connect the Q outputs which are equal to ‘1’ in the count of 11 to an AND gate as shown in Fig. P. 7.6.8(b).
(C-5129) Fig. P. 7.6.8
(a) Reset diagram
(b) To R0(1) and R0(2) of both IC7490
Step 3 : The logic diagram of the MOD-11 counter is shown in Fig. P. 7.6.8(c).
(C-5130) Fig. P. 7.6.8(c)
(C-5130) Fig. P. 7.6.8(c)


Timing diagram :
(C-803) Fig. P. 7.6.8(d)

Ex. 7.6.9 : Design and draw logic diagram of Mod 72 counter using IC 7490. Dec. 16, 6 Marks
Soln. :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 72, connect the Q outputs which are equal to 1 in the count of 72 to an AND gate as shown in Fig. P. 7.6.9(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-5713) Fig. P. 7.6.9(a) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-72 counter is shown in Fig. P. 7.6.9(b).
(C-5709) Fig. P. 7.6.9(b)


Ex. 7.6.10 : What is Mod counter ? Explain MOD-26 counter using IC 7490. Draw design for the same. Dec. 17, 6 Marks
Soln. :
 Refer section 7.5 for definition of Mod counter.
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 26, connect the Q outputs which are equal to 1 in the count of 26 to an AND gate as shown in Fig. P. 7.6.10 and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-6331) Fig. P. 7.6.10
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-26 counter is shown in Fig. P. 7.6.10(a).
(C-6322) Fig. P. 7.6.10(a) : MOD-26 counter

Ex. 7.6.11 : Design and draw MOD 56 counter using IC 7490 and explain its operation. May 18, 6 Marks
Soln. :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 56, connect the Q outputs which are equal to 1 in the count of 56 to an AND gate as shown in Fig. P. 7.6.11(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
(C-7151) Fig. P. 7.6.11(a) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-56 counter is shown in Fig. P. 7.6.11(b).
(C-7152) Fig. P. 7.6.11(b) : MOD-56 using IC 7490

Ex. 7.6.12 : Design MOD 93 counter using IC 7490. May 19, 6 Marks
Soln. :
1. MOD 93 counter :
Step 1 : Number of ICs required :
 Upto MOD-100, two IC 7490s will be sufficient.
Step 2 : Design of reset logic :
 Refer previous example for the procedure to design the reset logic.
 So in order to reset the counter at 93, connect the Q outputs which are equal to 1 in the count of 93 to an AND gate as shown in Fig. P. 7.6.12(a) and then connect the AND output to the R0(1) and R0(2) i.e. reset inputs of both the ICs.
Upto MOD - 100, two IC 7490s will be sufficient. the ICs.


(C-3603) Fig. P. 7.6.12(a) : Reset logic
Step 3 : Draw the logic diagram :
 The logic diagram of the MOD-93 counter is shown in Fig. P. 7.6.12(b).
(C-7935) Fig. P. 7.6.12(b)

7.7 Problems Faced by Ripple Counters :
 The two major problems associated with the ripple counters are as follows :
1. Generation of unwanted short duration pulses called glitch.
2. Propagation delay.

7.8 Synchronous Counters :
Definition :
 If the “clock” pulses are applied to all the flip-flops connected in a counter simultaneously, then such a counter is called as synchronous counter.
 These counters are also known as parallel counters. The state of all the flip flops will change simultaneously in the synchronous counter.

7.8.1 2-Bit Synchronous up Counter :
Logic diagram :
 A 2-bit or MOD-4 synchronous counter is shown in Fig. 7.8.1.
(C-815) Fig. 7.8.1 : A 2-bit (MOD-4) synchronous counter
 The JA and KA inputs of FF-A are tied to logic 1. So FF-A will work as a toggle flip-flop. The JB and KB inputs are connected to QA.
 Hence FF-B will toggle if QA = 1 and there won’t be any state change if QA = 0, at the instant when the negative clock edge is applied.

Operation :
 Initially let both the FFs be in the reset state. Let FF-A be the LSB flip-flop and FF-B be the MSB flip-flop.
 QB QA = 0 0 ….. Initially

At the 1st negative clock edge :
 As soon as the first negative clock edge is applied, FF-A will toggle and QA will change from 0 to 1.
Fig. 7.8.1. will toggle and QA will change from 0 to 1.


 But at the instant of application of the negative clock edge, QA = 0, so JB = KB = 0. Therefore FF-B will not change its state. So QB will remain 0.
 QB QA = 0 1 ….. After the first clock pulse

At the 2nd negative clock edge :
 At the instant when we apply the second negative clock edge, FF-A toggles again and QA changes from 1 to 0.
 But at this instant QA was 1. So JB = KB = 1 and FF-B will also toggle. Hence QB changes from 0 to 1.
 QB QA = 1 0 ….. After the second clock pulse

Next negative clock edges :
 Similarly on application of the third falling clock edge, FF-A will toggle from 0 to 1 but there is no change of state for FF-B.
 QB QA = 1 1 ….. After the third clock pulse
 On application of the next clock pulse, QA will change from 1 to 0 and QB will also change from 1 to 0. Hence
 QB QA = 0 0 ….. After the fourth clock pulse
 This is the original state. The operation of the counter will repeat after this. The operation is summarised in Table 7.8.1 and the timing diagram is shown in Fig. 7.8.2.
 In this way the 2-bit synchronous counter has four distinct states namely QB QA = 00, 01, 10 and 11.
 The maximum count of a 2-bit counter is (11)₂ or (3)₁₀.
(C-6224) Table 7.8.1 : Summary of operation of a 2-bit synchronous counter
(C-816) Fig. 7.8.2 : Timing diagram for a 2-bit synchronous counter

7.8.2 3-Bit Synchronous Binary up Counter :
Logic diagram :
 We can extend the principle of operation of a 2-bit synchronous counter to a 3-bit counter shown in Fig. 7.8.3.
(C-817) Fig. 7.8.3 : A 3-bit synchronous binary counter
 FF-A acts as a toggle FF since JA = KA = 1.
 QA output of FF-A is applied to JB as well as KB. Hence if QA = 1 at the instant of triggering, then FF-B will toggle but if QA = 0 then FF-B will not change its state.
 QA and QB are ANDed and the output of the AND gate is applied to JC and KC.
 Hence when QA and QB both are simultaneously high, then JC = KC = 1 and FF-C will toggle. Otherwise there is no change in the state of FF-C.

Operation :
 Initially all the FFs are in their reset state.
 QC QB QA = 0 0 0

1st clock pulse :
 FF-A toggles and QA changes to 1 from 0. But since QA = 0 at the instant of application of the 1st falling clock edge, JB = KB = 0 and QB does not change state.
 QB remains 0.
 Similarly QC also does not change state.
 QC = 0.
 QC QB QA = 001 ….. After the 1st clock pulse

2nd clock pulse :
 FF-A toggles and QA becomes 0.
 But at the instant of application of the 2nd falling clock edge QA was equal to 1. Hence JB = KB = 1. Hence FF-B will toggle and QB becomes 1.
 Output of the AND gate is 0 at the instant of the negative clock edge. So JC = KC = 0. Hence QC remains 0.
 QC QB QA = 0 1 0 ….. After the 2nd clock pulse

3rd clock pulse :
 After the 3rd clock pulse, the outputs are QC QB QA = 0 1 1.

4th clock pulse :
 Note that QB = QA = 1. Hence output of AND gate = 1 and JC = KC = 1, at the instant of application of the 4th negative edge of the clock.
 Hence on application of this clock pulse, FF-C will toggle and QC changes from 0 to 1.
 FF-A toggles as usual and QA becomes 0.
 Since QA was equal to 1 earlier, FF-B will also toggle to make QB = 0.
 QC QB QA = 1 0 0 ….. After the 4th clock pulse
 Thus the counting progresses.
 After the 7th clock pulse the output is 111 and after the 8th clock pulse, all the flip-flops toggle and change their outputs to 0. Hence QC QB QA = 0 0 0 after the 8th pulse and the operation repeats.
 Table 7.8.2 summarizes operation of the three bit synchronous counter.
(C-6225) Table 7.8.2 : Summary of operation of a 3-bit synchronous counter

Timing diagram :
 Timing diagram for a 3-bit synchronous counter is shown in Fig. 7.8.4.
 The number of states through which this counter progresses is 8 namely QC QB QA = 000, 001, 010, 011, 100, 101, 110, 111.
 The maximum count is (111)2 or (7)10.
 Note that the waveforms of synchronous counter are exactly same as those of an asynchronous counter.
(C-818) Fig. 7.8.4 : Timing diagram for a 3-bit synchronous counter

7.8.3 Design of the 3 Bit Synchronous Counter :
SPPU : May 12, Dec. 19.
University Questions.
Q. 1 Design 3-bit synchronous up-counter using MS JK-flip-flop. (May 12, 6 Marks)
Q. 2 Design 3-bit Synchronous up counter with JK flip-flops. (Dec. 19, 6 Marks)
 Let us now design a 3 bit synchronous counter first using the T flip flops and then using JK flip flops.

Design using T flip flops :

Steps to be followed :
Step 1 : Decide the number of flip flops.
Step 2 : Write the excitation table of T flip flop.
Step 3 : Write the excitation table of the counter.
Step 4 : From the circuit excitation table write K-maps and obtain simplified equations.
Step 5 : Draw the logic diagram.

Step 1 : Decide number of FFs :
 A 3 bit counter goes through 8 states. So it needs three flip flops.

Step 2 : Excitation table of T FFs :
 Table 7.8.3(a) shows the excitation table of T FF.


(C-7842) Table 7.8.3(a) : Excitation table of a T FF

Step 3 : State diagram and circuit excitation table :
 Count sequence for a 3 bit up counter is given in Table 7.8.3(b) and Fig. 7.8.5 shows the corresponding state diagram.
(C-6226) Table 7.8.3(b)
(C-819) Fig. 7.8.5 : State diagram
 Table 7.8.3(c) shows the circuit excitation table.
(C-7838) Table 7.8.3(c) : Circuit excitation table
 Table 7.8.3(c) has been written by referring to the present and next state of each output; the required value of the input is written as per the excitation table of the T FF.
 For example, consider the shaded columns of Table 7.8.3(c), i.e. QC, QC+1 and TC.
 Consider the first row. QC = 0, QC+1 = 0. So as per the excitation table of a T FF, TC should be 0. Similarly all other entries are made.

Step 4 : Write K maps and obtain simplified equations :
 Refer Figs. 7.8.6(a), (b), (c) for the K-maps corresponding to all the FF inputs. The simplified equations for TA, TB and TC also are shown.
(C-1506) Fig. 7.8.6 : K-maps for different FF inputs

Step 5 : Draw the logic diagram :
 By using the simplified equations for TA, TB and TC we can draw the logic diagram for the 3 bit synchronous counter shown in Fig. 7.8.7.
(C-1507) Fig. 7.8.7 : Logic diagram of a 3-bit synchronous counter
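 The circuit excitation table can also be generated mechanically, because for a T flip-flop the input is simply T = Q(present) XOR Q(next). The short Python sketch below is only an illustration added here (the function name is ours, not part of any standard library); it prints the T inputs for the 3-bit up count sequence, and K-map simplification of these columns is what reduces to TA = 1, TB = QA and TC = QA·QB.

    # Circuit excitation table of a 3-bit synchronous up counter built
    # with T flip-flops. A T flip-flop needs T = 1 exactly when its
    # output must change, i.e. T = Q_present XOR Q_next.

    def excitation_rows(n_bits=3):
        rows = []
        for state in range(2 ** n_bits):
            nxt = (state + 1) % (2 ** n_bits)               # next state of an up counter
            q  = [(state >> i) & 1 for i in range(n_bits)]  # q[0] = QA (LSB) ... q[2] = QC
            qn = [(nxt >> i) & 1 for i in range(n_bits)]
            t  = [q[i] ^ qn[i] for i in range(n_bits)]      # T = Q xor Q+ for every FF
            rows.append((q, t))
        return rows

    for q, t in excitation_rows():
        print("QC QB QA =", q[2], q[1], q[0], "   TC TB TA =", t[2], t[1], t[0])

    # Every row has TA = 1, TB equal to QA, and TC equal to QA AND QB,
    # in agreement with the simplified equations of Fig. 7.8.6.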


Design using JK flip flops :

Step 1 : Number of flip flops :
 For a 3 bit counter, we need 3 flip-flops.

Step 2 : Excitation table of JK FF :
(C-7761) Table 7.8.4 : Excitation table of JK FF

Step 3 : State diagram and circuit excitation table :
(C-1508) Fig. 7.8.8 : State diagram
(C-7050) Table 7.8.5 : Circuit excitation table

Step 4 : K maps and simplified expressions for all FF inputs :
Fig. 7.8.9 (cont...)
(C-1509) Fig. 7.8.9 : K-maps and simplified equations for different FF inputs

Step 5 : Logic diagram :
 Fig. 7.8.10 shows the logic diagram of a 3 bit synchronous counter using JK flip-flops.
(C-1510) Fig. 7.8.10 : 3 bit synchronous counter using JK FFs

7.8.4 Four Bit Synchronous Up Counter :
SPPU : Dec. 10.
University Questions.
Q. 1 Draw a 4-bit synchronous counter. Also explain timing diagram for the same. (Dec. 10, 10 Marks)

Logic diagram :
 All the concepts of a synchronous counter are extended to design a 4-bit synchronous counter shown in Fig. 7.8.11.
 Note that JD and KD of the FF-D are connected to the output of an AND gate and the inputs of this AND gate come from the outputs of the three preceding FFs.
 Operating principle of this counter is same as that of the 3-bit counter discussed earlier.
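 As a quick cross-check of the logic diagrams of Fig. 7.8.10 and Fig. 7.8.11, the behavioural Python sketch below (written only for this discussion, not taken from any library) models every flip-flop in toggle mode: stage 0 toggles on every clock edge, and a higher stage toggles only when all lower-order outputs are 1 at the instant the edge arrives.

    # Behavioural model of an n-bit synchronous binary up counter.
    # J = K = 1 for the first stage; for stage i, J = K = Q0.Q1...Q(i-1).

    def synchronous_up_counter(n_bits, n_clocks):
        q = [0] * n_bits                     # all flip-flops initially reset
        history = []
        for _ in range(n_clocks):
            # sample the toggle conditions before updating anything,
            # because all flip-flops receive the clock edge together
            toggle = [all(q[:i]) if i else True for i in range(n_bits)]
            q = [bit ^ int(t) for bit, t in zip(q, toggle)]
            history.append("".join(str(b) for b in reversed(q)))   # MSB first
        return history

    print(synchronous_up_counter(3, 8))   # ['001', '010', ..., '111', '000']
    print(synchronous_up_counter(4, 4))   # ['0001', '0010', '0011', '0100']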


(C-1391) Fig. 7.8.11 : A four bit synchronous counter

Timing diagram :

 The timing diagram of a four bit synchronous counter is as shown in Fig. 7.8.12.

 Note that the principle of frequency division is applicable to the synchronous counters as well.


(C-1392) Fig. 7.8.12 : Timing diagram of a 4-bit synchronous counter
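 The frequency-division property mentioned above is easy to check numerically: Q0 completes one full cycle every 2 clock pulses, Q1 every 4, Q2 every 8 and Q3 every 16, i.e. output Qi runs at fCLK / 2^(i+1). The Python fragment below is only an added illustration of this behaviour (it models the count value directly, not the gate-level circuit).

    # Each output of a binary counter divides the clock frequency:
    # Qi toggles every 2**i clocks, so its frequency is f_clk / 2**(i + 1).

    N_BITS, N_CLOCKS = 4, 64
    prev, transitions = [0] * N_BITS, [0] * N_BITS

    count = 0
    for _ in range(N_CLOCKS):
        count = (count + 1) % (2 ** N_BITS)
        q = [(count >> i) & 1 for i in range(N_BITS)]
        transitions = [t + (a != b) for t, a, b in zip(transitions, prev, q)]
        prev = q

    for i, t in enumerate(transitions):
        # two transitions make one full output cycle
        print(f"Q{i}: {t // 2} cycles in {N_CLOCKS} clocks  ->  f_clk / {N_CLOCKS // (t // 2)}")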

7.9 Modulo – N Synchronous Counters :
SPPU : May 06, Dec. 06.
University Questions.
Q. 1 What is MOD counter ? (May 06, Dec. 06, 5 Marks)
 We have already discussed the MOD-N asynchronous counters. Now let us discuss the Modulo-N synchronous counters.
 The steps to be followed to design a MOD-N synchronous counter are as follows.

Steps to be followed :
Step 1 : Decide the number of FFs and type of FF to be used.
Step 2 : Write the circuit excitation table.
Step 3 : From the circuit excitation table write down the K-maps and obtain simplified expressions for the outputs.
Step 4 : Draw the logic diagram and timing diagram.

7.9.1 Synchronous Decade Counter :
 For the design of a synchronous decade counter follow the steps given below :

Step 1 : Write the excitation table for T FF and circuit excitation table :
 Excitation table for T FF is shown in Table 7.9.1.
(C-7842) Table 7.9.1 : Excitation table for T flip-flop
 The circuit excitation table is shown in Table 7.9.2.


(C-7839)Table 7.9.2 : Circuit excitation table

(C-848) Fig. 7.9.2 : Logic diagram of a synchronous decade counter
Step 2 : K-maps and simplifications :
 K-maps for TD, TC, TB, TA and their simplified expressions are given in Figs. 7.9.1(a), (b), (c) and (d).

Ex. 7.9.1 : Design a synchronous counter for the sequence shown in Fig. P. 7.9.1(a). (May 06, 8 Marks)

(C-6227) Table P. 7.9.1(a) : Desired sequence


Pu K
ch
Te

Soln. :

Step 1 : Determine the desired number of FFs :

 From the given sequence the number of FFs is equal to


3. This is a MOD-5 synchronous counter since the
number of states is 5.

 The state diagram is shown in Fig. P. 7.9.1(a) which


shows that 101, 110, 111 are unwanted states.

(C-847) Fig. 7.9.1

Step 3 : Draw the logic diagram :


 Fig. 7.9.2 shows the logic diagram of a synchronous
decade counter. (C-849) Fig. P. 7.9.1(a) : State diagram


Step 2 : Write the excitation table and state table :

 The type of FF used is JK flip-flop. The excitation table


for a JK FF is as shown in Table P. 7.9.1(b).

 We have already seen how to write the excitation table


for JK FF.

Excitation table of JK FF :

 Table P. 7.9.1(b) exhibits the excitation table of a JK FF.

ns e
(C-7840) Table P. 7.9.1(b) : Excitation table of JK FF

io dg
at le
Circuit excitation table :

The circuit excitation table is as shown in


(C-850) Fig. P. 7.9.1(b) : K-maps and simplifications
ic w

Table P. 7.9.1(c). Step 4 : Logic diagram :

(C-7841) Table P. 7.9.1(c) : Circuit excitation table  The logic diagram of MOD-5 synchronous counter is
bl no

shown in Fig. P. 7.9.1(c).


Pu K
ch
Te

(C-851) Fig. P. 7.9.1(c) : Logic diagram of MOD-5


 Refer to the shaded portion of the circuit excitation synchronous counter
table. This is nothing but the excitation table of FF-C.
Step 5 : Draw the timing diagram :
The JC and KC values have been decided based on QC
 The timing diagram of MOD-5 synchronous counter is
and QC + 1.
shown in Fig. P. 7.9.1(d).
 Similarly the entries for JB and KB are based on QB and
QB + 1 whereas those for JA and KA are based on QA and
Q A + 1.

Step 3 : K-maps and simplifications :

 K-maps for the J and K inputs of all the FFs and the
corresponding simplified equations are shown in
Fig. P. 7.9.1(b).

 Note that the inputs to this combinational circuit are QA, QB, QC and outputs are JA, KA through JC, KC.
(C-852) Fig. P. 7.9.1(d) : Timing diagram of MOD-5 counter
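 The entries of the circuit excitation table can also be generated programmatically from the excitation rules of the JK flip-flop. The Python sketch below is an added illustration only (not part of the original solution); it walks through the desired sequence 000 → 001 → 010 → 011 → 100 → 000 and prints the required J and K values for each flip-flop, reproducing the pattern of Table P. 7.9.1(c).

    # Rebuild the circuit excitation table of the MOD-5 counter from the
    # desired sequence, using the JK excitation rules of Table P. 7.9.1(b):
    # 0->0 : J=0, K=X    0->1 : J=1, K=X    1->0 : J=X, K=1    1->1 : J=X, K=0

    def jk_inputs(q, q_next):
        if q == 0:
            return ("1" if q_next else "0", "X")
        return ("X", "0" if q_next else "1")

    sequence = [0b000, 0b001, 0b010, 0b011, 0b100]
    for present, nxt in zip(sequence, sequence[1:] + sequence[:1]):
        row = []
        for bit in (2, 1, 0):                              # QC, QB, QA
            q, qn = (present >> bit) & 1, (nxt >> bit) & 1
            row.append(jk_inputs(q, qn))
        (jc, kc), (jb, kb), (ja, ka) = row
        print(f"{present:03b} -> {nxt:03b} :  JC={jc} KC={kc}  JB={jb} KB={kb}  JA={ja} KA={ka}")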


Ex. 7.9.2 : Design a MOD-5 synchronous counter using


T flip-flops.
Soln. :

Step 1 : Decide number of FFs :

 Since the number of states is 5 we need to use

3 flip-flops.

Step 2 : Excitation tables :

ns e
 The excitation table of a T flip-flop is shown in

io dg
Table P. 7.9.2(a).

(C-7842) Table P. 7.9.2(a) : Excitation table of a T flip-flop

Fig. P. 7.9.2(a) : K-maps and simplifications

at le (C-853)

Step 4 : Draw the logic diagram :


ic w
 The logic diagram of MOD-5 synchronous counter using
T FFs is shown in Fig. P. 7.9.2(b).
Circuit excitation table :
bl no

 The circuit excitation table is shown in Table P. 7.9.2(b).

 Refer to the shaded portion of the circuit excitation

table.
Pu K

 This is nothing but the excitation table of FF-C. The TC


ch

values are decided based on QC and QC + 1.

 Similarly the entries for TB are based on QB and QB + 1

whereas those for TA are based on QA and QA + 1.


Te

(C-7843) Table P. 7.9.2(b) : Circuit excitation table
(C-854) Fig. P. 7.9.2(b) : MOD-5 synchronous counter using T flip-flops

7.10 UP / DOWN Synchronous Counter :

 The operating principle of a synchronous up down


counter is same as that of the up/down ripple counters.
But the design procedure and realization are different.

 The mode control (M) is used for selecting the mode of


operation.

 The steps to be followed for the design are as follows :

Steps to be followed :
Step 1 : Write the circuit excitation table.
Step 2 : Write K-maps and obtain the simplified expressions.
Step 3 : Draw the logic diagram.

Step 3 of Ex. 7.9.2 (continued) : K-maps and simplifications :
 K-maps for the TA, TB and TC inputs of the three FFs and the corresponding simplified equations are shown in Fig. P. 7.9.2(a).


7.10.1 3-bit Up/Down Synchronous Counter :

SPPU : Dec. 08, May 10.

University Questions.

Q. 1 Design and implement 3 bit up/down synchronous


counter using MS-JK flip-flop with its truth table.
Also draw timing diagram. (Dec. 08, 12 Marks)

ns e
Q. 2 Explain with a neat diagram working of 3-bit
up-down synchronous counter. Draw necessary

io dg
timing diagram. (May 10, 10 Marks)

 The number of FFs to be used is three. We will use three

toggle flip-flops.

 at le
Let upcounting takes place with M = 0 and down
ic w
counting takes place for M = 1.

Step 1 : Write the circuit excitation table : (C-855) Fig. 7.10.1 : K-maps and simplified expressions
bl no

 The circuit excitation table is shown in Table 7.10.1. Step 3 : Draw the logic diagram :

(C-6228) Table 7.10.1 : Excitation table for a 3-bit up/down  The logic diagram for a 3-bit synchronous up down
synchronous counter counter is shown in Fig. 7.10.2.
Pu K
ch
Te

(C-856) Fig. 7.10.2 : Logic diagram of a 3-bit synchronous up down counter
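 The behaviour realised by this circuit can be summarised in a few lines of Python. The sketch below is a behavioural model written for this section (using the convention stated above, M = 0 for up counting and M = 1 for down counting): in the up mode a stage toggles when all lower Q outputs are 1, and in the down mode it toggles when they are all 0.

    # Behavioural model of a 3-bit synchronous up/down counter.
    # M = 0 : count up (toggle when all lower Q outputs are 1)
    # M = 1 : count down (toggle when all lower Q outputs are 0)

    def up_down_counter(mode_per_clock, n_bits=3):
        q, states = [0] * n_bits, []
        for m in mode_per_clock:
            toggle = []
            for i in range(n_bits):
                lower = q[:i]
                toggle.append(all(lower) if m == 0 else not any(lower))
            q = [bit ^ int(t) for bit, t in zip(q, toggle)]
            states.append(int("".join(str(b) for b in reversed(q)), 2))
        return states

    # four clocks up followed by four clocks down:
    print(up_down_counter([0, 0, 0, 0, 1, 1, 1, 1]))    # [1, 2, 3, 4, 3, 2, 1, 0]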

7.10.2 Advantages of Synchronous Counter :

1. All the FFs receive the clock signal simultaneously i.e. at

the same instant, the propagation delay problem is

reduced to a great extent. Recall that the propagation

delay is one of the problems of the ripple counter.

Step 2 : K-maps and simplified equations : 2. The total propagation delay between the instant of
application of clock edge and the instant at which the
 The K-maps and simplified equations for TC , TB and TA
MSB output changes is equal to sum of propagation
are shown in Fig. 7.10.1.
delay of one FF and that of one AND gate.


 Total propagation delay = td of one FF + td of one gate Sr. Asynchronous Synchronous


Parameter
No. counter counter
3. This is much less than the propagation delay of an
asynchronous (ripple) counter. 4. Propagation P.D. = n  (td) P.D. = (td)FF +
delay where n is (td)gate.
Due to reduced propagation delay, the synchronous number of FFs It is much shorter
counters can operate at a much higher clock frequency and td is p.d. than that of
per FF. asynchronous
than the asynchronous counters.
counter.
7.10.3 Comparison of Synchronous and
5. Maximum Low because of High due to
Asynchronous Counters :

ns e
frequency the long shorter
SPPU : May 12, Dec. 12, of operation propagation propagation

io dg
May 14, Dec. 16, May 18. delay. delay.

University Questions.
7.11 Lock Out Condition :
Q. 1 What is the difference between synchronous
counters and asynchronous counters ? SPPU : Dec. 08.

at le
Q. 2
(May 12, Dec. 12, 4 Marks)

Explain the difference between asynchronous and


University Questions.
Q. 1 What is lock out condition ? (Dec. 08, 3 Marks)
ic w
synchronous counter and convert J-K flip flop into  A counter is supposed to follow the sequence of only
D-FF. Show the design. (May 14, 6 Marks) the desired states as shown in Fig. 7.11.1(a).
bl no

Q. 3 Explain the difference between asynchronous and  If it enters into an unused or unwanted state, then it is
synchronous counter and convert SR flip-flop into T expected to return back to a desired state.
flip-flop. Show the design. (Dec. 16, 6 Marks)
 Instead if the next state of an unwanted state is again an
Pu K

Q. 4 Compare Asynchronous counter with Synchronous unwanted state as shown in Fig. 7.11.1(b) then the
counter. Design MOD.11 up counter using IC counter is said to have gone into the lockout conditions.
74191. (May 18, 6 Marks)
ch

 Comparison of synchronous and asynchronous counters


is given in Table 7.10.2.
Table 7.10.2 : Comparison of synchronous
Te

and asynchronous counters

Sr. Asynchronous Synchronous


Parameter
No. counter counter

1. Circuit Logic circuit is With increase in (a) State diagram showing (b) State diagram showing
complexity simple number of states, the desired sequence the lockout condition
the logic circuit
becomes (C-831) Fig. 7.11.1
complicated.
7.11.1 Bushless Circuit : SPPU : Dec. 08.
2. Connection Output of the There is no
pattern preceding FF, connection University Questions.
is connected to between output of
clock of the preceding FF and Q. 1 How lock out condition avoided ?
next FF. CLK of next one. (Dec. 08, 3 Marks)
3. Clock input All the FFs are All FFs receive
Definition :
not clocked clock signal
simultaneously. simultaneously.  The sequential circuit which enters into the lockout
condition is called as the bushless circuit.


How to avoid lockout situation ? Soln. :


Step 1 : Draw the state diagram :
 In order to avoid the lockout condition, we have to use
some additional logic circuit to ensure that the next  The state diagram is shown in Fig. P. 7.11.1(a). It shows

state of the counter is its initial state, if it enters into an that the next state of all the unused states (1, 3 and 5) is
undesired state. forced to be the initial state (0).

 Thus the next state of each unwanted state should be  This is essential in order to avoid the lockout condition.
the initial state as shown in Fig. 7.11.2(a).

ns e
io dg
(C-833) Fig. P. 7.11.1(a) : State diagram of desired counter

at le Step 2 : Number of flip-flops and type of flip-flop :

 Since the highest state is 7 i.e. 111 we need to use three


ic w
(a)
flip-flops. Let us use JK flip-flops.

Step 3 : Write the state table :


bl no

 Table P. 7.11.1(a) shows the state table for the required

counter.
Pu K

(C-8053)Table P. 7.11.1(a)
ch

(b)
(C-832) Fig. 7.11.2 : Ways to avoid lockout condition

 However it is not necessary to terminate all the


Te

undesired states into the initial state as shown in

Fig. 7.11.2(a).

 It will be sufficient to terminate only one unused state


Step 4 : K maps and simplification :
into the initial state as shown in Fig. 7.11.2(b).
 Fig. P. 7.11.1(b) illustrates the K-maps for all the FF
 If the circuit enters into say state “3” then its next states
inputs and the corresponding simplified expressions.
will be “5” and “7”. As “7” has next state of “0” which is

the desired state, the lockout condition is avoided.

Sequential counter design using JK and T flip flops :

Ex. 7.11.1 : By using suitable flip-flops design a counter to


go through the following states.

0–2–4–6–7–0

Avoid the lockout condition.


Fig. P. 7.11.1(b) (cont...)


Ex. 7.12.1 : For the given state diagram in Fig. P. 7.12.1(a)

of a counter draw the bush diagrams required

to bring the counter from invalid to a valid state


in one, two or three clock cycles.

ns e
Fig. P. 7.12.1(a) : Given state diagram

io dg
Soln. :
 Refer Figs. P. 7.12.1(b) to (d).

Fig. P. 7.12.1(b) shows that the next state of each invalid

at le
(C-834) Fig. P. 7.11.1(b) : K-maps and simplification

Step 5 : Draw the logic diagram :


state 1, 3 or 5 is the initial state 0. Hence even if the

counter enters into an unused state, it will return to a


ic w
 The logic diagram for the required counter is shown in valid state after only one clock cycle.

Fig. P. 7.11.1(c).
bl no
Pu K
ch

(C-842) Fig. P. 7.12.1(b) : Entry from invalid to valid state


after only one clock pulse

(C-835) Fig. P. 7.11.1(c) : Logic circuit of the required counter  Similarly for the state diagram shown in Fig. P. 7.12.1(c),
Te

7.12 Bush Diagram : it will take only one clock pulse to bring the counter into

a used state if it enters into an unused one.


 In the previous section we have discussed the meaning

of lock-out condition.

 The lock-out condition can be avoided by taking a

precaution called “bushing”. Bushing means adding

branches in the state diagram.

 The invalid states of a counter are branched in the bush

diagram in such a way that even if the counter enters


(C-842) Fig. P. 7.12.1(c) : Entry from invalid to valid state
into an invalid state, it will return to one of its valid after only one clock pulse
states after one, two or three clock cycles.
 Refer the state diagram of Fig. P. 7.12.1(d). If the counter
 The concept of bushing can be well understood by
enters into state “7” then it will return to “0” after one
referring to the following example.
clock pulse.


 If it is in the state “5” then it will return to 0 state after  0, 2, 4 and 5 are the unused states. They are forcibly

two clock pulses and if it is in the state “1” then it will terminated into 1, 3, 7 and 6 respectively.

return to “0” state after three clock pulses. Step 1 : Number of FFs and the type of FF :

 Let us use T-type FF.

 Since the highest state in the state diagram is 7, we

have to use three T flip-flops.

ns e
Step 2 : Write the circuit excitation table :

io dg
 Let us obtain the circuit excitation table from the bush
(C-842) Fig. P. 7.12.1(d) : Entry from invalid to valid state diagram of Fig. P. 7.12.2(b).
after 1, 2 or 3 clock cycles
 The excitation table for a T flip-flop is shown in

at le
Ex. 7.12.2 : Design a synchronous counter from the state
diagram shown in Fig. P. 7.12.2(a). Avoid
lockout condition. Draw the bush diagram to
Table P. 7.12.2(a) and the circuit excitation table is

shown in Table P. 7.12.2(b).


ic w
avoid the lockout condition. (C-7842) Table P. 7.12.2(a) : Excitation table for a toggle FF
bl no
Pu K

(C-843) Fig. P. 7.12.2(a) : Given state diagram (C-6229) Table P. 7.12.2(b) : Circuit excitation table
ch

Soln. :

 So as to avoid the lockout condition, we have to ensure

that the counter should return back to a valid state, as


Te

soon as it enters into an unused state. The bush

diagram is shown in Fig. P. 7.12.2(b).

Step 3 : Draw the K-maps and obtain simplified

expressions :

 Fig. P. 7.12.2(c) shows the K-maps for various T inputs.

The simplified Boolean expressions also are given

alongwith the K-maps.


(C-844) Fig. P. 7.12.2(b) : Bush diagram


6. In the digital triangular wave generator.

7. In the frequency divider circuits.

Review Questions

Q. 1 What is counter ? What are its types ?

Q. 2 Explain the terms “synchronous” and

ns e
“asynchronous”.

io dg
Q. 3 What is the difference between synchronous and
asynchronous counter ?

Q. 4 What are the merits and demerits of the

at le synchronous
counter ?
counter over the asynchronous
ic w
Q. 5 State the procedure for designing mod counters
(C-845) Fig. P. 7.12.2(c) : K-maps and simplifications
from N-bit ripple counter.
Step 4 : Logic diagram :
bl no

Q. 6 What is the similarity between a decade counter and


 The logic diagram is as shown in Fig. P. 7.12.2(d).
a binary counter ? Explain in brief.

Q. 7 What are the applications of counter ?


Pu K

Q. 8 How is a counter different from a register ?


ch

Q. 9 Write the count sequence of three bit binary down

counter.

Q. 10 How does a counter works as frequency divider ?


Te

Q. 11 What is synchronous counter ?

Q. 12 Define modulus of counter. State the states of MOD

5 counter.
(C-846) Fig. P. 7.12.2(d) : Logic diagram of
the required counter Q. 13 Explain the working of 4-bit ripple counter

(asynchronous counter) using T flip-flop with


7.13 Applications of Counters :
suitable circuit diagram and timing diagram.
Some of the applications of counters are :
Q. 14 Write a note on Mod-8 synchronous up-counter
1. In digital clock.
using T flip-flop.
2. In the frequency counters.
Q. 15 Write a note on Mod-8 synchronous down counter
3. In time measurement.
using T flip-flop.
4. In digital voltmeters.
Q. 16 Draw the divide by 7 asynchronous up counter using
5. In the counter type A to D converter.
T flip flop. Write truth table. Draw timing diagram.


Q. 17 Draw the circuit for mod-12 counter. Explain the Q. 20 Design a 3 bit synchronous counter using JK fillip
same with neat waveforms. flops.

Q. 18 Compare synchronous and ripple counters. Q. 21 What is lock out ? How is it avoided ?
Q. 22 Explain decade counter.
Q. 19 Compare counters and registers.





Unit 3

Chapter 8

Registers

Syllabus
Shift register types (SISO, SIPO, PISO & PIPO) & applications.

Chapter Contents
8.1 Introduction
8.2 Data Formats
8.3 Classification of Registers
8.4 Buffer Registers
8.5 Shift Registers
8.6 Serial In Parallel Out (SIPO)
8.7 Parallel In Serial Out Mode (PISO)
8.8 Parallel In Parallel Out (PIPO)
8.9 Bidirectional Shift Register
8.10 Applications of Shift Registers
8.11 Sequence Detector


8.1 Introduction :  Registers are classified on the basis of how the data bits
are entered and taken out from a register. There are four
Definition : possible modes as follows :

 In chapter 6, we have stated various applications of


flip-flops. One of them was “registers”.

 Flip-flop is a 1 bit memory cell which can be used for


storing the digital data.

ns e
 To increase the storage capacity in terms of number of
bits, we have to use a group of flip-flops. Such a group

io dg
of flip-flops is known as a register.

 Thus register is a group of flip-flops. The “n-bit” register


will consist of “n” number of flip-flops and it is capable
Fig. 8.3.1 : Modes of operation of registers

8.2
at le
of storing an “n bit” word.

Data Formats :

(C-729(i))

We can design all these registers using discrete


flip-flops such as SR, JK or D flip-flops.
ic w
 The data can be entered in serial (one bit at a time)  But the registers are also available in the IC form.
manner or in the parallel form (all the bits at the same  Some of the registers available in 54/74 TTL series as
bl no

time) into a register. listed in Table 8.3.1.


 And the stored data can be retrieved in the serial or Table 8.3.1 : Shift registers available in 74/54 series
parallel form.
IC number Description
Pu K

 A four bit digital data 1 0 1 1 is represented in the serial


7491, 7491A 8 bit serial in, serial out
and parallel forms in Fig. 8.2.1.
ch

7494 4 bit parallel in, serial out


7495 4 bit serial/parallel in, parallel out
(shift right shift left)
7496 5 bit parallel in / parallel out, serial in /
Te

serial out.

8.4 Buffer Registers :

Logic diagram :

 The simplest type of register constructed using four


D flip-flops is shown in Fig. 8.4.1.
 This is a 4 bit register, but we can construct an n-bit
(C-729(h)) Fig. 8.2.1 : Data representation in serial register by following the same principle.
and parallel forms  This register is also called as the buffer register.
 Each D-flip-flop is negative edge triggered and all the
8.3 Classification of Registers :
flip-flops are connected to a common clock signal.
SPPU : Dec. 04. Hence all of them are triggered at the same instant of
time, i.e. all the flip flops will change their state at the
University Questions.
same instant of time.
Q. 1 Explain different modes of universal shift register  Buffer registers are used for temporary storage of digital
with application of each. (Dec. 04, 6 Marks) words.


Operation : Conclusions :

 Let us assume that the word to be stored is  Some of the important conclusions from the discussion
B3 B2 B1 B0 = 1 0 1 0. till now are as follows :

1. There must be one-FF for each bit to be stored.


 These bits are connected to the D inputs of the four D
Therefore to store a 4 bit number we need four
flip-flops as shown in Fig. 8.4.1.
flip-flops.
 Then the clock pulse is applied. 2. Note that all the four input bits B3 B2 B1 B0 are loaded
 Corresponding to the first negative edge of the clock into the buffer register simultaneously, i.e. at the same

ns e
pulse, the outputs of all the D flip-flops will follow their instant of time.

io dg
respective inputs. 3. Therefore this way of applying the input and taking the
output is called as Parallel Input Parallel Output (PIPO)
 Q3 Q2 Q1 Q0 = B3 B2 B1 B0 = 1 0 1 0
and the mode of operation is called as parallel shifting.

at le 8.5

Definition :
Shift Registers :
ic w
 The binary data in a register can be moved within the
register from one flip-flop to the other or outside it with
bl no

application of clock pulses.

 The registers that allow such data transfers are called as


shift registers.
Pu K

 Shift registers are used for data storage, data transfer


(C-729(j)) Fig. 8.4.1 : A four bit buffer register
and certain arithmetic and logic operations.
using D flip-flops
ch

Modes of operation of a shift register :


 Even if the inputs are now changed, the output remains
 The various modes in which a shift register can operate
latched to 1 0 1 0 till the next negative edge of the clock
are as follows :
Te

arrives at the input.


1. Serial Input Serial Output. (SISO).
 Thus the buffer register is capable of storing the digital
2. Serial Input Parallel Output. (SIPO).
data.
3. Parallel In Serial Out. (PISO).
Schematic diagram :
4. Parallel In Parallel Out. (PIPO).
 The schematic diagram of a buffer register is as shown  These modes are explained in brief in Table 8.5.1.
in Fig. 8.4.2.
Table 8.5.1 : Brief explanation of various modes of
shift register

Sr. Illustrative
Mode Comments
No. diagram

1. Serial input Refer Fig. A Data bits shift from left


serial to right by 1 position
output per clock cycle.
(serial shift
(C-730) Fig. 8.4.2 : Schematic diagram of buffer register right)


Sr. Illustrative Q. 2 Draw and explain 4 bit SISO and SIPO shift
Mode Comments register. Give applications of each.
No. diagram
(May 18, 6 Marks)
2. Serial input Refer Fig. B Data bits shift from
Q. 3 Draw and explain SISO and PIPO type of shift
serial right to left by 1
register. Give application of each.
output position per clock.
(May 19, 6 Marks)
(serial shift
left) Principle :

3. Serial input Refer Fig. C All output bits are  Data bits shift from right to left by 1 position per clock

ns e
parallel made available cycle as shown in Fig. 8.5.1(a).
output simultaneously after 4-

io dg
clock pulses.

4. Parallel Refer Fig. D All inputs are loaded


input serial simultaneously but (C-731) Fig. 8.5.1(a) : Principle of Serial shift left register
output output bit by bit.

5. at le
Parallel
input
Refer Fig. E All inputs are loaded
simultaneously and are
Logic diagram :

 The serial input serial output type shift register with shift
ic w
parallel available at the output left mode is shown in Fig. 8.5.1(b).
output simultaneously.
bl no

(C-731) Fig. A
Pu K

(C-732) Fig. 8.5.1(b) : Serial shift left register


ch

 Let all the flip-flops be initially in the reset condition


(C-731) Fig. B
i.e. Q3 = Q2 = Q1 = Q0 = 0.

 We are going to illustrate the entry of a four bit binary


Te

number 1 1 1 1 into the register.

 When this is to be done, this number should be applied


(C-731) Fig. C (C-731) Fig. D to “Din” bit by bit with the MSB bit applied first.

 The D input of FF-0 i.e. D0 is connected to serial data


input (Din). Output of FF-0 i.e. Q0 is connected to the
input of the next flip-flop i.e. D1 and so on.

(C-3167) Fig. E Operation :

 Before application of clock signal let Q3 Q2 Q1 Q0 = 0 0 0


8.5.1 Serial Input Serial Output
0 and apply MSB bit of the number to be entered to Din.
(Shift Left Mode) :
So Din = D0 = 1.
SPPU : May 05, May 18, May 19
 Apply the clock. On the first falling edge of clock, the FF-
University Questions.
0 is set and the stored word in the register is
Q. 1 Draw and explain the circuit diagram of 3-bit
Q3 Q2 Q1 Q0 = 0 0 0 1
register with Serial Left Shift. (May 05, 4 Marks)


Important note :

Using the SISO mode, we needed 4-clock pulses to store a


4-bit word. So in general we can conclude that it requires n
number of clock pulses to store an n bit word using SISO
mode.
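 The shift-left operation just described can be captured in a very small behavioural model. The Python fragment below is an added illustration (the function is ours): on every clock edge FF-0 takes the value of Din and every other flip-flop copies the output of the stage to its right.

    # Behavioural model of the 4-bit serial-in serial-out (shift-left)
    # register of Fig. 8.5.1(b). State is held as [Q3, Q2, Q1, Q0].

    def shift_left(register, din):
        q3, q2, q1, q0 = register
        return [q2, q1, q0, din]            # Q3<-Q2, Q2<-Q1, Q1<-Q0, Q0<-Din

    reg = [0, 0, 0, 0]                      # all flip-flops initially reset
    for bit in [1, 1, 1, 1]:                # enter the word 1111, one bit per clock
        reg = shift_left(reg, bit)
        print("Q3 Q2 Q1 Q0 =", *reg)
    # Four clock pulses are needed before the register holds 1 1 1 1,
    # matching Table 8.5.2 and the note above.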
(C-733) Fig. 8.5.2 : Shift register status after
first falling clock edge

 Apply the next bit to Din. So Din = 1.

ns e
 As soon as the next negative edge of the clock hits,

io dg
FF-1 will set and the stored word changes to,
Q3 Q2 Q1 Q0 = 0 0 1 1

at le
ic w
(C-738) Fig. 8.5.4 : Waveforms for the shift left operation
bl no

(C-734) Fig. 8.5.3 : Shift register status after the second


8.5.2 Serial In Serial Out (Shift Right Mode) :
falling edge of clock
 Apply the next bit to be stored i.e. 1 to Din. SPPU : May 05.
Pu K

University Questions.
 Apply the clock pulse. As soon as the third negative
Q. 1 Draw and explain the circuit diagram of 3-bit
clock edge hits, FF-2 will be set and the output get register with Serial Right Shift. (May 05, 5 Marks)
ch

modified to, Q3 Q2 Q1 Q0 = 0 1 1 1.
Principle :
 Similarly with Din= 1, and with the fourth negative clock
 Data bits shift from left to right by 1 position per clock
edge arriving, the stored word in the register is, Q3 Q2
Te

cycle as shown in Fig. 8.5.5(a).


Q1 Q0 = 1 1 1 1.

(C-737) Table 8.5.2 : Summary of shift left operation


(C-731) Fig. 8.5.5(a) : Principle of Serial shift right register

Logic diagram :

 The serial input serial output type shift register with shift

right mode is shown in Fig. 8.5.5(b).

 Let all the flip-flops be initially in the reset condition


i.e. Q3 = Q2 = Q1 = Q0 = 0.

 We are going to illustrate the entry of a four bit binary

number 1 1 1 1 into the register.


Waveforms for shift left operation :
 When this is to be done, this number should be applied
 The waveforms for the shift left operation are shown in
to “Din” bit by bit with the LSB bit applied first.
Fig. 8.5.4.


 Apply the clock pulse. As soon as the third negative


clock edge gets applied, FF-1 will be set and the output
get modified to,
Q3 Q2 Q1 Q0 = 1 1 1 0

 Similarly with Din = 1 and with the fourth negative clock


edge arriving, the stored word in the register is
(C-739) Fig. 8.5.5(b) : Serial shift right register
Q3 Q2 Q1 Q0 = 1 1 1 1
 The D input of FF-3 i.e. D3 is connected to serial data
 Table 8.5.3 summarizes the shift right operation.
input (Din). Output of FF-3 i.e. Q3 is connected to the

ns e
input of the next flip-flop i.e. D2 and so on. (C-744) Table 8.5.3 : Summary of shift right operation

io dg
Operation :
 Before application of clock signal let Q3 Q2 Q1 Q0 = 0 0 0
0 and apply LSB bit of the number to be entered to Din.

 at le
So Din = D3 = 1.

Apply the clock. On the first falling edge of clock, the FF-
3 is set, and the stored word in the register is,
ic w
Q3 Q2 Q1 Q0 = 1 0 0 0

Waveforms for shift right operation :


bl no

 The waveforms for shift right operation are as shown in


Fig. 8.5.8.

Important note : Using the SISO mode, we needed 4-clock


Pu K

pulses to store a 4-bit word. So in general we can conclude


that it requires n number of clock pulses to store an n bit
(C-740) Fig. 8.5.6 : Shift register status after first
ch

word using SISO mode.


falling clock edge

 The shift register after the application of the first clock


pulse is as shown in Fig. 8.5.6.
Te

 Apply the next bit to Din. So Din = 1.

 As soon as the next negative edge of the clock is


applied, FF-2 will set and the stored word changes to,

Q3 Q2 Q1 Q0 = 1 1 0 0

 This is as shown in Fig. 8.5.7.

(C-745) Fig. 8.5.8 : Waveforms for the shift right operation

8.5.3 Applications of Serial Operation :


SPPU : May 18, May 19
University Questions
(C-741) Fig. 8.5.7 : Shift register status after the
Q. 1 Draw and explain 4 bit SISO and SIPO shift
second falling edge of clock
register. Give applications of each.
 Apply the next bit to be stored i.e. 1 to Din. (May 18, 6 Marks)


Q. 2 Draw and explain SISO and PIPO type of shift  As soon as the loading is complete, and all the flip-flops
register. Give application of each.
contain their required data, the outputs are enabled so
(May 19, 6 Marks)
that all the loaded data is made available over all the
 The transmission of data from one place to the other
output lines simultaneously.
takes place in serial manner as shown in Fig. 8.5.9. The
serial shifting of data transmits one bit at a time per  Number of clock cycles required to load a four bit word
clock cycle. is 4. Hence the speed of operation of SIPO mode is
 It takes a longer time for serial transmission, because

ns e
same as that of SISO mode.
the time required to transmit one bit is equal to the time

io dg
corresponding to one clock cycle.

 The parallel transmission is much faster as it can


transmit 8 bits per clock cycle. But it needs more wires
for connecting a transmitter to receiver.

at le
However for long distance communication where the
distances are in kilometres, serial communication has an
ic w
advantage that only one conductor is required to be
used for the data transfer. (C-747) Fig. 8.6.1 : Serial input parallel output mode
bl no

8.7 Parallel In Serial Out Mode (PISO) :


SPPU : Dec. 08, May 10, Dec. 17.

University Questions.
Pu K

(C-746) Fig. 8.5.9 : Application of serial operation Q. 1 Draw and explain the circuit diagram of 3 bit
register with the following facility :
ch

8.6 Serial In Parallel Out (SIPO) :


1. Parallel in serial out
SPPU : May 18
2. Reset (Dec. 08, 6 Marks)
University Questions
Q. 2 Explain with a neat diagram working of parallel in
Te

Q. 1 Draw and explain 4 bit SISO and SIPO shift serial out 4-bit shift register. Draw necessary timing
register. Give applications of each.
diagram. (May 10, Dec. 17, 6 Marks)
(May 18, 6 Marks)
Principle :
Principle :

 In this operation the data is entered serially and taken  In this mode, the bits are entered in parallel
out in parallel as shown in the following figure. i.e. simultaneously into a shift register as shown below.

(C-731)

 In this operation the data is entered serially and taken (C-731)


out in parallel.
Logic diagram :
 That means first the data is loaded bit by bit. The
outputs are disabled as long as the loading is taking  The circuit shown in Fig. 8.7.1 is a four bit parallel input

place. serial output register.


the data input terminal and at Q3 we get the serial data


output.

 Thus the parallel in serial out operation takes place.
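 The load/shift behaviour of this circuit can be modelled compactly. The Python sketch below is an illustration only (the helper name and list ordering are our own choices): one clock pulse with shift/load = 0 loads B0 to B3 in parallel, after which clock pulses with shift/load = 1 move the word towards Q3, where it leaves the register serially.

    # Behavioural model of the 4-bit parallel-in serial-out register of
    # Fig. 8.7.1. State is held as [Q0, Q1, Q2, Q3]; Q3 is the serial output.

    def piso_clock(reg, shift_load, parallel_word=None, din=0):
        if shift_load == 0:                   # load mode: B0..B3 enter together
            return list(parallel_word)
        q0, q1, q2, q3 = reg                  # shift mode: data moves towards Q3
        return [din, q0, q1, q2]

    reg = piso_clock([0, 0, 0, 0], 0, [1, 0, 1, 1])   # load B0 B1 B2 B3 = 1 0 1 1
    serial_out = []
    for _ in range(4):                         # four clocks shift the word out
        serial_out.append(reg[3])              # B3 appears first at Q3
        reg = piso_clock(reg, 1)
    print("serial output (B3 first):", serial_out)    # [1, 1, 0, 1]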

8.8 Parallel In Parallel Out (PIPO) :


SPPU : May 19
University Questions
Q. 1 Draw and explain SISO and PIPO type of shift

ns e
register. Give application of each.
(May 2019, 6 Marks)

io dg
(C-748) Fig. 8.7.1 : Parallel in serial out shift register Principle :

 All inputs are loaded simultaneously and are available at


 Output of previous FF is connected to the input of the
the output simultaneously as shown below.

 at le
next one via a combinational circuit.
The binary input word B0, B1, B2, B3 is applied through

the same combinational circuit.


ic w
 There are two modes in which this circuit can work
namely shift mode or load mode.
bl no

(C-3167) Fig. 8.8.1(a) : Principle of PIPO register

Load mode : Logic diagram :


 In order to load the word B3 B2 B1 B0 into the shift  Fig. 8.8.1(b) demonstrates the parallel in parallel out
register we have to select the load mode by setting mode of operation.
Pu K

shift/ load input to 0.


 The 4 bit binary input B0 B1 B2 B3 is applied to the data
ch

 When the shift / load line is low (0), the AND gates 2, inputs D0, D1, D2 and D3 respectively of the four
4 and 6 become active. flip-flops.
 They will pass B1, B2 and B3 bits to the inputs D1, D2 and
 As soon as a negative clock edge is applied, the input
Te

D3 of the corresponding flip-flops. D0 is directly


binary bits will be loaded into the flip-flops
connected to B0.
simultaneously.
 On the low going edge of clock, the binary inputs B0 B1
B2 B3 will get loaded into the corresponding flip-flops.
Thus parallel loading takes place.

Shift mode :

 In order to operate the shift register in the shift mode


we have to select the shift mode by applying a logic 1 to
the shift/ load input.

 When the shift / load line is high (1), the AND gates 2,

4, 6 become inactive. Hence the parallel loading of the (C-749) Fig. 8.8.1(b) : Parallel in parallel out shift register

data becomes impossible.


 The loaded bits will appear simultaneously to the output
 But the AND gates 1, 3 and 5 become active. Therefore
side. Only one clock pulse is essential to load all the bits.
the shifting of data from left to right bit by bit on
application of clock pulses becomes possible. D0 acts as Therefore PIPO mode is the fastest mode of operation.


Applications of PIPO shift register :  Hence if we want to use the shift register to multiply and
divide the given binary number, then we should be able
PIPO shift registers can be used as :
to move the data in either left or right direction as and
1. A temporary storage device.
when we want.
2. As a delay element.
 Such a register is called as a bi-directional register. A
8.9 Bidirectional Shift Register :
four bit bi-directional shift register is shown in Fig. 8.9.1.
SPPU : May 06, Dec. 11, Dec. 13.  There are two serial inputs namely the serial right shift
University Questions. data input DR and the serial left shift data input DL

ns e
Q. 1 Draw and explain 4-bit shift register having shift left alongwith a Mode control input (M).

io dg
and shift right facilities. Explain any one
application of such register. (May 06, 6 Marks) Operation :
Q. 2 Draw and explain 4 bit bidirectional shift register. With M = 1 : Shift right operation
(Dec. 11, 8 Marks)
 If M = 1, then the AND gates 1, 3, 5 and 7 are enabled


at le
Q. 3 Explain with a neat diagram working of 3 bit
bidirectional shift register. (Dec. 13, 6 Marks)

If a binary number is shifted left by one position then it


whereas the remaining AND gates 2, 4, 6 and 8 will be
disabled.
ic w
 Hence the data at DR (shift right input) is shifted right bit
is equivalent to multiplying the original number by 2.
by bit from FF-3 to FF-0 on the application of clock
 On the other hand if a binary number is shifted right by
bl no

pulses.
one position then it is equivalent to dividing the original
 Thus with M = 1 we get the serial right shift operation.
number by 2. This is illustrated below.
With M = 0 : Shift left operation
 Let a four bit number Q3 Q2 Q1 Q0 = 0010 = (2)10 is
Pu K

existing in a shift register.  When the mode control M is connected to “0” then the

 Now with a 0 applied at the input, if we shift these AND gates 2, 4, 6 and 8 are enabled while 1, 3, 5 and 7
ch

contents by one position to left then we get Q3 Q2 Q1 are disabled.

Q0 = 0100 = (4)10.  Therefore the data at DL (shift left input) is shifted left bit

 Thus the shift left is equivalent to multiplying by 2. Now by bit from FF-0 to FF-3 on application of the clock
Te

shift it right by one position to get Q3 Q2 Q1 Q0 = 0001 = pulses.

(1)10. Thus shifting right is equivalent to dividing by 2.  Thus with M = 0 we get the serial left shift operation.
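 The same arithmetic can be confirmed with ordinary shift operators, which is exactly what the register does in hardware. The fragment below is a small added illustration; the 4-bit mask simply keeps the result within the width of the register.

    # Shifting a stored word left by one position doubles it; shifting it
    # right by one position halves it (integer division by 2).

    WIDTH = 4

    def shift_left_once(q):
        return (q << 1) & (2 ** WIDTH - 1)     # a 0 enters at the LSB end

    def shift_right_once(q):
        return q >> 1                          # a 0 enters at the MSB end

    q = 0b0010                                 # (2)10 held in the register
    print(format(shift_left_once(q), "04b"))   # 0100 = (4)10 = 2 x 2
    print(format(shift_right_once(q), "04b"))  # 0001 = (1)10 = 2 // 2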

 Note that M should be changed only when CLK = 0,

otherwise the data stored in the register may get

changed in an undesirable manner.

8.9.1 A 3-bit Bidirectional Register using the


JK Flip Flops : SPPU : Dec. 05.

University Questions.

Q. 1 Draw the circuit diagram of 3-bit bi-directional shift


register. (Dec. 05, 6 Marks)

Logic diagram :

 A 3-bit bi-directional shift register using JK flip flops is


(C-750) Fig. 8.9.1 : A 4-bit bi-directional shift register shown in Fig. 8.9.2.


 This circuit is same as the 4 bit bi-directional shift 4. Serial to parallel converter.
register discussed in the previous section. 5. Parallel to serial converter.
 The only modification is the inclusion of an inverter 6. Ring counter.
between the J and K inputs of each flip flops.
7. Twisted ring counter or Johnson counter.
 With M = 1 the shift right operation will take place as
8. Serial data transmission.
discussed in the previous section and with M = 0 the
 We have already seen the use of shift register for
shift left operation is going to take place.
temporary storage of data and for multiplication or

ns e
division.

8.10.1 Serial to Parallel Converter :

io dg
.SPPU : Dec. 08.
University Questions.
Q. 1 Explain how shift registers are used as serial to

at le 
parallel converters ? (Dec. 08, 3 Marks)

In many applications, the data in serial form needs to be


ic w
converted in the parallel form.
bl no

 The Serial Input Parallel Output Mode (SIPO) of the shift

register is used for such applications.

8.10.2 Parallel to Serial Converter :


Pu K

(C-751) Fig. 8.9.2 : A 3 bit bi-directional shift .SPPU : Dec. 08.

register using JK flip flops University Questions.


ch

Q. 1 Explain how shift registers are used as parallel to


8.10 Applications of Shift Registers :
serial converter ? (Dec. 08, 3 Marks)

SPPU : May 06, May 10, May 18.  The data communication between two computers takes
Te

University Questions. place in the serial transmission form.


Q. 1 Draw and explain 4-bit shift register having shift left  But internally the data processing takes place in the
and shift right facilities. Explain any one
parallel form.
application of 4-bit shift register.
 Hence we need to use a parallel to serial converter to
(May 06, 6 Marks)
convert the internal parallel data into a serial data
Q. 2 Give any four applications of shift registers.
suitable for transmission.
(May 10, 4 Marks)
Q. 3 Draw and explain 4 bit SISO and SIPO shift  We can use the parallel to serial mode of the shift
register. Give applications of each. register for this operation.
(May 18, 6 Marks) 8.10.3 Ring Counter :

 Some of the common applications of a shift SPPU : Dec. 08, May 17.
register are : University Questions.
1. For temporary data storage. Q. 1 Write short note on ring counter.

2. For multiplication and division. (Dec. 08, 6 Marks)


Q. 2 Explain with a neat diagram ring counter.
3. As a delay line.
(May 17, 6 Marks)


Logic diagram : On the second falling edge of clock :

 Fig. 8.10.1 shows a typical application of shift registers  At the second falling edge of the clock, only FF-2 will be
called Ring Counter. set because D2 = Q1 = 1.

 The connections reveal that they are similar to the  FF-1 will reset since D1 = Q0 = 0. There is no change in

connections for shift right operation, except for one status of FF-3 and FF-0.

change.  So after the second clock pulse the outputs are,

ns e
Q3 Q2 Q1 Q0 = 0 1 0 0.

 Similarly after the third clock pulse the outputs are,

io dg
Q3 Q2 Q1 Q0 = 1 0 0 0.

 And after the fourth one the outputs are,

at le
(C-871) Fig. 8.10.1 : A four bit ring counter

Q3 Q2 Q1 Q0 = 0 0 0 1.

These are the outputs from where we started. Hence the


ic w
 Output of FF-3 is connected to data input D0 of FF-0.
operation repeats from this point onwards.
Ring counter is a special type of shift register.
Number of output states :
bl no

Operation :
 The number of output states for a ring counter will
 Initially a low clear (CLR) pulse is applied to all the always be equal to the number of flip-flops. So for a

flip-flops. 4-bit ring counter the number of states is equal to 4.


Pu K

 Hence FF-3, FF-2 and FF-1 will reset but FF-0 will be  The operation of a four bit ring counter is summarized
in Table 8.10.1.
ch

preset. So the outputs of the shift register are :


(C-871(a)) Table 8.10.1 : Summary of
Q3 Q2 Q1 Q0 = 0 0 0 1.
operation of a ring counter
 Now the clear terminal is made inactive by applying a
Te

high level to it.

 The clock signal is then applied to all the flip-flops

simultaneously. Note that all the flip-flops are negative

edge triggered.

On the first negative going CLK edge :

 As soon as the first falling edge of the clock hits, only


Waveforms for the ring counter :
FF-1 will be set because Q0 = D1 = 1.
 The waveforms for the 4-bit ring counter are as shown
 The FF-0 will reset because D0 = Q3 = 0 and there is no
in Fig. 8.10.2.
change in the status of FF-2 and FF-3.
 These waveforms clearly show that the preset “1” shifts one bit per clock cycle and forms a ring. Hence the name ring counter.


Q. 3 Draw and explain Johnson counter with initial state


“1110”, from initial state. Explain all possible states.
(May 14, Dec. 16, 6 Marks)

Logic diagram :

 In the ring counter the outputs of FF-3 were connected



directly to the inputs of FF-0 i.e. Q3 to J0 Q to K0.
3

 Instead if the outputs are cross coupled to the inputs i.e.

ns e
––
if Q3 is connected to K0 and Q is connected to J0 then
3

io dg
the circuit is called as twisted ring counter or Johnson’s
counter.
(C-1360) Fig. 8.10.2 : Waveforms of a four bit ring counter
 The Johnson’s counter is shown in Fig. 8.10.4.

 at le
Applications of ring counter :

Ring counters are used in those applications in which


several operations are to be controlled in a sequential
ic w
manner.

 For example in resistance welding the operations called


bl no

squeeze, hold, weld and off are to be performed


sequentially.
(C-1362) Fig. 8.10.4 : Twisted ring counter or Johnson counter
 We can use a ring counter to initiate these operations.
Pu K

 All the flip-flops are negative edge triggered, and clock


Ring counter using JK Flip Flop :
pulses are applied to all of them simultaneously.
 A ring counter can also be constructed using the JK flip
ch

flops, as shown in Fig. 8.10.3.  The clear inputs of all the flip-flops are connected

together and connected to an external clear signal. Note

that all these clear inputs are active low inputs.


Te

Operation :

 Initially a short negative going pulse is applied to the


clear input of all the flip-flops. This will reset all the

flip-flops. Hence initially the outputs are, Q3 Q2 Q1 Q0

(C-1361) Fig. 8.10.3 : Ring counter using JK flip flops = 0 0 0 0.


––
8.10.4 Johnson’s Counter (Twisted / Switch  But Q = 1 and since it is coupled to J0 it is also equal
3
Tail Ring Counter) :
to 1.
SPPU : Dec. 08, May 10, May 11, May 14, Dec. 16.
 J0 = 1 and K0 = 0 …. Initially
University Questions.
Q. 1 Write short notes on Johnson counter. On the first falling edge of clock pulse :
(Dec. 08, May 10, 6 Marks)
 As soon as the first negative edge of clock arrives, FF-0
Q. 2 Explain twisted ring counter in brief.
(May 11, 4 Marks) will be set. Hence Q0 will become 1.


 But there is no change in the status of any other (C-6295) Table 8.10.2 : Summary of operation
of Johnson’s counter
flip-flop.

 Hence after the first negative going edge of the clock

the flip-flop outputs are,

Q3 Q 2 Q 1 Q 0 = 0 0 0 1

On the second negative going clock edge :

ns e
 Before the second negative going clock edge, Q3 = 0

io dg
––
and Q = 1. Hence J0 = 1 and K0 = 1. Also Q0 = 1.
3

Hence J1 = 1.


at le
Hence as soon as the second falling clock edge arrives,

FF-0 continues to be in the set mode and FF-1 will now


––


Note that there are 8 distinct states of output.

In general we can say that the number of states of a


ic w
set. Hence Q1 will become 1 and Q = 0. Johnson’s counter is twice the number of flip-flops used.
1

 Therefore for a 4-flip-flop Johnson’s counter, there are


 There is no change in the status of any other flip-flop.
bl no

8-distinct output states.


 Hence after the second clock edge the outputs are,
Waveforms for Johnson’s counter :
Q3 Q2 Q1 Q0 = 0 0 1 1.
 The waveforms for a 4-bit Johnson’s counter are shown
Pu K

 Similarly after the third clock pulse, the outputs are, in Fig. 8.10.5.

Q3 Q2 Q1 Q0 = 0 1 1 1.
ch

 And after the fourth clock pulse, the outputs are,

Q3 Q2 Q1 Q0 = 1 1 1 1.
Te

––
Note that now Q = 0 i.e. J0 = 0 and K0 = 1.
3

 Hence as soon as the fifth negative going clock pulse

strikes, FF-0 will reset.

 But the outputs of the other flip-flops will remain (C-875) Fig. 8.10.5 : Waveforms of Johnson counter

unchanged. So after the fifth clock pulse, the outputs


Ex. 8.10.1 : Explain ring counter design having initial
are, state 01011 from initial state explain all
th
Q3 Q2 Q1 Q0 = 1 1 1 0 ….. after the 5 clock pulse possible states in the ring. Dec. 10, 10 Marks

Soln. :
 This operation will continue till we reach the all zero
 Since there are 5 bits in the given initial state, we have
output state. (i.e. Q3 Q2 Q1 Q0 = 0 0 0 0). to use 5 flip flops as shown in Fig. P. 8.10.1.

 The operation of Johnson’s counter is summarised in  When we apply a low going clear (CLR) pulse, then flip
flops 0, 1, B and 3 will be preset to 1 output while flip
Table 8.10.2.
flops 2 and 4 are reset to 0 output.


 Q4 Q3 Q2 Q1 Q0 = 01011 ...Initially
 The remaining states are as shown in Table P. 8.10.1.

(C-3563)Table P. 8.10.1 : Ring counter states

(C-3277) Fig. P. 8.10.4(a) : A three bit ring counter

ns e
 The operation of a three bit ring counter is summarized
in Table P. 8.10.4.

io dg
(C-3278)Table P. 8.10.4 : Summary of operation of a ring
Ex. 8.10.2 : Explain the ring counter design for the initial counter
condition 10110. May 11, 4 Marks.

Soln. :
at le
Similar to the previous example.
ic w
Ex. 8.10.3 : Explain the Johnson’s counter design for
initial state 0110. From initial state explain
and draw all possible states.
bl no

Dec. 09, 8 Marks. Waveforms :


Soln. :  The waveforms of a 3-bit ring counter are as shown in
Fig. P. 8.10.4(b).
 The required Johnson’s counter is shown in
Pu K

Fig. P. 8.10.3.
ch

 Counters 0 and 3 are reset to 0 while counters 1 and 2

and preset to 1 initially.

 Q3 Q 2 Q 1 Q 0 = 0 1 1 0 …initially
Te

 The other possible states of this Johnson’s counter are

listed in Table P. 8.10.3.

(C-3565) Table P. 8.10.3 : All possible states


of a Johnson’s counter

(C-3279) Fig. P. 8.10.4(b) : Waveforms of a 3-bit ring counter

Compare the time periods of waveforms at Q2 and CLK.


1
Tout = 3 TCLK  fout = f …Proved.
3 CLK

Ex. 8.10.5 : Draw and explain 4 bit Ring counter. Write


Ex. 8.10.4 : Draw and explain 3-bit ring counter.
the Truth Table for same showing all possible
Dec. 15, 6 Marks states if initial state is 1100.

Soln. : May 18, 6 Marks


Soln. :
 The 3 bit ring counter is as shown in Fig. P. 8.10.4(a).
 Refer section 8.10.3 for 4-bit ring counter.


 Since there are 4 bits in the given initial state, we have 3-bit Ring Counter:
to use 4 flip flops as shown in Fig. P. 8.10.5. The 3 bit ring counter is as shown in Fig. P. 8.10.6(a).
 When we apply a low going clear (CLR) pulse, then
flip-flops 2 and 3 will be preset to 1 output while flips-
flops 0 and 1 and are reset to 0 output.

 Q3 Q2 Q1 Q0 = 1100 …initially

 The remaining states are as shown in Table P. 8.10.5.

ns e
io dg
(C-3277) Fig. P. 8.10.6(a) : A three bit ring counter

 The operation of a three bit ring counter is summarized


in Table P. 8.10.6(a).

at le
(C-7148) Fig. P. 8.10.5 : A four bit ring counter
(C-3278) Table P. 8.10.6(a) : Summary of operation of a ring
counter
ic w
(C-7149) Table P. 8.10.5 : Ring counter states
bl no
Pu K

 Fig. P. 8.10.6(b) shows the state diagram of a 3 bit ring


counter.
ch

Ex. 8.10.6 : Draw 3-bit Ring and Twisted ring counter.


Draw state diagram for 3-bit Ring and
Te

Twisted ring counter, assuming initial state as


001. Dec. 19, 7 Marks (C-1885) Fig. P. 8.10.6(b)

Soln. : 3-bit Twisted Ring Counter :

 Refer Sections 8.10.3 and 8.10.4 for Ring and Twisted  The required Johnson’s / Twisted Ring counter is shown

ring counter. in Fig. P. 8.10.6(c).

(C-8328) Fig. P. 8.10.6(c) : Required Johnson’s counter


 Flip-flops 1 and 2 are reset to 0 while flip-flop 0 is preset to 1 initially.
 Q2 Q1 Q0 = 001   …initially
 The other possible states of this Johnson’s counter are listed in Table P. 8.10.6(b).

Table P. 8.10.6(b) : All possible states of a Johnson’s counter

8.10.5 Sequence Generator :

Definition :

 A sequence generator is a sequential circuit which generates a desired sequence at its output. The output sequence is in synchronization with the clock input.
 It is possible to design a sequence generator using counters or using shift registers.

8.11 Sequence Detector :

Definition :

 A sequence detector is a synchronous FSM which is designed to detect a specific sequence of bits arriving at its input.
 The detector will produce a logic “1” at its output (Y) as soon as it detects the specified sequence of bits. This concept is illustrated in Fig. 8.11.1(b).
 As shown in Fig. 8.11.1(b), in the long data string at the input (X), the desired input sequence may sometimes get overlapped.
 Fig. 8.11.1(a) shows the block diagram of a sequence detector.

(C-691) Fig. 8.11.1(a) : Block schematic of a sequence detector

8.11.1 Pseudo Random Binary Sequence (PRBS) Generator :

 Another important application of a shift register is the pseudo random binary sequence generator. It is used for generating random sequences.
 The PRBS generator consists of a number of flip-flops and a combinational circuit which provides a suitable feedback.

Applications of PRBS generator :

 Since the sequence produced is random, the PRBS generator is also called a Pseudo Noise (PN) generator and the generated signal is called noise.
 This noise can be used to test the noise immunity of the system under test.
 The PRBS generator is an important part of a data encryption system. Such a system is required to protect the data from hackers.
pseudo random generator. It is used for generating the 
random sequences.



Unit 4

Chapter 9

Computer Organization & Processor

Syllabus

Computer organization and computer architecture, Organization, Functions and types of computer
units - CPU (Typical organization, Functions, Types), Memory (Types and their uses in computer), IO

(Types and functions) and system bus (Address, data and control, Typical control lines, Multiple-Bus
Hierarchies); Von Neumann and Harvard architecture; Instruction cycle.
Processor : Single bus organization of CPU; ALU (ALU signals, functions and types); Register
(Types and functions of user visible, control and status registers such as general purpose, address

registers, data registers, flags, PC, MAR, MBR, IR) & control unit (Control signals and typical organization
of hard wired and microprogrammed CU). Micro Operations (Fetch, Indirect, Execute, Interrupt) and
control signals for these micro operations.
Case Study : 8086 processor, PCI bus.

Chapter Contents
9.1  Introduction
9.2  Basic Organization of Computer and Block Level Description of Functional Units
9.3  Von Neumann and Harvard Architecture
9.4  Basic Instruction Cycle
9.5  CPU Architecture and Register Organization
9.6  Instruction, Micro-instructions and Micro-operations : Interpretation and Sequencing
9.7  Control Unit : Hardwired Control Unit Design Methods
9.8  Control Unit : Soft Wired (Micro programmed) Control Unit Design Methods


9.1 Introduction :

 There are various people involved in the making of a computer.
 The chip designer designs and manufactures the chip, the system designer designs the system using the manufactured chip or microprocessor, and the programmer writes software for the system.
 Those attributes of a computer that must be known to a system designer or a programmer are called the architectural features of the computer.
 Hence the people manufacturing the chip have to reveal certain things about the processor to the system designer and the programmer through the datasheets for their processor chips.
 Those attributes of a computer, or more specifically of the processor, that are used only for designing the processor and are not revealed are called the organizational features of the processor.

Sr. No. | Computer architecture | Computer organization
1. | It refers to those attributes of a system visible to the programmer. | It refers to the implementation of these features and is mostly not known to the user.
2. | Instruction set, number of bits used for data representation, addressing techniques etc. form the part of computer architecture. | Control signals, interfaces, memory technology etc. form the part of computer organization.
3. | For example, is there a multiply instruction ? | For example, is there a dedicated hardware multiply unit, or is multiplication done by repeated addition ?
4. | All INTEL 80x86 microprocessors share the same basic architecture. | All INTEL 80x86 microprocessors differ in their organization.

9.2 Basic Organization of Computer and Block Level Description of Functional Units :

9.2.1 Structural Components of a Computer :

 It is the way in which the components are related to each other. Fig. 9.2.1 shows the structure of a digital computer.
 It is made up of three main components, namely the Central Processing Unit (CPU) or the processor, the memory to store the programs and data, and the Input / Output (I/O) devices. The functions of these are explained below :

Fig. 9.2.1 : Structure of a computer

1. The components of a computer are the central processing unit, Input and Output (I/O) devices and memory.
2. The three units are connected with the help of the system interconnection, i.e. buses.
3. Memory is used to store code (programs) and data. It can be of various kinds, like semiconductor memory using ICs, magnetic memory or optical memory etc.
4. I/O devices are used to accept an input or give an output for the CPU. There are various input devices like keyboard, mouse, scanner; and various output devices like CRT, printer etc.
5. The CPU is further divided into three units as shown in Fig. 9.2.2.

Fig. 9.2.2 : Structure of a CPU


6. The components of the Central Processing Unit (CPU) are the Arithmetic and Logic Unit (ALU), the Control Unit (CU) and the CPU registers, which are also connected by internal buses.
7. The ALU is used to perform arithmetic operations like addition, subtraction etc. and logical operations such as AND, OR etc.
8. CPU registers are used to store data temporarily in the CPU to save memory access time.
9. The CU is further divided into three parts as shown in Fig. 9.2.3.

Fig. 9.2.3 : Structure of Control Unit

10. The Control Unit comprises the control memory, the control unit register and the sequencing logic.
11. The control memory stores the microinstructions and loads them into the control unit register, and the sequencing logic issues these signals in a proper sequence to execute an instruction.

9.2.2 Functional View of a Computer :

1. It is the operation of the individual components as part of the structure.
2. The functions of a computer are data processing, data storage, data movement and control.
3. There can be various paths followed by the data, as shown in Fig. 9.2.4. They can be : the data may be taken into the processor from the input device, processed, and the result given at the output device; the data may be taken into the processor from the input device, processed, and the result stored in memory; the data may be taken from memory, processed, and the result given to the output device; the data may be taken from an input device and given directly to an output device; etc.
4. In Fig. 9.2.4, we can see that the computer is divided into three main components, namely the data storage facility, the data movement facility and the data processing facility. These are the different functions a computer can perform.

Fig. 9.2.4 : Functional view of a Computer

9.3 Von Neumann and Harvard Architecture :

 There are two memory interfacing architectures for a processor, depending on the processor design.
 The first one is called the Von Neumann architecture and the later one the Harvard architecture.

9.3.1 Von Neumann Architecture :

 Fig. 9.3.1 shows the connection for the Von Neumann architecture of a computer.

Fig. 9.3.1 : Von Neumann Architecture of a computer
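The stored-program idea of Fig. 9.3.1 can be illustrated with a toy software model (everything in it — the opcodes, the addresses and the data values — is invented for the illustration): a single memory array holds both the instructions and the data, so an instruction that references memory needs two separate accesses over the same bus, which is exactly the two-cycle behaviour described next.

# Toy model of the stored-program concept: ONE memory holds code and data.
memory = [
    ("LOAD", 4),    # 0 : load the word stored at address 4
    ("ADD",  5),    # 1 : add the word stored at address 5
    ("HALT", None), # 2 : stop
    None,           # 3 : unused
    20,             # 4 : data
    22,             # 5 : data
]

pc, acc, bus_accesses = 0, 0, 0
while True:
    opcode, addr = memory[pc]; bus_accesses += 1   # access 1 : instruction fetch
    pc += 1
    if opcode == "HALT":
        break
    operand = memory[addr]; bus_accesses += 1      # access 2 : data fetch from the SAME memory
    acc = operand if opcode == "LOAD" else acc + operand

print(acc, bus_accesses)   # 42 5  -> two bus accesses for every instruction that needs data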


 The name is derived from the mathematician and early computer scientist John Von Neumann.
 The computer has a common memory for data as well as for the code to be executed.
 The processor needs two clock cycles to complete an instruction : first to get the instruction and second to get the data.
 This system has three units : CPU, Memory and I/O devices. The CPU has two units : the Arithmetic Unit and the Control unit. Let us discuss these units in detail.

1. Input unit :

 A computer accepts inputs from the user through these devices, i.e. input devices.
 The commonly used input devices are the keyboard and the mouse. Besides that, there are devices like joystick, camera, scanner etc. which are also input devices.
 The devices input the data accepted from the user in a proper coded form understood by the computer.

2. Output unit :

 The result is given back by the computer to the user through an output device. Input devices and output devices are also called human interface devices, because they are used to interface the human to the computer.
 The mainly used output devices are the monitor and the printer. But there are many other output devices like plotter, speaker etc.

3. Arithmetic and Logic Unit (ALU) :

 Arithmetic or logic operations like multiplication, addition, division, AND, OR, EXOR etc. are performed by the ALU.
 Operands are brought into the ALU, where the necessary operation is performed.

4. Control unit :

 The control unit, as we know, is the main unit that controls all the operations of the system, inside and outside the processor.
 The memory or I/O devices have to be controlled by the computer to perform the operation according to the instruction given to it.

5. Memory unit :

 Memory is used to store the programs and data for the computer.
 The instructions from the programs are taken by the processor, decoded and executed accordingly.
 The data is also stored in the memory.
 The data is taken from memory and the operation is performed on that data; the results are also stored back in the memory.
 In some cases the input to an operation and the result may also come from input and output devices.
 Memory in the Von Neumann system has a special organization wherein the data and instructions are stored in the same memory.
 We will see this in the subsequent section.

Key features of a Von Neumann machine :

 The Von Neumann machine uses the stored program concept.
 The program and data are stored in the same memory unit.
 Each location of the memory has a unique address, i.e. no two locations have the same memory address.
 Execution of instructions in a Von Neumann machine is carried out in a sequential manner (unless explicitly altered by the program itself) from one instruction to the next.

Detailed structure of the CPU :

 The block diagram of the computer proposed by Von Neumann has a minimal number of registers along with the above blocks.
 This computer has a small set of instructions, and an instruction was allowed to contain only one operand address.
 Fig. 9.3.2 gives the detailed structure of the IAS CPU.
 The structure shown in Fig. 9.3.2 consists of the following registers.

Accumulator (AC) :

 It normally provides one of the operands to the ALU and stores the result.

Data Register (DR) :

 It acts as buffer storage between the CPU and the main memory or I/O devices.


Fig. 9.3.2 : Structure of the CPU

Program Counter (PC) :

 It always contains the address of the next instruction to be executed.

Instruction Register (IR) :

 It holds the current instruction, i.e. the opcode and operand of the instruction to be executed.

Memory Address Register (MAR) :

 The address from which the data or instruction is to be fetched is provided by the processor through the MAR.
 It is also used to forward the address of the memory location where data is to be stored.

Execution of a program by a Von Neumann machine :

 The program to be executed is stored in memory.
 A register, the PC (Program Counter), always points to the first instruction of the program when the computer starts.
 The CPU fetches the instruction pointed to by the PC. The PC contents are automatically incremented to point to the next instruction.
 If the initial value of PC = 0000 (in binary), the first instruction will be fetched for execution. After fetching “Instruction number 1”, PC will be incremented by one. It is assumed that the size of each instruction is 1 byte.

  PC = PC + 1 = 0 + 1 = 1

Fig. 9.3.3 : Instruction pointed by PC is fetched by the CPU for execution. Subsequently PC is made to point to the next instruction

Fetching an instruction :

 The CPU interacts with memory through two special registers :

1. MAR (Memory Address Register) : It provides the address of the memory location from where data or an instruction is to be retrieved, or to which data is to be stored.
2. DR (Data Register) : It acts as buffer storage between the main memory and the CPU.

 The function and operation of these registers will be understood from the example below.
 The instruction to be executed is brought from the memory to the CPU through the following steps :

1. The address of the instruction is transferred from PC to MAR.

  MAR ← PC

2. MAR puts this address on the address bus for selection of the required location of the memory.
3. The Control Unit generates the RD (read) control signal to perform a read operation on memory. The required instruction is given on the data bus by the memory. The instruction on the data bus is accepted in the DR (Data Register).

Execution of instruction :

 The fetched instruction is in the form of a binary code and is loaded into the instruction register (IR) from the DR (Data Register).


 The instruction specifies what action the CPU has to take.
 The CPU interprets the instruction and performs the required action. The action could be :

1. Data transfer between CPU and memory.
2. Data transfer between CPU and I/O.
3. The CPU may perform an arithmetic or logic operation on data.

9.3.2 Harvard Architecture :

 Fig. 9.3.4 shows the connection for the Harvard computer architecture.

Fig. 9.3.4 : Harvard Architecture of a computer

 The name originates from Harvard’s “Harvard Mark I”, a relay based old computer.
 In this case there are two separate memories for storing data and program.
 In this case the processor can simultaneously access an instruction as well as the data, and hence can complete an instruction execution in one cycle.

9.4 Basic Instruction Cycle :

 The instruction cycle is a representation of the states that the computer or the microprocessor performs when executing an instruction.
 The instruction cycle comprises two main steps to be followed to execute the instruction, namely the fetch operation in the fetch cycle and the execution operation during the execute cycle.

Fig. 9.4.1 : Basic instruction cycle

 Fig. 9.4.1 shows the basic instruction cycle. It comprises the fetch and execute cycles in a loop, executing instruction after instruction until the halt instruction is reached.
 The fetch cycle comprises the following operations :

1. The Program Counter (PC) holds the address of the next instruction to fetch; hence the CPU (Processor) fetches the instruction from the memory location pointed to by the PC. This is done by providing the value of the PC to the MAR and giving the Read control signal to the memory. On this, the memory provides the value at the given address (which is the instruction) to the MBR.
2. The PC value has to be incremented to point to the next instruction (sometimes the value of the PC may have to be completely changed in the case of some special instructions called branching instructions).
3. The instruction is loaded into the Instruction Register (IR) from the MBR.
4. Finally the processor interprets or decodes the instruction. The processor performs the required operations in the execute cycle.

 In the execute cycle the operation asked to be performed by the instruction is done. It may comprise one or more of the following operations :

1. Transfer of data between the processor and memory, or between the processor and an I/O module.
2. Processing of data, like some arithmetic or logical operation on data.
3. Change of the sequence of operation, i.e. branching instructions.

9.4.1 Interrupt Cycle :

 Fetch and execute are not the only two states in the instruction cycle.
 There is one more state, i.e. the Interrupt cycle.
 In this subsection we will see the concept of interrupt in short, and the interrupt cycle.
 Interrupt is a mechanism by which I/O modules can interrupt the normal sequence of processing. An interrupt can be because of some request from an I/O device to service that particular device.
 This service may take or give data or some control operation.
 It may also be because of some unexpected operation in the program execution by the CPU itself.
 The interrupt cycle, as discussed earlier, is added to the instruction cycle.
 During this cycle the processor checks for an interrupt, and if one is present and enabled, services the same.
 If no interrupt is present then it fetches the next instruction; else, if an interrupt is pending, then it performs the following operations :

1. Suspend the execution of the current program.
2. Save the context of the current program under execution.


3. Set the PC value to start address of interrupt handler routine also called as interrupt service routine. Interrupt service
routine is a small program which when executed, services the interrupting source.
4. Process the interrupt service routine (ISR), and then
5. Restore the context and continue execution of the interrupted program.
 Thus the complete basic instruction cycle with Interrupts can be as shown in the Fig. 9.4.2.

Fig. 9.4.2 : Complete basic instruction cycle


You will notice in Fig. 9.4.2, the interrupts are checked for, after the execute cycle and processed if enabled and exist; else,
it fetches the next instruction.
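The flow of Fig. 9.4.2 can also be summarised as a simple software loop. The sketch below is only an illustration of the fetch-execute-interrupt sequence; the toy memory contents, the opcode names and the empty interrupt check are invented for the example.

# Toy machine: each "instruction" is one memory word; "HALT" ends the program.
memory = {0: ("ADD", 10), 1: ("ADD", 11), 2: ("HALT", None)}
data   = {10: 5, 11: 7}
pc, acc, halted = 0, 0, False
interrupt_pending = False           # would be set by an I/O module

def check_interrupt():
    if interrupt_pending:
        pass                        # save context, branch to the ISR, restore context

while not halted:
    opcode, operand = memory[pc]    # fetch cycle : read the instruction pointed to by PC
    pc += 1                         # PC now points to the next instruction
    if opcode == "HALT":            # execute cycle : perform the decoded operation
        halted = True
    elif opcode == "ADD":
        acc += data[operand]
    check_interrupt()               # interrupt cycle : service pending interrupts, if any

print(acc)                          # 12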
The detailed instruction cycle is shown in Fig. 9.4.3.

Fig. 9.4.3 : Detailed instruction cycle

 In Fig. 9.4.3, there are some states drawn on the upper side, while some are on the lower side.
 The ones on the upper side are the operations carried out on the buses, i.e. external operations, while the ones at the lower level are the operations carried out inside the CPU, i.e. internal operations.
 The instruction cycle begins from the “Instruction address calculation” state, wherein the address of the next instruction is calculated, or the value of the PC is updated.
 Then the instruction is fetched, which requires operation on the buses.
 The instruction fetched is then decoded. Up to this state, it is the fetch cycle.
 In the execute cycle, the operand address is calculated and the operands are fetched from the calculated address.
 Again, to fetch the operands, we require the buses. After fetching the operand, if more operands are required for multiple operand instructions, then the next state is again the calculation of the operand address, i.e. the address of the next operand.
 Once all the operands are fetched, the data operation is carried out as per the operation indicated in the instruction.
 Now, for the result storage, again the address of the operand is calculated and the result is stored in the specified location of the memory.
 In case of multiple operands, again the calculation and storage process for the operands continues until all the operands are stored.
 Now begins the interrupt cycle, wherein the first step is to check the presence of an enabled interrupt.
address. to check the presence of an enabled interrupt.


 If there is none, then the next state as seen in the  It consists of PIPO (Parallel in parallel out) register as
Fig. 9.4.3 is the calculation of next instruction address shown in Fig. 9.5.2.
i.e. executes the next sequential instruction.  This section is also called as scratch pad memory. It
stores data and address of memory.
 But in case the interrupt is present and enabled then the
servicing of the same is done as discussed earlier in this  The register organization affects the length of program,
the execution time of program and simplification of the
section.
program.
 In the Fig. 9.4.3, you will also notice that there are two
 To achieve better performance, the number of registers
paths from the end of the previous instruction.
should be large.

ns e
 The one that goes to the state “Instruction address
 The architecture of microcomputer depends upon the
calculation” for the next instruction; and the one that

io dg
number and type of the registers used in
goes to the “Operand address calculation” for vector
microprocessor.
instructions.
 It consists 8-bit registers or 16 bit registers.
 Vector instructions are those instructions wherein the
 The register section varies from microprocessor to
operation is same but the data on which the operation

at le
is to be performed in a huge block of data or an array of
data. 

microprocessor.
The registers are used to store the data and address.
These registers are classified as :
ic w
 Hence in the second case, the instruction is already
fetched and decoded i.e. the operation is already know, o Temporary registers
and the operation is to be performed on a block of data. General purpose registers
bl no

o
 After completing the operation on one set of operands, o Special purpose registers.
the CPU returns to the next operand address calculation
1. Register section :
state, wherein it calculates and fetches the next
operand.
Pu K

 Then it performs the operation, stores the result and


again for the next set of operand, until all the operands
ch

in the array are completed.


9.5 CPU Architecture and Register
Organization :
Te

 Fig 9.5.1 shows the architecture of microprocessor. This Fig. 9.5.2 : 8 bit register

architecture is divided in different groups as follows : 2. Arithmetic and logical unit :


1. Registers
 This section processes data i.e. it performs arithmetic
2. Arithmetic and logic unit
and logical operations.
3. Interrupt control
 It performs arithmetic operations like addition,
4. Timing and control circuitry.
subtraction and logical operations like ANDing, ORing,
EX-ORing, etc.
 The ALU is not available to the user. Its word length
depends upon the width of an internal data bus.
 The ALU is controlled by timing and control circuits.
 It accepts operands from memory or register. It stores
result of arithmetic and logic operations in register or
memory.
 It provides status of result to the flag register. Flag
register shows status of result.
 ALU looks after the branching decisions.
Fig. 9.5.1 : General architecture of a microprocessor
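As a small illustration of the ALU reporting the status of its result to the flag register, consider the sketch below (the operation set, the 8-bit width and the flag names Z, S, C are assumed for the example and do not describe any particular processor):

def alu(op, a, b, width=8):
    """Perform an 8-bit ALU operation and return the result plus status flags."""
    mask = (1 << width) - 1
    if op == "ADD":
        full = a + b
    elif op == "SUB":
        full = a - b
    elif op == "AND":
        full = a & b
    else:
        raise ValueError("unsupported operation")
    result = full & mask
    flags = {
        "Z": result == 0,                         # zero flag
        "S": bool(result & (1 << (width - 1))),   # sign flag = MSB of result
        "C": full > mask or full < 0,             # carry / borrow out of the 8 bits
    }
    return result, flags

print(alu("ADD", 0xF0, 0x20))   # (16, {'Z': False, 'S': False, 'C': True}) -> result 0x10, carry set

Conditional (branching) decisions are then taken simply by testing these flag values, which is what is meant by the ALU "looking after" branching decisions.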




3. Interrupt control : 9.6 Instruction, Micro-instructions and


 This block accepts different interrupt request inputs.
Micro-operations : Interpretation
When a valid interrupt request is present it informs
and Sequencing :
control logic to take action in response to each signal.  The structure of the CPU seen in section 9.5 is shown in
4. Timing and control unit : details in Fig. 9.6.1.
 This structure has a speciality that all the control signals
 This is a control section of microprocessor made up of
are shown in it.
synchronous sequential logic circuit.
 Programs are executed as a sequence of instructions. As
 It controls all internal and external circuits.
seen in the previous sections of this chapter, each

ns e
 It operates with reference to clock signal.
instruction consists of a series of steps that make up the
 This accepts information from instruction decoder and
instruction cycle i.e. fetch, decode, etc. Each of these

io dg
generates micro steps to perform it.
steps is, in turn, made up of a smaller series of steps
 In addition to this, the block accepts clock inputs,
called micro-operations or micro-instructions.
performs sequencing and synchronising operations.
 Control signals are issued to perform these
 The synchronization is required for communication

 at le
between microprocessor and peripheral devices.
To implement this it uses different status and control
signals. 
micro-operations and micro-instructions are these
control signals.
Fig. 9.6.1 shows the structure of the CPU with these
ic w
 The basic operation of a microprocessor is regulated by micro-instructions or the control signals.
this unit.  It also shows those registers as already seen in
bl no

 It synchronizes all the data transfers. section 9.2 like PC, MAR, MBR, etc.
 This unit takes appropriate actions in response to  There are some registers like the register ‘Y’ to provide
external control signals. one of the operand to the ALU as shown in the
Fig. 9.6.1.
9.5.1 Interrupt Control :
Pu K

 This block accepts different interrupt request inputs.


When a valid interrupt request is present it informs
ch

control logic to take action in response to each signal.

9.5.2 Timing and Control Unit :


 This is a control section of microprocessor made up of
Te

synchronous sequential logic circuit.


 It controls all internal and external circuits.
 It operates with reference to clock signal.
 This accepts information from instruction decoder and
generates microsteps to perform it.
 In addition to this, the block accepts clock inputs,
performs sequencing and synchronising operations.
 The synchronization is required for communication
between microprocessor and peripheral devices.
 To implement this it uses different status and control
signals.
 The basic operation of a microprocessor is regulated by
this unit.
 It synchronizes all the data transfers.
 This unit takes appropriate actions in response to
external control signals.
Fig. 9.6.1 : Data path structure with control signals
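To make the role of these control signals concrete, the register transfers of the single-bus data path can be mimicked in software. In the sketch below the signal names (PCout, MARin, Zout, ...) follow the figure, but the model itself is only an illustration — it assumes one transfer per time step and folds the PC increment into a simple addition, anticipating the fetch sequence tabulated in the next section.

class DataPath:
    """Toy single-bus data path: *out signals drive the bus, *in signals latch it."""
    def __init__(self):
        self.reg = {"PC": 0x00, "MAR": 0, "MBR": 0, "IR": 0, "Z": 0}
        self.bus = 0
    def out(self, name):        # e.g. PCout : register contents placed on the bus
        self.bus = self.reg[name]
    def latch(self, name):      # e.g. MARin : register accepts the bus contents
        self.reg[name] = self.bus

dp = DataPath()
memory = {0x00: 0x3E}           # one instruction word at address 0 (value is arbitrary)

# T1 : PCout, MARin, Read ... -> instruction address goes to MAR, PC + 1 formed in Z
dp.out("PC"); dp.latch("MAR")
dp.reg["Z"] = dp.reg["PC"] + 1
# T2 : Zout, PCin, wait for memory -> PC updated while memory delivers the word to MBR
dp.out("Z"); dp.latch("PC")
dp.reg["MBR"] = memory[dp.reg["MAR"]]
# T3 : MBRout, IRin -> fetched instruction moved into IR
dp.out("MBR"); dp.latch("IR")

print(hex(dp.reg["IR"]), dp.reg["PC"])   # 0x3e 1

Only one register may drive the bus in a given step, but any number may latch from it — the same constraint the text points out for the real data path.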




 Another register is the ‘Z’ register, which is used to store  To perform this operation the control signals given are
the result given by the ALU. PCout and MARin.
 A “temp” register or the temporary register to store  This will make the PC register give out its data and the
some temporary data. MAR register accept this data.
 The set of registers R0 to Rn (the value of ‘n’ depends on  Also the memory is indicated to perform a read
the registers in the CPU) for general purpose operation from memory hence the signal “Read”.
operations.  To increment the value of PC, the various operations are
 There is also an instruction decoder for decoding the performed on ALU signals i.e. Clear Y, Set Cin, Add, Zin.
instructions stored in the instruction register and in turn  The ‘Y’ register is cleared and the carry flag is set. Now

ns e
provides the micro-instructions or the control signals for when the ALU is said to perform the “ADD” operation it
the resources inside and outside the CPU.

io dg
will add the contents of the ‘Y’ register, carry flag and
 The ALU also gets the control signals from this decoder the contents of the internal data bus.
indicating the operation to be performed like Add, Sub,  The contents of the internal data bus are nothing but
and AND etc. the value given out by the PC register.


at le
The ALU also has an extra input called as Cin i.e. the
carry input as required for adder.
To execute any instruction as seen earlier it is to be
 Hence the PC is added with ‘1’ i.e. the carry flag and
hence incremented value of PC is given to the ‘Z’
register.
ic w
divided into three cycles viz. fetch, execute and interrupt  In the second clock pulse the CPU has to wait for the
cycles.
memory operation, but in the same time it can transfer
 The execute cycle will differ based on the operation to
bl no

the result in ‘Z’ register to the PC register with the


be carried out in the instruction, but the fetch and control signals namely Zout and PCin.
interrupt cycle will be common for all the cycles.
 This could not be done in the previous t-state, as two
 Let us see the micro-instructions to be given for each of
data cannot be given simultaneously on the data bus,
these cycles.
Pu K

else it will get mixed up.


9.6.1 Fetch Cycle :
 Only one data can be given on the data bus in any clock
ch

 Fetch cycle is concerned to fetch (i.e. read from pulse, but as many as required can accept the data.
memory) the instruction.  In the final t-state, the contents received from the
 It involves following operations in different t-states (t- memory i.e. the instruction is transferred to its correct
state is a time state and is equal to one clock pulse) and place i.e. the instruction register.
Te

hence the mentioned microinstructions in Table 9.6.1.  This is done by the control signals namely MBRout and
Table 9.6.1 : Microinstructions for the fetch cycle IRin. This also completes the entire fetch operation of
the instruction.
Operation Microinstructions

T1 PC  MAR PCout, MARin, Read, Clear y, 9.6.2 Execute Cycle :


Set Cin, Add, Zin
 Execute cycle as discussed can be of various types
T2 M  MBR Zout, PCin, Wait for memory fetch
based on the operation to be performed in the
PC  PC + 1 cycle
instruction and the location of the operand. We will see
T3 MBR  IR MBRout, IRin some examples in this subsection.
 As seen in the table, three clock pulses or t-states are  The first example we will take for the execution of a
required for the fetch cycle.
direct addressed operand.
 Note, the control unit is an organizational part of the
CPU; hence the design can vary from processor to  In this case the address of the operand is directly given
processor. in the instruction.
 In the first t-state, the address of the instruction to be  It involves different operations in various t-states as
executed is given to the MAR register from the shown in Table 9.6.2 assuming the instruction ADD
PC register. R1, [X].




Table 9.6.2 : Microinstructions for the execute cycle of direct cannot be given simultaneously on the data bus in the
addressed mode of operand access
same t-state.
Operation Microinstructions  And the contents of memory location with the address
T1 IR  MAR IRout(address), MARin, Read, ‘X’ are already put on the data bus in the third t-state.
Clear Cin  The fourth t-state is thus required to transfer the data

T2 M  MBR R1out, Yin, Wait for memory from register ‘Z’ to register R1 using the signals
Zout, R1in.
read cycle
 Another execute cycle we will be studying in this

ns e
T3 MBRout, Add, Zin
MBR + R1  R1 sub-section is for the indirect addressed operand. In this
T4 Zout, R1in

io dg
case, the address given in the instruction is the memory
 In this case of direct addressing mode, the address of location that contains the address of the operand.
the memory operand is in the instruction itself.
 Table 9.6.3 shows the micro-operations required for
 The instruction as we have seen in the fetch cycle
such an execute cycle for an example instruction ADD


at le
reaches the IR register.

Hence the IR register is given a signal to give out the



R1, [[X]]

Table 9.6.3 shows the control signals to be given exactly


ic w
address part and the MAR register to accept this
similar to that of the Table 9.6.2, with a minor difference
address input value by giving the control signals
IRout(address) and MARin. i.e. the value received in the MBR on first memory read
bl no

is the operand address and hence is to be given back to


 At the same time, since the memory is to be read from
the memory to fetch the actual operand.
the control signal is given to the memory i.e. “Read”.
Table 9.6.3 : Microinstructions of the execute cycle of
 Also the carry flag is cleared to get ready for the
an indirect addressed operand instruction
Pu K

addition operation.
Operation Microinstructions
 Since the instruction expects addition of the register ‘R1’
ch

and the data at memory location with address ‘X’, the T1 IR  MAR IRout(address), MARin, Read,
Clear Cin
contents of register ‘R1’ are transferred to the
‘Y’ register, which is one of the operands for any ALU T2 M  MBR R1out, Yin, Wait for memory
read cycle
Te

operation.
T3 MBR  MAR MBRout(address), MARin,
 To perform this transfer operation the control signals
Read
given are R1out and Yin.
T4 M  MBR Wait for memory read cycle
 Also by the end of the second t-state, the data operand
T5 MBRout, Add, Zin
required from the memory will be available in the MBR
MBR + R1  R1
register. T6 Zout, R1in

 In the third t-state the contents of the MBR, which is the 9.6.3 Interrupt Cycle :
content of memory location with the address ‘X’, is
 It is concerned to perform the test for any pending
placed on the internal data bus and the ALU is indicated
interrupts at the end of every instruction execution and
to perform the addition operation.
if an interrupt occurs.
 It adds the contents of the ‘Y’ register and the contents
 It involves the different micro-operations for various
of the internal data bus, and the result is given to the ‘Z’
t-states as shown in Table 9.6.4 points to the top of the
register. stack.
 An extra t-state is required to send the data from the  This stack is used to store the return address of the
‘Z’ register to the register R1, as seen earlier two data interrupted program.




Table 9.6.4 : Microinstructions for the interrupt cycle T-state Operation Microinstructions

Operation Microinstructions T3 MBR  IR MBRout, IRin


T4 R3  x R3 out, Xin, CLRC
T1 SP  SP – 1 SPout(address), Decrement, Zin
T5 R4  ALU R4 out, ADD, Zin
T2 SP  MAR Zout, MARin, SPin
T6 Z  R3 Zout, R3 in
T3 PC  MBR PCout(return address), MBRin, T7 Check for intr Assumption enabled intr
Write pending
CLRX, SETC, SPout,

ns e
T4 ISR address  PC ISR address out, PCin (new SUB, Zin,
address), Wait for memory T8 SP  SP – 1 Zout, SPin, MARin

io dg
write cycle T9 PC  MDR PCout, MDRin, WRITE
 The control signals are to be generated using the T10 MDR  [SP] Wait for mem access
control unit. The design of this control unit can be done T11 PC  ISR addr PCin ISR addr out

at le
in two ways namely: Hardwired Control Unit and
Microprogrammed Control Unit. We will see these two
methods in the subsequent sections.
3. Write a microprogram for the instruction:
MOV R3 , [R4] OR LOAD R3 , [R4]
ic w
T-state Operation Microinstructions
9.6.4 Examples of Microprograms :
T1 PC  MAR PCout, MARin, Read,
1. Write a microprograms for the instruction : MOV R3, R4 Clear y, Set Cin, Add, Zin
bl no

T2 M  MBR Zout, PCin, Wait for


T-state Operation Microinstructions
PC  PC + 1 memory fetch cycle
T1 PC  MAR PCout, MARin, Read, Clear
T3 MBR  IR MBRout, IRin
y, Set Cin, Add, Zin
Pu K

T4 R4  MAR R4 out, MARin, READ


T2 M  MBR Zout, PCin, Wait for memory T5 Mem  MDR Wait for mem access
fetch cycle
ch

PC  PC + 1 T6 MDR  R3 MDRout , R3 in
T3 MBR  IR MBRout, IRin T7 Check for intr Assumption enabled intr
pending
T4 R3  R4 R4 out, R3 in
CLRX, SETC, SPout,
Te

T5 Check for intr Assumption enabled intr SUB, Zin,


pending T8 SP  SP – 1 Zout, SPin, MARin
CLRX, SETC, SPout, SUB, T9 PC  MDR PCout, MDRin, WRITE
Zin, T10 MDR  [SP] Wait for mem access

T6 SP  SP – 1 Zout, SPin, MARin T11 PC  ISR addr PCin ISR addr out

T7 PC  MDR PCout, MDRin, WRITE 4. Write a microprogram for the instruction : ADD R3, [R4]

T8 MDR  [SP] Wait for mem access T-state Operation Microinstructions

T9 PC  ISR addr PCin ISR addr out T1 PC  MAR PCout, MARin, Read, Clear
y, Set Cin, Add, Zin
2. Write a microprogram for the instruction : ADD R3, R4
T2 M  MBR Zout, PCin, Wait for memory
T-state Operation Microinstructions PC  PC + 1 fetch cycle
T1 PC  MAR PCout, MARin, Read,
T3 MBR  IR MBRout, IRin
Clear y, Set Cin, Add, Zin
T4 R4  MAR R4 out, MARin, READ, CLRC
T2 M  MBR Zout, PCin, Wait for
PC  PC + 1 memory fetch cycle T5 Mem  MDR Wait for men access




T-state Operation Microinstructions T-state Operation Microinstructions

T5 MDR  ALU MD Rout , Zin ADD T4 R3  X Rout, Xin, CLRC

T6 R3  X1 R3out, Xin T5 IRdata  ALU IRout, ADD, Zin


T7 Z  R3 Zout, R3 in T6 Z  R3 Zout, R3in
T8 Check for intr Assumption enabled intr T7 Check for intr Assumption enabled intr
pending pending
CLRX, SETC, SPout, SUB,
CLRX, SETC, SPout,

ns e
Zin,
SUB, Zin,
T9 SP  SP – 1 Zout, SPin, MARin
T8 SP  SP – 1 Zout, SPin, MARin

io dg
T10 PC  MDR PCout, MDRin, WRITE
T9 PC  MDR PCout, MDRin, WRITE
T11 MDR  [SP] Wait for mem access
T10 MDR  [SP] Wait for mem access
T12 PC  ISR addr PCin ISR addr out

5. at le
Write a microprogram for the instruction :
ADD R3, [ [R4] ]
7.
T11 PC  ISR addr PCin ISR addr out

Write a microprogram for the instruction : ADD R3, [45H]


ic w
T-state Operation Microinstructions
T-state Operation Microinstructions
T1 PC  MAR PCout, MARin, Read, T1 PC  MAR PCout, MARin, Read,
bl no

Clear y, Set Cin, Add, Zin Clear y, Set Cin, Add, Zin
T2 M  MBR Zout, PCin, Wait for T2 M  MBR Zout, PCin, Wait for
PC  PC + 1 memory fetch cycle memory fetch cycle
PC  PC + 1
T3 MBR  IR MBRout, IRin
MBR  IR
Pu K

T3 MBRout, IRin
T4 mem  MDR Wait for mem access
T4 IR addr  MAR IRout, MARin, D, CLRC
T5 MDR  ALU MDRout, Zin, ADD
ch

T5 R3  X R3 out, Xin
T6 Z  R3 Zout, R3 in
T5 mem MDR Wait for mem access
T7 Check for intr Assumption enabled intr
pending T6 MDR ALU MDRout Zin, ADD
Te

CLRX, SETC, SPout, [R3 + 45]


SUB, Zin,
T7 Z  R3 Zout, R3 in
T8 SP  SP – 1 Zout, SPin, MARin
T8 Check for intr Assumption enabled intr
T9 PC  MDR PCout, MDRin, WRITE
pending
T10 MDR  [SP] Wait for mem access CLRX, SETC, SPout,
T11 PC  ISR addr PCin ISR addr out SUB, Zin,

T9 SP  SP – 1 Zout, SPin, MARin


6. Write a microprogram for the instruction :
ADD R3, 45H
T10 PC  MDR PCout, MDR in, WRITE
T-state Operation Microinstructions
T11 MDR  [SP] Wait for mem access
T1 PC  MAR PCout, MARin, Read, PCin ISR addr out
T12 PC  ISR addr
Clear y, Set Cin, Add, Zin
8. Write a microprogram for the instruction ADDX, [Y]
T2 M  MBR Zout, PCin, Wait for
T-state Operation Microinstruction
PC  PC + 1 memory fetch cycle
T1 PC  MAR PCout, MARin, READ,
T3 MBR  IR MBRout, IRin
CLRT, STC, ADD, Z




T-state Operation Microinstruction Symbolic


T-state Microinstruction
operations
T2 mem  MDR Wait for mem access
Zout, PC in T10 PC  MDR PCout, MDRin, WRITE
PC  PC + 1
T11 MDR  [SP] Wait for mem access
T3 MDR  IR MDRout, IRin
T12 PC  ISR addr PCin ISR addr out
T4 Y  MAR yout, MARin, READ,
CLRC 9.6.5 Applications of Microprogramming :
T5 mem  MDR Wait for mem access

ns e
X  Temp Xout, Tin

T6 MDR  ALU MDRout, Zin, ADD

io dg
T7 ZX Zout, Xin

T8 Check for intr Assumption enabled intr


pending

at le CLRX,
SUB, Zin,
SETC, SPout,
ic w
T9 SP  SP – 1 Zout, SPin, MARin

T10 PC  MDR PCout, MDRin, WRITE


bl no

Fig. 9.6.2 : Applications of Microprogramming


T11 MDR  [SP] Wait for mem access
The applications of microprogramming are :
T12 PC  ISR addr PCin ISR addr out
1. In Realization of control unit : Microprogramming is
9. Write a microprogram for the instruction :
used widely now for implementing the control unit of
Pu K

ADD X, [[400]]
computers.
Symbolic
T-state Microinstruction 2. In Operating system : Microprograms can be used to
ch

operations
implement some of the primitives of operating system.
T1 PC  MAR PCout, MARin, READ,
This simplifies operation system implementation and
CLRT, SETC, ADD, Z
also improves the performance of the operating system.
T2 mem  MDR Wait for mem access
Te

3. In High-Level Language support : In High-Level


PC  PC + 1 Zout, PC in
language various sub functions and data types can be
T3 MDR  IR MDRout, IRin
implemented using microprogramming. This makes
T4 IRaddr  MAR IRout, MARin, READ, compilation into an efficient machine language from
CLRC possible.
T5 mem  MDR Wait for mem access 4. In Micro diagnostics : Microprogramming can be used
X  Temp Xout, Tin for detection isolation monitoring and repair of system
T6 MDR  MAR MDRout,MARn, READ errors. This known as micro diagnostics and they
significant enhance system maintenance.
T7 mem  MDR Wait for mem access
5. In User Tailoring : By using RAM for implementing
T8 MDR  ALU MDRout, ADD, Zin
control memory (CM), it is possible to tailor the machine
T9 ZX Zout, Xin
to different applications.
T8 Check for intr Assumption enabled intr
6. In Emulation : Emulation refers to the use of a
pending
microprograms on one machine to execute programs
CLRX, SETC, SPout,
originally written for another machine. This is used
SUB, Zin,
widely as an aid for users in migrating from one
T9 SP  Sp – 1 Zout, SPin, MARin
computer to another.




9.7 Control Unit : Hardwired Control Inputs


Unit Design Methods : State I I2 …….. Im Outputs
 The hardwired Control unit is viewed as a sequential an S1 S1, 1 S1, 2 …….. S1, m O1
combinational logic circuit.
S2 S2, 1 S2, 2 …….. S2, m O2
 It is used to generate a set of fixed sequences of control
:
signals. It is implemented using any of a variety of ……..
:
“standard” digital logic circuits.
Sn Sn, 1 Sn, 2 …….. Sn, m On
 The major advantages of hardwired control units are
higher speed of operation and smaller space required

ns e
(b) Moore Type
for implementation on silicon wafer i.e. the IC Fig. 9.7.1 : State tables for a finite-state machine

io dg
(Integrated Circuit), since the components required are
2. Delay element method :
lesser.
 This method is implemented using delay elements
 The only disadvantage is that modifications to the
i.e. D-flipflops.
design are slightly difficult.


at le
The use of hardwired control unit is majorly found in the
RISC designs.
 A flipflop is made to give output logic ‘1’ after the
specific event or in a t-state in sequence and the
outputs of these flipflops are used to generate control
ic w
 There are different methods to implement hardwired signals or the micro-instructions i.e. two operations that
control unit : require a delay of 1 t-state between them are separated
1. State table method. by a D flipflop between them. Fig. 9.7.2 shows this
bl no

2. Delay-Element method. implementation.

3. Sequence counter method.


4. PLA method.
Pu K

1. State table method :

 In this method state transition for each instruction is


ch

made and hence a state table is obtained.


 This state table is then combine to form a instruction set
state table, where all the instructions (OPCODE) are
considered as inputs and according to this the next
Te

state is being determined. Each state with a set of Fig. 9.7.2 : Use of D flip flop as a delay element between two
microinstructions to be issued to various components of sets of control signals
the processor as well as external control signals.
 This state table is then implemented using flip-flops and
combinational circuit to generate different control
signals.
 An example state table implementation is shown in
Fig. 9.7.1.
Inputs
State I I2 …….. Im
S1 S1, 1, O1, 2 S1, 2, O1, 2 …….. S1, m, O1, m
S2 S2, 1, O2, 1 S2, 2, O2, 2 …….. S2, m, O2, m
Fig. 9.7.3 : Use of OR gate in delay element method of
: ……..
Hardwired control unit
:
Sn Sn, 1, On, 1 Sn, 2, On, 2 Sn, m, On, m  The signals that activate the same control signal are
……..
ORed together i.e. if a signal has to be activated from
Fig. 9.7.1(a) : Mealy




the outputs of multiple flipflops then an OR gate is used inputs to the AND array is from various control signals
as shown in Fig. 9.7.3. generated and the output of the OR array is given as
 In case if a decision is to be made then it is control signals to various components of the processor
implemented using a If-Then-Else circuit i.e. two AND as well as the external control signals required.
gates coupled to a OR gate. This is shown in Fig. 9.7.4.  Fig. 9.7.6 shows the implementation of the PLA method
of implementation of control unit.

ns e
io dg
Fig. 9.7.4 : Implementation of If-Then-Else in delay element
method of Hardwired control unit

3. Sequence counter method :


at le
In this method, multiple clock signals are derived from
the master clock using a standard counter-decoder
approach as shown in the Fig. 9.7.5. These signals are 9.8
Fig. 9.7.6 : PLA Technique

Control Unit : Soft Wired (Micro


ic w
applied to the combinational portion of the circuit. programmed) Control Unit Design
Methods :
bl no

 Micro programmed control unit generates control


signals based on the microinstructions stored in a
special memory called as the control memory.
Pu K

 Each instruction points to a corresponding location in


the control memory that loads the control signals in the
control register.
ch

 The control register is then read by a sequencing logic


that issues the control signals in a proper sequence.
 The implementation of the micro programmed is shown
Te

in the Fig. 9.8.1.


Fig. 9.7.5 : Sequential counter method of Hardwired control
unit implementation

 As shown in Fig. 9.7.5, the counter keeps on


incrementing and generating different counts.
 The counts are decoded using a decoder and the
decoder outputs are given to various components as
control signals in the CPU.
4. PLA method :

 In this method a PLA (Programmable Logic Array) is


used to generate the control signals.
 PLA is an array of AND gates at input and the OR gates
at output.
 The inputs are to be given to the AND gates, which can
be connected to the specific OR gates as required.
 The OR gates outputs are the outputs of the overall PLA
and are used as control signals in the system i.e. the Fig. 9.8.1 : Micro programmed control unit




 The Instruction Register (IR), Status flag and condition 9.8.1 Wilkie’s Microprogrammed Control Unit :
codes are read by the sequencer that generates the  First working model of a micro-programmed control
unit was proposed by Wilkie’s in 1952.
address of the control memory location for the
 In the above design, a microinstruction has two major
corresponding instruction in the IR.
components :
 This address is stored in the Control address register
 Control field
that selects one of the locations in the control memory
 Address field
having the corresponding control signals.

 These control signals are given to the microinstruction

ns e
register, decoded and then given to the individual

io dg
components of the processor and the external devices. Fig. 9.8.2 : A typical microinstruction

at le
ic w
bl no
Pu K
ch
Te

Fig. 9.8.3 : Wilkie’s control

 If a microinstruction is encoded as given below :


C0 C1 C2 C3 C4 C5 C6 A2 A1 A0
0 1 0 0 1 1 0 0 1 0

 Then the control information 0100110 indicates that on execution of above microinstruction, control signals C1, C4 and C5
will be activated. Address field contains the address of the next microinstruction.

 Thus, after execution of the above instruction, the next instruction to be executed is one which is at the address 010.

 The control field tells the control signals which are to be activated and the address field provide the address of the next
microinstruction to be executed.
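A short sketch of how such a 10-bit word is interpreted is given below (the split into a 7-bit control field C0-C6 and a 3-bit address field follows the example above; the decoding routine itself is ours):

def decode_microinstruction(word):
    """word: 10-bit string = control bits C0..C6 followed by address bits A2 A1 A0."""
    control, address = word[:7], word[7:]
    active = [f"C{i}" for i, bit in enumerate(control) if bit == "1"]
    return active, int(address, 2)

signals, next_addr = decode_microinstruction("0100110" + "010")
print(signals)                     # ['C1', 'C4', 'C5']  -> these control lines are activated
print(format(next_addr, "03b"))    # 010 -> address of the next microinstruction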

 In Wilkie’s control, control memory is organized as a program logic array.

 The Control Memory Access Register (CMAR) can be loaded from an external source (instruction register) as well as from
the address field of a microinstruction.




 A machine instruction typically provides the starting Micro-


Hardwired
Attribute programmed
address of a micro-program in control memory. Control
Control
 On the basis of starting address from instruction
Flexibility Not flexible, Flexible, new
register, decoder activates one of the eight output lines.
difficult to modify machine
 This activated line, in turn, generates control signals and for new instructions can
the address of the next microinstruction to be executed. instruction. easily be added.

 This address is once again fed to the CMAR resulting in Ability to handle Difficult Easier

ns e
activation of another control line and address field. Complex
instructions
 This cycle is repeated till the execution of the instruction

io dg
is achieved. Design process Complicated Systematic

 For example, as shown below, if the machine instruction Decoding and Complex Easy
under execution causes the decoder to have an entry sequencing logic

at le
address for a machine instruction in control memory at
line 000. The decoder activates the lines in the sequence
Applications

Instruction set size


RISC p

Small
CISC p

Large
ic w
given below :
Control memory Absent Present
Line Control signal Address of next
Chip area required Less More
bl no

activated generated microinstruction

000 C0, C2, C4, C5 001

001 C1, C3 010


Review Questions
Pu K

010 C0, C1, C3 011


Q. 1 Explain structural components of a computer.
011 C2, C4, C5 2
Q. 2 With neat block diagram explain von neumann
ch

 On execution of microinstruction at address 011, architecture of a computer.


address of the next microinstruction depends on the Q. 3 With neat block diagram explain harvard
external condition. architecture.
Te

 If the condition is true then the address 101 will be Q. 4 With neat block diagram explain general
selected else the address 110 will be selected. architecture of a microprocessor.
9.8.2 Comparison between Hardwired and Q. 5 Exlpain detailed instruction cycle.
Micro-programmed Control :
Q. 6 List and explain applications of microprogramming.
Micro- Q. 7 What are the different methods to implement
Hardwired
Attribute programmed hardwired control unit explain state table method?
Control
Control
Q. 8 What are the different methods to implement
Speed Fast Slow
softwired control unit?
Cost of More Cheaper
implementation Q. 9 Differentiate between hardwired and micro-

Implementation Sequential circuit Programming programmed control unit.


approach






Unit 5

Chapter

10
ns e
io dg
at le Processor Instructions &
ic w
Processor Enhancements
bl no
Pu K

Syllabus
ch

Instruction : Elements of machine instruction; Instruction representation (Opcode and mnemonics,


Assembly language elements) ; Instruction format and 0-1-2-3 address formats, Types of operands,
Addressing modes; Instruction types based on operations (Functions and examples of each); Key
characteristics of RISC and CISC; Interrupt : Its purpose, Types, Classes and interrupt handling (ISR,
Multiple interrupts), Exceptions; Instruction pipelining (Operation and speed up)
Te

Multiprocessor systems : Taxonomy of Parallel Processor Architectures, Two types of MIMD clusters and
SMP (Organization and benefits) and multicore processor (Various alternatives and advantages of
multicores), Typical features of multicore intel core i7.
Case Study : 8086 Assembly language programming.

Chapter Contents
10.1 Instruction Encoding Format 10.7 Pipeline Processing
10.2 Instruction Format and 0-1-2-3 Address 10.8 Instruction Pipelining and Pipelining Stages
Formats

10.3 Addressing Modes 10.9 Pipeline Hazards


10.4 Instruction Set of 8085 10.10 Multiprocessor Systems and Multicore Processor
(Intel Core i7 Processor)

10.5 Reduced Instruction Set Computer 10.11 Flynn’s Classifications


Principles

10.6 Polling and Interrupts




 To understand more clearly, let's observe Fig. 10.1.1


10.1 Instruction Encoding Format :
carefully.
 In 8085 we have variety in instructions; therefore all 1. Normally first bits is OPCODING byte.
instructions will not be of same size. nd
2. 2 bits normally specifies addresing mode.
 The instructions vary from 1 to 6 bytes in length. (Remember MOD and R/M). Sometime it may also
 The obvious question is, what these bytes will contain contain OPCODING part.
and on what parameter the length of the instruction 3. After OPCODING and addresing mode bytes, we

ns e
bytes is decided ? The length of instruction bytes is have following different cases :
dependent upon addresing mode used by programmer.
(a) No additional bytes (Figs. 10.1.1(a), (b), (c) and

io dg
i.e. immediate, register, register relative, based indexed,
(d)).
relative based indexed and so on.
(b) A 2 bits EA (for direct addresing mode
Basically instruction bytes will contain information of :
(Fig. 10.1.1(e)).
(1)

(2)
at le
OPCODE

Addressing mode designations :


(c) A 1 or 2 bits immediate operand (Fig. 10.1.1(f)).

(d) A 1 or 2 bits displacement followed by 1 or 2


ic w
(a) 2 bits Effective Address. bits immediate operand (Fig. 10.1.1(g)).
(b) 1 or 2 bits displacement. 4. If a displacement or immediate operand is 2 bytes
bl no

(c) 1 or 2 bits immediate operand. long, the low order bits always appears first, this is
Intel standard (same was followed by 8085 also).
Pu K
ch
Te

Fig. 10.1.1 : Summary of 8085 instruction format




 To remember these formats, I will give you only a single format; from that we get all these different formats, refer Fig. 10.1.2.
 As shown in Fig. 10.1.2, the first six bits of a multibyte instruction generally contain an opcode that identifies the basic instruction type i.e. ADD, XOR etc.
Fig. 10.1.2 : Instruction format
 The following bit, called the D field, generally specifies the direction of the operation.
D = 1 means the instruction destination is specified in the REG field.
D = 0 means the instruction source is specified in the REG field.
 The next bit is W. This bit distinguishes between byte and word operation.
W = 0 Instruction operates on byte data.
  = 1 Instruction operates on word data.
 Refer Fig. 10.1.2; you will observe that in some cases the 2nd byte has MOD, OPCODE and R/M, while in other cases it has MOD, REG and R/M. First we will concentrate on the OPCODE bits in the 2nd byte of the instruction format. This field is 3 bits wide. Under that we have three single bit fields : S, V and Z.
S bit :
 An 8 bit 2's complement number can be extended to a 16 bit 2's complement number by letting all of the bits in the high order byte equal the MSB of the low order byte. This is referred to as sign extension.
 The S bit is used in conjunction with W to indicate sign extension of immediate fields in arithmetic instructions.
S = 0 No sign extension
  = 1 Sign extend 8 bit immediate data to 16 bits if W = 1.
Therefore for an 8 bit operation : S = W = 0
For a 16 bit operation with a 16 bit immediate operand : S = 0, W = 1
For a 16 bit operation with a sign extended 8 bit immediate operand : S = W = 1
V bit :
 Used by shift and rotate instructions, to distinguish between single-bit and variable-bit shifts and rotates.
V = 0 shift/rotate count is one
  = 1 shift/rotate count is specified in the CL register.
Z bit :
 This bit is used as a compare bit with the zero flag in conditional repeat (REP) and loop instructions.
Z = 0 Repeat/loop while zero flag is clear.
  = 1 Repeat/loop while zero flag is set.
Refer Table 10.1.1; it summarizes all 5 bits used in the OPCODE field.
Table 10.1.1 : 5 bits used in the OPCODE field
Field   Value   Function
D       0       Instruction source is specified in the REG field
        1       Instruction destination is specified in the REG field
S       0       No sign extension
        1       Sign extend 8-bit immediate data to 16 bits if W = 1
W       0       Instruction operates on byte data
        1       Instruction operates on word data
V       0       Shift/rotate count is one
        1       Shift/rotate count is specified in the CL register
Z       0       Repeat/loop while zero flag is clear
        1       Repeat/loop while zero flag is set
 Now concentrate on the MOD, R/M and REG fields in the 2nd byte of the instruction format. The second byte of the instruction usually identifies the instruction's operands.
MOD :
 The mode (MOD) field indicates whether one of the operands is in memory or whether both operands are registers. Table 10.1.2 shows the MOD field encoding; this field is of size 2 bits.
Table 10.1.2 : MOD field ENCODING
CODE    EXPLANATION
00      Memory mode, no displacement follows *
01      Memory mode, 8 bit displacement follows

10      Memory mode, 16 bit displacement follows
11      Register mode (no displacement)
 * Except when R/M = 110; then a 16 bit displacement follows. As seen, MOD is basically concerned with the displacement, i.e. 8 bit, 16 bit or no displacement.
REG :
 The Register (REG) field identifies a register that is one of the instruction operands. The REG field depends upon the W bit. Table 10.1.3 shows the selection of register(s) depending upon the W bit.
Table 10.1.3 : REG (Register) field encoding
REG     W = 0   W = 1
000     AL      AX
001     CL      CX
010     DL      DX
011     BL      BX
100     AH      SP
101     CH      BP
110     DH      SI
111     BH      DI
 When W = 0, all 8 bit registers are selected, whereas for W = 1 all 16 bit registers are selected.
 Thus in a number of instructions, mainly in the immediate to memory variety, REG is used as an extension of the OPCODE to identify the type of operation, i.e. 8 bit or 16 bit.
R/M : Register or memory : This field is 3 bits wide. The meaning of the R/M bits changes depending upon the mode (MOD) field.
 At this stage we have a general, clear idea about these three fields; now we will take some cases.
Case I : Register to register transfer
 In this operation, data movement is within the registers, either 8 bit or 16 bit. As mentioned, in this operation the REG field identifies ONE of the instruction operands. What about the other instruction operand ? It is specified by the R/M, W and MOD bits. Refer Table 10.1.4.
Table 10.1.4 : R/M field encoding when MOD = 11 (binary)
R/M     W = 0   W = 1
000     AL      AX
001     CL      CX
010     DL      DX
011     BL      BX
100     AH      SP
101     CH      BP
110     DH      SI
111     BH      DI
 You will find that Table 10.1.3 matches with Table 10.1.4. Secondly, when W = 0 you can select ONLY 8 bit source and destination operands. When W = 1, you can select ONLY 16 bit source and destination operands.
 The following example instructions are therefore not correct or valid :
MOV AH, CX      ; moving 16 bit to 8 bit (invalid)
MOV DX, BL      ; moving 8 bit to 16 bit (invalid)
 Thus any such combination with other register(s) is INVALID.
Case II : Memory MODE (8 bit/16 bit or no displacement)
 When MOD selects memory mode (MOD = 00 or 01 or 10), the data transfer is register to/from memory.
 In that case the R/M field indicates how the effective address of the memory operand is to be calculated. Now refer Table 10.1.5; it depicts the EA calculation.
Table 10.1.5 : R/M field encoding when MOD = 00/01/10
        MOD = 11                EFFECTIVE ADDRESS CALCULATION
R/M     W = 0   W = 1   R/M     MOD = 00        MOD = 01                MOD = 10
000     AL      AX      000     (BX) + (SI)     (BX) + (SI) + D8        (BX) + (SI) + D16
001     CL      CX      001     (BX) + (DI)     (BX) + (DI) + D8        (BX) + (DI) + D16
010     DL      DX      010     (BP) + (SI)     (BP) + (SI) + D8        (BP) + (SI) + D16
011     BL      BX      011     (BP) + (DI)     (BP) + (DI) + D8        (BP) + (DI) + D16
100     AH      SP      100     (SI)            (SI) + D8               (SI) + D16
101     CH      BP      101     (DI)            (DI) + D8               (DI) + D16
110     DH      SI      110     DIRECT ADDRESS  (BP) + D8               (BP) + D16
111     BH      DI      111     (BX)            (BX) + D8               (BX) + D16
D8 = 8 bit displacement         D16 = 16 bit displacement
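The effective-address rules of Table 10.1.5 can be captured in a few lines. This is only a rough sketch; the helper names and the sample register values are assumptions, not part of the 8086 itself.

# Sketch of the EA calculation for memory modes (MOD = 00/01/10).
R_M_BASE = {                      # base register combination selected by R/M
    0b000: ("BX", "SI"), 0b001: ("BX", "DI"),
    0b010: ("BP", "SI"), 0b011: ("BP", "DI"),
    0b100: ("SI",),      0b101: ("DI",),
    0b110: ("BP",),      0b111: ("BX",),
}

def effective_address(mod, rm, disp, regs):
    """EA for MOD = 00/01/10 (MOD = 11 selects a register, not memory)."""
    if mod == 0b00 and rm == 0b110:              # special case: 16-bit direct address
        return disp & 0xFFFF
    ea = sum(regs[r] for r in R_M_BASE[rm])
    if mod == 0b01:                              # 8-bit displacement, sign extended
        ea += disp - 0x100 if disp & 0x80 else disp
    elif mod == 0b10:                            # 16-bit displacement
        ea += disp & 0xFFFF
    return ea & 0xFFFF

regs = {"BX": 0x1000, "BP": 0x2000, "SI": 0x0030, "DI": 0x0040}   # assumed values
print(hex(effective_address(0b10, 0b000, 0x0200, regs)))   # (BX)+(SI)+D16 = 0x1230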

 The REG field in this case, as usual, identifies the register that is one of the instruction operands.

10.2 Instruction Format and 0-1-2-3 Address Formats :

10.2.1 Instruction Formats :
 The Control Unit and the ALU (Arithmetic and Logic Unit) along with some registers constitute the Central Processing Unit.
 Fig. 10.2.1 shows the basic components of the computer and their interconnection. The internal components of the CPU are also shown in Fig. 10.2.1.
 The computer consists of three basic components, namely the CPU, memory and I/O devices, connected with each other via the buses.
 Input devices are required to give the instructions and data to the system. The output devices are used to give out the results.
 The instructions and the data given by the input device are to be stored, and for storage we require memory.
Fig. 10.2.1 : Basic Components of the computer and the CPU
 The memory module receives address and control signals (like read, write, timing etc.) and also receives and sends data.
 The memory is assumed to be of 'n' locations with the addresses from 0 to n - 1.
 The CPU module reads instructions and data, writes data and also sends the control signals. It also receives interrupts from the I/O devices and acts accordingly.
 The CPU has the control unit that provides control signals to all the resources inside and outside the CPU. The CPU also has the ALU that performs the arithmetic and logical operations.
 The CPU also has some registers, as seen in Fig. 10.2.1. The register PC (Program Counter) is a register that always points to the next instruction to be executed. Hence the word "Program" in the name.
 It has to increment continually to point to the next instruction, which is done automatically. Hence the word "Counter" in the name.
 The MAR (Memory Address Register) is used to store the address to be provided to the memory.
 The MBR (Memory Buffer Register) is used to store the data to be given to or taken from the memory. Similarly, for I/O devices we have IOAR and IOBR.
 Another register, named the IR (Instruction Register), is used to store the instruction to be executed by the CPU.
 The use of these registers will be further seen in the next section, named Instruction Cycle.

10.2.2 Instruction Word Format - Number of Addresses :
Elements of an Instruction :
1. Operation code (Opcode) is that part of the instruction which gives the code for the operation to be performed.
2. Source Operand reference or address 1 gives the reference of the data on which the operation is to be performed. This address could be a register, memory or an input device.
3. Source Operand reference or address 2 gives the reference of the second data item on which the operation is to be performed. This address could again be a register, memory or an input device.
4. Result Operand reference gives the reference where the result after performing the operation is to be stored. The result could be stored in a register or memory, or given to an output device.
5. An instruction may have only one address with the other two fixed, or may have two addresses with one of the source operand addresses also serving as the result operand address. Hence the instruction can have one, two or three addresses.
Fig. 10.2.2 shows an example of a simple instruction format with one and two addresses.

Opcode | Operand Address 1
(a) Single address instruction format

Opcode | Operand Address 1 | Operand Address 2
(b) Two address instruction format
Fig. 10.2.2 : Instruction Word Formats

Three address, One address and Zero address instructions :
Zero address instructions :
PUSH A      ; ToS <- A
PUSH B      ; ToS <- B
ADD         ; ToS <- A + B
PUSH C      ; ToS <- C
PUSH D      ; ToS <- D
ADD         ; ToS <- C + D
MUL         ; ToS <- (A + B) * (C + D)
PUSH E      ; ToS <- E
PUSH F      ; ToS <- F
SUB         ; ToS <- (E - F)
DIV         ; ToS <- (A + B) * (C + D) / (E - F)
POP X       ; M[X] <- ToS
One address instructions :
LOAD A      ; AC <- M[A]
ADD B       ; AC <- AC + M[B]
STORE P     ; M[P] <- AC
LOAD C      ; AC <- M[C]
ADD D       ; AC <- AC + M[D]
MUL P       ; AC <- AC * M[P]       i.e. AC <- (A + B) * (C + D)
STORE P     ; M[P] <- AC
LOAD E      ; AC <- M[E]
SUB F       ; AC <- AC - M[F]
STORE Q     ; M[Q] <- AC
LOAD P      ; AC <- M[P]
DIV Q       ; AC <- AC / M[Q]       i.e. AC <- (A + B) * (C + D) / (E - F)
STORE X     ; M[X] <- AC            i.e. X <- (A + B) * (C + D) / (E - F)
Accumulator type one address format :
LOAD A      ; AC <- M[A]
MUL B       ; AC <- AC * M[B]
STORE P     ; M[P] <- AC
LOAD C      ; AC <- M[C]
MUL D       ; AC <- AC * M[D]
SUB E       ; AC <- AC - M[E]
ADD P       ; AC <- AC + M[P]       i.e. AC <- (A*B) + (C*D - E)
STORE P     ; M[P] <- AC
LOAD A      ; AC <- M[A]
ADD B       ; AC <- AC + M[B]
STORE Q     ; M[Q] <- AC
LOAD P      ; AC <- M[P]
DIV Q       ; AC <- AC / M[Q]       i.e. AC <- ((A*B) + (C*D - E)) / (A + B)
STORE X     ; M[X] <- AC            i.e. X <- ((A*B) + (C*D - E)) / (A + B)
Three address instructions :
MUL R1,A,B      ; R1 <- M[A] * M[B]
MUL R2,C,D      ; R2 <- M[C] * M[D]
SUB R2,R2,E     ; R2 <- R2 - M[E]
ADD R1,R1,R2    ; R1 <- R1 + R2     i.e. R1 <- (A*B) + (C*D - E)
ADD R2,A,B      ; R2 <- A + B
DIV X,R1,R2     ; M[X] <- R1 / R2   i.e. X <- ((A*B) + (C*D - E)) / (A + B)
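The zero-address sequence above can be checked with a minimal stack-machine sketch. The memory model, the sample values and the instruction tuples are illustrative assumptions, not a real processor's format.

# A minimal zero-address (stack machine) evaluator.
def run(program, memory):
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(memory[arg[0]])          # ToS <- M[arg]
        elif op == "POP":
            memory[arg[0]] = stack.pop()          # M[arg] <- ToS
        else:                                     # two operands, one result on ToS
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[op])
    return memory

mem = {"A": 2, "B": 3, "C": 4, "D": 5, "E": 9, "F": 4}    # assumed data
prog = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
        ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
        ("MUL",),
        ("PUSH", "E"), ("PUSH", "F"), ("SUB",),
        ("DIV",), ("POP", "X")]
run(prog, mem)
print(mem["X"])        # (2+3)*(4+5)/(9-4) = 9.0

The same expression could be traced through the one-address and three-address sequences above; the zero-address form simply keeps all intermediate results on the stack instead of in the accumulator or registers.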


10.3 Addressing Modes :

 We have seen in the previous section that an instruction has various components, among them the source operand reference.
 For the source operand reference we have various methods to specify it.
 Based on these different methods we have different addressing modes. In other words, addressing modes are the different methods (modes) to select (address) the operand for an instruction.
 The different addressing modes are :
1. Immediate            2. Direct
3. Indirect             4. Register
5. Register Indirect    6. Displacement (Indexed)
7. Relative             8. Stack

1. Immediate addressing mode :
 In this case the operand is part of the instruction. The operand is in the immediately next location after the opcode, hence the name.
 For example ADD AX, 0005H. This instruction adds the contents of register AX with the value 5. In this case 5 is an operand and is given in the instruction itself. The AX register is also called the accumulator register.
 The advantage of this addressing mode is that it is fast.
 The disadvantage is that it has a limited range.
 Fig. 10.3.1 shows the structure of an instruction and the operand access technique for the immediate addressing mode.
Fig. 10.3.1 : Immediate addressing mode
2. Direct addressing mode :
 In this case the address field contains the memory address of the operand.
 For example ADD AX, [0005H]. This instruction adds the contents of memory location 0005H to the accumulator. The operand is taken from the memory location specified in the instruction.
 In this case there is only a single memory reference to access data.
 The advantage is that there are no calculations needed to work out the effective address.
 The disadvantage is that this addressing mode can be used for a limited address space only.
 Fig. 10.3.2 shows the structure of the instruction and the operand access technique for the direct addressing mode.
Fig. 10.3.2 : Direct Addressing mode
3. Indirect addressing mode :
 In this case a memory location has the address of the operand in another memory location, i.e. the address field contains the address of (a pointer to) the operand.
 For example ADD AX, [[1000]]. This instruction adds the contents of the memory location pointed to by the contents of memory location 1000, with the contents of the accumulator, and stores the result in the accumulator.
 The disadvantage is that this method is slower, as multiple memory locations are to be accessed to get the operand.
 Fig. 10.3.3 shows the structure of an instruction and the operand access technique for the indirect addressing mode.
Fig. 10.3.3 : Indirect addressing mode
4. Register addressing mode :
 In this case the operand is held in the register named in the operand address field.
 There is a limited number of registers; hence a very small address field is required. Hence shorter instructions and faster instruction fetch.
 Another advantage is that there is no memory access. We can say that this is the best addressing mode in terms of the time required to access the operand.
 The only disadvantage of this method is the limited number of registers available in most of the processors.
 For example, ADD AX, BX. This instruction adds the contents of register AX with the contents of register BX and the result is stored in register AX.
 Fig. 10.3.4 shows the instruction structure and the method of access for the register addressing mode.
 As already discussed, the major advantage of this addressing mode is very fast execution, but a limited address space.
 Thus processors that have multiple registers help in improving the performance of the processor.
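The four modes discussed so far differ only in how many lookups are needed to reach the operand. The following sketch contrasts them; the toy memory image, register values and the helper name are assumptions made only for illustration.

# How the operand is located in the immediate, direct, indirect and register modes.
memory = {0x0005: 111, 0x1000: 0x0005}        # toy memory image (assumed)
registers = {"AX": 7, "BX": 42}               # assumed register contents

def operand(mode, field):
    if mode == "immediate":                   # operand is inside the instruction
        return field
    if mode == "direct":                      # one memory access
        return memory[field]
    if mode == "indirect":                    # two memory accesses
        return memory[memory[field]]
    if mode == "register":                    # no memory access at all
        return registers[field]

print(operand("immediate", 0x0005))   # 5
print(operand("direct",    0x0005))   # 111
print(operand("indirect",  0x1000))   # memory[memory[0x1000]] = memory[5] = 111
print(operand("register",  "BX"))     # 42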

Fig. 10.3.4 : Register addressing mode
5. Register indirect addressing :
 In this case the operand's memory address is pointed to by the contents of a register R.
 It requires one less memory access than the indirect addressing mode seen in point number 3 above.
 Fig. 10.3.5 shows the structure of the instruction and the way to access the operand for the register indirect addressing mode.
Fig. 10.3.5 : Register Indirect addressing mode
 As seen in Fig. 10.3.5, the instruction contains a register address field that selects the register holding the memory address of the operand to be accessed.
6. Displacement addressing mode :
 In this type of addressing mode, there are two address fields that hold the base address and the displacement. The base address is held by the register whose address is given in the instruction.
 The register address field in the instruction selects one of the registers that holds the address. This address is added to the displacement, thus giving the address of the memory location that has the operand.
 The major advantage of this type of addressing mode is that the pointer need not be continuously updated; instead the displacement can be given with the instruction itself.
 The disadvantage is that extra computation is required for the address calculation.
 The structure of the instruction and the access method for the displacement addressing mode are shown in Fig. 10.3.6.
Fig. 10.3.6 : Displacement addressing mode
7. Relative addressing mode :
 It is a version of displacement addressing where the register is the program counter, PC. The program counter is a register that always points to the next instruction to be executed by the processor and hence tells the processor which instruction it has to execute next.
 It is used for branching or transfer of control instructions. In this case the current value of the program counter is updated with the relative address specified in the instruction. This relative address is added to the current value of the program counter, hence the name relative addressing mode.
8. Stack addressing mode :
 This addressing mode is used to access the data from the top of the stack.
 It uses PUSH and POP instructions to access the stack. In this case the operand is implicitly on the top of the stack.
 For example, POP AX ; pops two bytes (one word) from the top of the stack into AX.

10.3.1 Examples on Addressing Modes :
Ex. 10.3.1 : An instruction is stored at location 300 with its address field at location 301; the address field has the value 400. A processor register R1 contains the value 200. Evaluate the effective address if the addressing mode of the instruction is :
(a) Direct  (b) Immediate  (c) Relative  (d) Register indirect  (e) Index with R1 as index register

Soln. :
(a) Direct :
In this case the instruction has the address field as 400, hence the effective address is 400.
(b) Immediate :
In this case the instruction has the address field as 400, hence the operand itself is 400. This operand is stored in the immediately next location after the instruction opcode. Since the instruction is stored at location 300, the operand is at location 301.
(c) Relative :
In this case the instruction has the address field as 400, which will be added to the value of the program counter. The instruction occupies locations 300 and 301, so during execution the program counter points to the next instruction at 302; hence the effective address will be 400 + 302 = 702.
(d) Register indirect :
In this case the register provides the address of the operand. Since the register R1 has the value 200, the effective address in this case will be 200.
(e) Index with R1 as index register :
In this case the address field value 400 will be added to the value of the register R1, which is 200. Hence the effective address will be 400 + 200 = 600.

Ex. 10.3.2 : A two address instruction is stored in memory at an address designated by the symbol W. The address field of the instruction (stored at W + 1) is designated by Y. The operand used during execution of the instruction is stored at the address symbolized by Z. An index register contains the value X. State how Z is calculated from the other addresses if the addressing mode of the instruction is :
(a) Direct  (b) Indirect  (c) Relative  (d) Indexed
Soln. :
(a) Direct :
In this case the instruction has the address field as Y, hence the effective address is Y. Thus Z = Y.
(b) Indirect :
In this case the instruction has the address field as Y, hence the operand is at the address which is stored at location Y, i.e. the address field at W + 1, having the value Y, is actually the address of the address of the operand. Thus Z = [Y].
(c) Relative :
In this case the instruction has the address field as Y, which will be added to the value of the program counter. Thus Z = PC + Y.
(d) Indexed :
In this case the address field Y will be added to the value of the index register, which is X. Thus Z = Y + X.

10.4 Instruction Set of 8086 :
The instruction set of 8086/8088 is divided into a number of groups of functionally related instructions.
Different groups are :
1. Data transfer group.
2. Arithmetic group.
3. Bit manipulation group.
4. String instruction group.
5. Program transfer instruction group.
6. Process control instruction group.
 A graphical presentation of the different groups is shown in Fig. 10.4.1.
Fig. 10.4.1
Now we will start with the instruction set. The information presented is from the point of view of utility to the assembly language programmer. The information given is :
1. Mnemonic (Syntax of the instruction)
2. Algorithm
3. Operation of the instruction
4. Examples.

 While giving you the above information some typical symbols/labels are used. I feel that you should know the significance and meaning of those labels.
 Refer Table 10.4.1, which provides, for the instruction coding format, the different IDENTIFIERS, where each is used, and an explanation of the same.
Table 10.4.1 : Key to instruction coding formats
Identifier (Used in) : Explanation
Destination (data transfer, bit manipulation) : A register or memory location that may contain data operated on by the instruction, and which receives (is replaced by) the result of the operation.
Source (data transfer, arithmetic, bit manipulation) : A register, memory location or immediate value that is used in the operation, but is not altered by the instruction.
source-table (XLAT) : Name of the memory translation table addressed by register BX.
Target (JMP, CALL) : A label to which control is to be transferred directly, or a register or memory location whose content is the address of the location to which control is to be transferred indirectly.
short-label (Conditional transfer, iteration control) : A label to which control is to be conditionally transferred; must lie within -128 to +127 bytes of the first byte of the next instruction.
Accumulator (IN, OUT) : Register AX for word transfers, AL for bytes.
Port (IN, OUT) : An I/O port number; specified as an immediate value of 0-255, or register DX (which contains the port number in the range 0-64K).
source-string (String operations) : Name of a string in memory that is addressed by register SI; used only to identify the string as byte or word and to specify a segment override, if any. This string is used in the operation, but is not altered.
dest-string (String operations) : Name of a string in memory that is addressed by register DI; used only to identify the string as byte or word. This string receives (is replaced by) the result of the operation.
Count (Shifts, rotates) : Specifies the number of bits to shift or rotate; written as the immediate value 1 or register CL (which contains the count in the range 0-255).
interrupt-type (INT) : Immediate value of 0-255 identifying the interrupt pointer number.
optional-pop-value (RET) : Number of bytes (0-64K, ordinarily an even number) to discard from the stack.
external-opcode (ESC) : Immediate value (0-63) that is encoded in the instruction for use by an external processor.
We also have some labels used in operand types. Refer Table 10.4.2 to understand the meaning of the same.
Table 10.4.2 : Key to operand types
Identifier : Explanation
(no operands) : No operands are written
Register : An 8- or 16-bit general register

Reg 16 : A 16-bit general register
Seg-reg : A segment register
Accumulator : Register AX or AL
Immediate : A constant in the range 0-FFFFH
immed8 : A constant in the range 0-FFH
Memory (1) : An 8- or 16-bit memory location
Mem8 (1) : An 8-bit memory location
Mem16 (1) : A 16-bit memory location
source-table : Name of a 256-byte translate table
source-string : Name of a string addressed by register SI
dest-string : Name of a string addressed by register DI
DX : Register DX
short-label : A label within -128 to +127 bytes of the end of the instruction
Near-label : A label in the current code segment
Far-label : A label in another code segment
Near-proc : A procedure in the current code segment
far-proc : A procedure in another code segment
Memptr16 (1) : A word containing the offset of the location in the current code segment to which control is to be transferred
Memptr32 (1) : A doubleword containing the offset and the segment base address of the location in another code segment to which control is to be transferred
regptr16 : A 16-bit general register containing the offset of the location in the current code segment to which control is to be transferred
Repeat : A string instruction repeat prefix.
(1) Any addressing mode (direct, register indirect, register relative, based indexed or relative based indexed) may be used.

10.5 Reduced Instruction Set Computer Principles :
 Sun SPARC is a RISC processor, while all the processors studied till now in this book were CISC processors. Hence, before studying the details of the Sun SPARC processor, we will see how RISC processors are different from CISC and also the special features of RISC processors.

10.5.1 RISC Versus CISC :
Sr. No.  Properties                                   RISC                                        CISC
1.       Number of Instructions                       Less                                        More
2.       Addressing Modes                             Less                                        More
3.       Instruction Formats                          Less                                        More
4.       Instruction Size                             Fixed                                       Variable
5.       Control Unit                                 Hardwired                                   Microprogrammed
6.       Number of cycles to execute an instruction   Single CPU cycle (for at least 80% of       Multiple CPU cycles
                                                      instructions)
7.       Control Logic and Decoding Subsystem         Simple                                      Complex
8.       Pipelining                                   Large number of pipeline stages             Difficulty in efficient implementation
9.       Design time and Probability of Design        Smaller time and less probable              Long time and significant probability
         Errors
10.      Complexity of Compiler                       Simpler                                     More complex, and the results of "optimization" may not be the most efficient and fastest machine language code
11.      HLL instructions                             Supported                                   Not supported

10.5.2 RISC Properties :
A RISC system must satisfy the following properties :
1. Single-cycle execution of all (or at least 80 percent of) instructions.
2. Single-word standard length of all instructions.
3. Small number of instructions (<= 128).
4. Small number of instruction formats (<= 4).
5. Small number of addressing modes (<= 4).
6. Memory access possible by load and store instructions only.

7. All operations, except load and store, are register to register, i.e. within the CPU.
8. It must have a hardwired control unit.
9. It must also have a relatively large (at least 32) general-purpose CPU register file.

10.5.3 Register Window :
1. Since there is a huge number of registers in a RISC processor, they can be useful in saving the latency period during a procedure call. Parameter passing is possible through the register window. This policy also allows reasonable HLL support in RISC designs.
2. The register file is subdivided into groups of registers, called register windows.
3. A certain group of 'i' registers, say R0 to R(i-1), is designated as global registers. The global registers are accessible to all procedures running on the system at all times.
4. On the other hand, each procedure is assigned a separate window within the register file, which is not accessible to other procedures.
5. The window base (the first register within the window) is pointed to by a field called the current window pointer (CWP), located in the CPU's status register (SR).
Fig. 10.5.1 : Simple Non-overlapping Register Window
6. If the currently running procedure is assigned the register window J, hence taking up registers K, K+1, ..., K+W-1 (where W is the number of registers per window), the CWP contains the value J, hence pointing to the base of window J. If the next procedure to execute takes up window J+1, the value in the CWP field will be incremented accordingly to point to J+1.
7. Register windowing can work more efficiently for parameter passing between calling and called procedures by partial overlapping of the windows. The last N registers of window J will be the first N registers of window J+1.
8. If the procedure taking up window J calls a procedure, which will be assigned the next window J+1, it can pass N parameters to the called procedure by placing their values into registers (K+W-N) to (K+W-1). The same registers will automatically be available to the called procedure without any further movement of data.

10.5.4 Miscellaneous Features or Advantages of RISC Systems :
1. HLL support :
(a) The support for High Level Language (HLL) features is mandatory in the design of any computing system.
(b) The procedure call-return and parameter passing are the most time-consuming operations in typical HLL programs.
(c) HLL support is provided in RISC machines by efficiently supporting the handling of local variables, constants and procedure calls, while leaving less frequent HLL operations to instruction sequences and subroutines.
(d) One of the mechanisms supporting the handling of procedures, and their parameter passing in particular, is the register window feature discussed in section 10.5.3.
2. Implementation of register windows :
(a) A CISC processor's control unit takes up a large percentage of the chip area, leaving very little space for other subsystems and basically not permitting a large register file, which is needed for an efficient implementation of windowing.
(b) A RISC processor's control unit makes up a much smaller percentage of the chip area, yielding the necessary space for a large register file.
(c) Hence the implementation of register windowing (which requires a huge number of registers) is possible in RISC processors.
3. Pipelining :
(a) Pipelining was used on various CISC systems even before the RISC approach became popular.
(b) But a streamlined RISC can handle pipelines more efficiently.
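A rough sketch of the overlapping register-window mapping described in section 10.5.3 is given below. The window size, overlap and number of globals are assumed values chosen only to make the overlap visible; they are not taken from any particular RISC processor.

# Overlapping register windows: G globals shared by all procedures, windows of
# W registers per procedure, with the last N registers of window j reused as
# the first N (non-global) registers of window j+1.
G, W, N = 8, 16, 4                          # illustrative sizes

def window(j):
    """Physical register numbers visible to the procedure holding window j."""
    base = G + j * (W - N)                  # windows advance by W - N registers
    return list(range(G)) + list(range(base, base + W))

w0, w1 = window(0), window(1)
print(w0[-N:])          # [20, 21, 22, 23]  <- caller's last N registers
print(w1[G:G + N])      # [20, 21, 22, 23]  <- same physical registers in the callee

Parameters written into those shared registers by the caller are therefore visible to the callee without any copying, which is exactly the point made in item 8 above.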

4. Delayed branch :
(a) The problem occurs in a system where instructions are prefetched, right after a branch.
(b) If the branch is conditional, and the condition is not satisfied, then the next instruction, which was prefetched, is executed; since no branch is to be performed, no time is lost.
(c) But if the branch condition is satisfied, or the branch is unconditional, the next prefetched instruction is to be flushed and the instruction pointed to by the branch address is to be fetched in its place. The time required to prefetch the flushed instruction is wasted.
(d) Such waste of time is avoided by using the delayed branch approach.
(e) In this approach, the instructions are reshuffled such that the reordering does not change the result.
(f) A successful branch is assumed and the execution of the branch is delayed until the already prefetched instructions are executed. Hence no time is lost and there is no change in the intended program operation.
(g) But the compiler has to take care that the instructions placed after the branch instruction are to be executed irrespective of whether the branch is taken or not.
5. Scoreboarding :
(a) Another problem in instruction pipelines is that of data dependency.
(b) The data put in some register by instruction 1 may be required by instruction 2; and before the value in the register is available, instruction 2 may be ready for execution, yielding a possibly incorrect result.
(c) A method used to solve this problem is called scoreboarding.
(d) A special CPU control register, i.e. the scoreboard register, is required for this purpose.
(e) If there are 32 registers, a scoreboard register 32 bits long will be required; each of its bits represents one of the 32 CPU registers.
(f) If register 'i' is involved as a destination in the execution of instruction 1, bit 'i' in the scoreboard register is set; and as long as bit 'i' is set, any subsequent instruction in the pipeline will be prevented from using Ri in any way until bit 'i' is cleared.
(g) Such an instruction will be executed as soon as the execution of the instruction which caused bit 'i' to be set is completed.
6. Dual split cache :
Another feature, implemented not only in CISC but also in RISC systems, is separate data and code caches, or a split cache.
7. Instruction Level Parallelism (ILP) :
Superscalar and superpipelined designs are also mostly implemented in RISC designs.
8. VLSI realization :
(a) The chip area dedicated to the realization of the control unit is considerably less. Therefore, on a RISC VLSI chip, there is more area available for other features (cache, FPU, part of the main memory, memory management unit, I/O ports, etc.).
(b) As a result of the considerable reduction of the control area, a large number of CPU registers can fit on-chip.
(c) By reducing the control area on the VLSI chip and filling the area with numerous identical registers, the regularization factor of the chip (defined as the ratio of the chip area utilized by other features to the chip space required by the control unit) increases. The higher the regularization factor, the lower the VLSI design cost.
9. The computing speed :
(a) A simpler and smaller control unit in RISC requires fewer gates. This results in shorter propagation paths for control unit signals, decreasing the delay time for control signals and hence yielding faster operation.
(b) A significantly reduced number of instructions, formats and addressing modes results in a simpler and smaller decoding system, resulting in faster decoding operation.
(c) A hardwired control system can hence be implemented, with a reduced control unit that will in general be faster than a microprogrammed control unit.
(d) A relatively large CPU register file also reduces CPU-memory traffic to fetch and store data operands.
(e) A large register set can also be used to store parameters to be passed from a calling to a called procedure, or to store the information of a process that was interrupted by another.
10. Design cost and reliability considerations :
(a) It takes a shorter time to complete the design of a RISC control unit, because of the smaller instruction set, fixed

instruction length and fewer instruction formats, thus contributing to the reduction in the overall design cost.
(b) A simpler and smaller control unit will obviously have a reduced number of design errors and, therefore, higher reliability.

10.5.5 RISC Shortcomings :
1. Since a RISC has a small number of instructions, a number of functions performed on a CISC by a single instruction will need more instructions on a RISC. Hence, RISC code will be longer.
2. More memory will have to be allocated for RISC programs, and the instruction traffic between the memory and the CPU will be increased.

10.5.6 On-Chip Register File Versus Cache Evaluation :
Modern CISCs have an on-chip cache to compensate for the registers of RISC processors. But let us evaluate the advantages of a register file over an on-chip cache.
Sr. No.  CACHE                                                         CPU Register file
1.       Addressed as locations in memory - long addresses.            Separate register addressing - short addresses.
2.       Has to be tens of Kbytes to be effective.                     About 128 registers (of 4 bytes each, i.e. 512 bytes) will have a significant effect on performance.
3.       Information loaded in units of lines (blocks).                Information can be loaded individually into each register.
4.       Slower access (effective address calculation, virtual to      Faster access.
         physical address translation).
5.       Information loaded based on prefetch and replacement          The user can load any information at any time.
         policies.
6.       Inaccessible by the user.                                     Fully accessible by the user.
7.       Possibility of a miss.                                        No miss is possible.

10.6 Polling and Interrupts :
 Whenever more than one I/O device is connected to a microprocessor based system, any one of the I/O devices may ask for service at any time.
 There are two methods by which the microprocessor can service these I/O devices. One method is to use a polling routine, while the other method employs interrupts.
 In the polling routine the microprocessor checks whether any of the I/O devices is requesting service.
 The polling routine is a simple program that keeps checking for the occurrence of service requests.
For e.g. : Let us assume that our polling routine is servicing I/O ports 1, 2, 3, ..., 8. The polling routine will check the status of the I/O ports in a proper sequence.
 The polling routine will first transfer the status of I/O port 1 to the accumulator. It then checks the contents of the accumulator to determine if the service request bit is set.
 If the bit is set then the I/O port 1 service routine is called, otherwise the polling routine moves forward to check if port 2 is requesting service.
 On completion of the service to port 1, the polling routine will test port 2. The process is repeated till all the 8 ports are tested and all the I/O ports that are demanding service are processed.
 On completion of the polling routine, the microprocessor will resume the execution of the program. Fig. 10.6.1 shows the sequence for the polling routine.
 The polling routine has priorities assigned to the different I/O devices. Once the routine begins, port 1 will always be checked first, then port 2 and so on.
 Another way that allows the microprocessor to stop the execution of the program and give service to the I/O devices is the interrupt.
 It is an external asynchronous input that informs the microprocessor to complete the instruction that it is currently executing and fetch a new routine in order to offer service to the I/O device. Once the I/O device is serviced, the microprocessor will continue with the execution of its normal program.
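The polling sequence just described (and drawn in Fig. 10.6.1 below) can be sketched in a few lines. The status-reading function is a stand-in for an IN instruction, and the assumption that bit 0 is the service-request bit is purely illustrative.

# A minimal polling-routine sketch for 8 ports, tested in priority order.
def read_status(port):                  # placeholder for reading a port's status register
    return 0x01 if port in (3, 6) else 0x00     # assume ports 3 and 6 are requesting

def service(port):
    print(f"servicing port {port}")

def poll():
    # Ports are tested in a fixed order, so port 1 implicitly has the highest
    # priority, port 2 the next, and so on.
    for port in range(1, 9):
        if read_status(port) & 0x01:    # service-request bit set?
            service(port)
    # after all 8 ports are checked, control returns to the main program

poll()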

Fig. 10.6.1 : Polling sequence

10.7 Pipeline Processing :
 A processor has many resources like the ALU, buses, registers, etc. An attempt to utilize all these resources to their fullest, or continuously, can be achieved by pipelining.
 In a pipelined system the instructions flow through the processor as if the processor were a pipe.
 The instructions move from one stage to another to accomplish the assigned operation. Hence, most of the time, each unit of the processor is busy handling one or the other instruction, so the resources of the processor are used continuously.
 This chapter deals with advanced pipelining and superscalar design in processor development. We will go through the concepts and design issues of linear and non-linear pipelining. We will also discuss collision-free scheduling techniques for performing dynamic functions.
 Techniques to design instruction pipelines and arithmetic pipelines are also discussed.

10.7.1 Non-Pipelined System Versus Two Stage Pipelining :
 In a non-pipelined system, the processor fetches an instruction from memory, decodes it to determine what the instruction is, reads the instruction's inputs from the register file, performs the computation required by the instruction and writes the result back into the register file. This approach is also called the unpipelined approach.
 The problem with this approach is that the hardware needed to perform each of these steps (instruction fetch, instruction decode, register read, instruction execution and register write-back) is different, and most of the hardware is idle at any given moment, waiting for the other parts of the processor to complete their part of executing an instruction.
 Pipelining is a technique for overlapping the execution of several instructions to reduce the execution time of a set of instructions.
 Two stage pipelining includes two stages, i.e. Fetch instruction and Execute instruction.
 These two operations are performed for one instruction and the next instruction in an overlapping fashion, i.e. when the first instruction is being executed the next is fetched, and when this instruction is executed the next is fetched, and so on.
 This method of executing the instructions in a pipeline speeds up the processor operation.
 This also makes sure that all the units of the processor are busy operating and none of them is starving.
 Thus with the help of pipelining the operation speed of the processor increases, i.e. the more the number of pipeline stages, the faster the processor becomes, but the more complex it is to design.
 The two stage instruction pipeline is as shown in Fig. 10.7.1(a).

Fig. 10.7.1(a) : Two stage pipeline architecture
(b) Timing diagram of execution of instructions in a non-pipelined system   (c) Timing diagram of execution of instructions in pipelined systems
Fig. 10.7.1
 In case of a system without pipelining, the time required for executing a set of instructions is much more than the time required for executing the same set of instructions in a pipelined system.
 The comparison of the execution of five instructions in a system with and without pipelining is shown in Figs. 10.7.1(b) and 10.7.1(c).
 You will notice that the time required for executing five instructions on a non-pipelined system is 10 clock pulses, while that on a two stage pipelined processor is 6 clock pulses. Thus the number of clock pulses required in a two stage processor will always be x/2 + 1, where 'x' is the number of clock pulses in the non-pipelined case and '2' is the number of stages.
 If we increase the number of instructions, we can reduce the expression to x/2 (since '1' is negligible for a huge number of instructions) clock pulses for a two stage pipelined processor, whereas 'x' clock pulses are needed in case of a non-pipelined processor.
 If we try to generalize this expression, we can write it as x/n, where x is the number of clock pulses required for the non-pipelined instructions and 'n' is the number of stages of the pipelined processor.
 Thus we can say that the speed-up achieved by a pipelined processor can be at most 'n' times that of the non-pipelined processor.

10.7.2 Basic Pipelined Datapath and Control for a Six Stage CPU Instruction Pipeline :
 The flowchart given in Fig. 10.7.2 gives the instruction flow through the six stage pipelined processor.
Fig. 10.7.2 : Six stage pipeline flowchart
This can be shown on a time scale as in Fig. 10.7.3.
Fig. 10.7.3 : Six stage pipelined processor timing diagram
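The clock-pulse counts discussed above are easy to verify numerically. The sketch below is just an illustration of the k + (n - 1) versus n*k comparison; the function names are mine.

# Clock pulses needed for n instructions with and without a k-stage pipeline.
def pulses_pipelined(n, k):
    return k + (n - 1)

def pulses_non_pipelined(n, k):
    return n * k

k = 2
for n in (5, 100, 100000):
    s = pulses_non_pipelined(n, k) / pulses_pipelined(n, k)
    print(n, pulses_non_pipelined(n, k), pulses_pipelined(n, k), round(s, 3))
# n = 5      : 10 vs 6 pulses  -> speedup 1.667
# n = 100000 : speedup approaches 2, i.e. the two-stage limit x/2

For larger k the same loop shows the speedup approaching k, which is the generalized x/n result stated above.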

 Pipelining is termed overlapped parallelism, as in case of pipelining the instructions are overlapped.
 The execution of the instructions is overlapped in such a manner that there are several instructions at different stages of processing in the pipelined processor, as shown in Fig. 10.7.3.
 As shown in Fig. 10.7.3, for e.g. during the 6th clock pulse there are six instructions in the processor.
Instruction :
1. has its result being written back,
2. is being executed,
3. has its operands being fetched,
4. has the address of its operands being calculated,
5. is being decoded, and
6. is being fetched.
 This shows how pipelining is an overlapped parallelism of instructions.
 This six stage pipeline system can be implemented with six units as shown in Fig. 10.7.4.
Fig. 10.7.4 : Six stage pipelined architecture
 In pipelining, when a branch instruction is executed the system causes a huge waste of time, i.e. processor units starving. The instructions in the pipeline are the sequential instructions.
 If a branching instruction is given, the next instruction to be executed is not the sequential one; instead it is the instruction at the branch target.
 Hence the sequential instructions in the pipeline are to be cleared and instructions from the target are to be fetched. Clearing the sequential instructions from the pipeline is called flushing of the pipeline. This is as shown in the timing diagram in Fig. 10.7.5.
 These problems are discussed in detail in section 10.8. The solutions to the same are also discussed in that section.
Fig. 10.7.5 : Branch in a six-stage pipeline

Ex. 10.7.1 : Draw the space-time diagram for a six-segment pipeline showing the time it takes to process eight tasks.
Soln. :
                 Clock cycles
Segment   1   2   3   4   5   6   7   8   9   10  11  12  13
   1      T1  T2  T3  T4  T5  T6  T7  T8  -   -   -   -   -
   2      -   T1  T2  T3  T4  T5  T6  T7  T8  -   -   -   -
   3      -   -   T1  T2  T3  T4  T5  T6  T7  T8  -   -   -
   4      -   -   -   T1  T2  T3  T4  T5  T6  T7  T8  -   -
   5      -   -   -   -   T1  T2  T3  T4  T5  T6  T7  T8  -
   6      -   -   -   -   -   T1  T2  T3  T4  T5  T6  T7  T8
It takes 13 clock cycles to process 8 tasks.

10.7.3 Linear Pipeline Processors :
 In a linear pipelined processor there are k stages connected linearly to perform different operations. These may perform different operations to execute an instruction, perform arithmetic operations or memory access operations.
 In a linear pipelined processor with, say, k stages, the partially processed instructions are passed from stage i to stage i + 1, where i varies from 1 to k - 1. The linear pipeline can be either a synchronous system or an asynchronous system.

10.7.3.1 Asynchronous and Synchronous Linear Pipelining :
 In case of an asynchronous linear pipeline system, there is a set of handshaking signals between two adjacent stages. Whenever a stage (say stage i) completes its operation, it places the result on the input lines of the next stage (i.e. stage i + 1) and enables the ready (or strobe) signal.
 The next stage (i.e. stage i + 1), on completing its operation, accepts the data from its input lines and

indicates this to the previous stage (i.e. stage i) by giving an acknowledgement signal. On this, the stage which had placed the data (i.e. stage i) also checks its own input to see whether its previous stage (i.e. stage i - 1) has completed its operation and is ready with a result.
 It then repeats the same process as explained for stages i and i + 1. This can be explained as shown in Fig. 10.7.6.
Fig. 10.7.6 : Asynchronous linear pipelining system
 Hence the asynchronous linear pipelined system will have a variable throughput rate and will experience a different amount of delay at each stage.
 In case of a synchronous linear pipelined system the stages are separated by latches. Whenever a stage completes its part of the operation it stores the result in the latch.
 The clock signal is given synchronously to all the latches, such that on reception of the clock signal each stage takes the output of the latch connected to its input. This system is shown in Fig. 10.7.7.
Fig. 10.7.7 : Synchronous linear pipelining system
 The latches are in fact master-slave flipflops. The time required by each stage is expected to be equal; and it is this time that determines the clock period as well as the speed of the pipelined system.
 The utilization of the stages, or the utilization pattern of stages in a synchronous pipeline, can be represented by the reservation table.
Fig. 10.7.8 : Reservation table of a synchronous linear pipeline
 The reservation table follows a diagonal line for a synchronous linear pipeline, as shown in Fig. 10.7.8.
 A reservation table is a space-time diagram showing the streamline pattern. Hence, as seen in Fig. 10.7.8, for an n-stage pipeline, n clock pulses are required to execute one instruction.
 Once the pipeline is filled up completely, the processor completes one instruction execution every clock pulse.

10.7.3.2 Clocking and Timing Control :
 The clock cycle τ, as shown in Fig. 10.7.7, can be calculated as discussed below. Let τi be the delay time of stage Si. Hence the clock cycle time can be given as :
τ = max { τi : 1 <= i <= k } + d = τm + d
where τm is the maximum stage delay and d is, as shown in Fig. 10.7.7, the latch delay (the 'on' period of the clock pulse).
 The data from each stage is latched into the master flipflop of the latch register during the rising edge and passed to the slave flipflop during the falling edge. In fact τm >> d; hence we can say that τ is approximately τm.
Hence the pipeline frequency can be given as f = 1 / τ. This frequency f is also termed the throughput of the system, as it gives the rate at which instructions come out of the pipeline.
 The actual throughput of the pipeline may be less than the maximum throughput given by f, because more than one clock pulse may be required for the initiation of successive instructions.
 The initiation of successive instructions may take more clock pulses because of their data or control dependency.
 The clock pulse at each stage is expected to arrive simultaneously. But, because of the time delay of the clock distribution path, different stages get the pulse at different time offsets s; this problem is referred to as clock skewing.
 Assume the shortest logic path gets the clock at a delay of tmin and the longest logic path gets the clock pulse at a delay of tmax. To avoid this problem we need τm >= tmax + s and d <= tmin - s. Thus the clock period with skew satisfies :
d + tmax + s <= τ <= τm + tmin - s
 Hence in the ideal case, s = 0, tmax = τm and tmin = d. Hence, even with the clock skewing, τ = τm + d.
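The clock-period relation above can be turned into a two-line calculation. The per-stage delays and the latch delay below are assumed numbers, used only to show how τ and f come out.

# Clock period: the cycle must cover the slowest stage plus the latch delay.
def clock_period(stage_delays_ns, latch_delay_ns):
    return max(stage_delays_ns) + latch_delay_ns     # tau = tau_m + d

stage_delays = [9.0, 10.0, 8.5, 9.5]    # assumed per-stage delays (ns)
d = 1.0                                 # assumed latch ('on' period) delay (ns)
tau = clock_period(stage_delays, d)
f = 1.0 / (tau * 1e-9)                  # pipeline frequency = maximum throughput
print(tau, round(f / 1e6, 1))           # 11.0 ns  ->  about 90.9 MHz

Note that making one stage slower than the rest drags the whole pipeline down, which is why stage delays are balanced as far as possible.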

10.7.3.3 Speedup, Efficiency and Throughput :
 A linear pipeline of k stages will take k + (n - 1) clock cycles to execute n instructions; the first instruction will take k clock cycles and the remaining n - 1 instructions will take one clock cycle each (assuming there is no dependency between the instructions). Hence, with the clock cycle width being τ, the total time required to execute these n instructions will be
Tk = τ [ k + (n - 1) ]                                      ...(10.7.1)
 In an equivalent non-pipelined system, the time required to execute n instructions will be
T1 = n k τ                                                  ...(10.7.2)
 Thus the speedup factor of a k-stage pipelined system can be given as :
Sk = T1 / Tk = n k τ / ( [ k + (n - 1) ] τ ) = n k / [ k + (n - 1) ]        ...(10.7.3)
 Hence, as the number of instructions n increases, Sk tends to k. Thus a 'k' stage pipelined processor can be at most 'k' times faster than the corresponding non-pipelined processor. The speedup factor as a function of the number of instructions is shown in Fig. 10.7.9.
Fig. 10.7.9 : Relationship of speedup factor with the no. of operations
 There is also a limit to the number of stages. As the number of stages increases, the delay and skewing increase.
 Hence the finest pipelining or micro-pipelining, which subdivides the stages at almost gate level, should consider this optimal number of stages.
 Most systems have the number of stages varying from 2 to 15. Very few systems are designed to have more than 10 stages.
 This is because any increase in the number of stages should ensure that the PCR (performance to cost ratio) also increases. But beyond a particular number of stages, termed k0 (the optimal number of stages), the PCR starts reducing, as shown in Fig. 10.7.10.
Fig. 10.7.10 : Graph of PCR vs. k
 To execute a program on a sequential non-pipelined system, say the time required is 't'. Thus, to execute this program on a k-stage pipeline with equal flow-through delay, the time required per stage = t/k + d, where d is the latch delay.
Hence, f = 1 / (t/k + d)
The performance / cost ratio (PCR) can be defined as :
PCR = f / (c + kh) = 1 / [ (t/k + d)(c + kh) ]              ...(10.7.4)
where h is the cost of each latch, c is the cost of all the logic gates of a stage, d is the latch delay, f is the pipeline frequency, k is the number of stages and t is the total flow-through delay (so t/k is the delay of each stage). The optimal number of stages can be given as
k0 = sqrt( t c / (d h) )                                    ...(10.7.5)
The efficiency of such a system is defined as (using Equation (10.7.3))
Ek = Sk / k = n / [ k + (n - 1) ]                           ...(10.7.6)
The pipeline throughput Hk is defined as the number of operations performed per unit time.
Hk = Ek / τ = n / ( [ k + (n - 1) ] τ ) = n f / [ k + (n - 1) ]             ...(10.7.7)
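Equations (10.7.3) to (10.7.7) translate directly into a few helper functions, so the numbers in the worked examples that follow can be reproduced (up to rounding). The function and variable names below are mine, not from any standard library.

from math import sqrt

def speedup(n, k):                 # Sk = nk / (k + n - 1)
    return n * k / (k + n - 1)

def efficiency(n, k):              # Ek = Sk / k
    return speedup(n, k) / k

def throughput(n, k, f):           # Hk = nf / (k + n - 1), operations per second
    return n * f / (k + n - 1)

def optimal_stages(t, c, d, h):    # k0 = sqrt(tc / dh)
    return sqrt(t * c / (d * h))

# e.g. 15000 instructions on a 5-stage pipeline clocked at 25 MHz:
print(round(speedup(15000, 5), 4))                  # about 4.9987
print(round(efficiency(15000, 5), 5))               # about 0.99973
print(round(throughput(15000, 5, 25e6) / 1e6, 4))   # about 24.9933 MIPS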

Hence the maximum throughput, as discussed earlier, will be equal to f when the number of instructions n tends to infinity.

Ex. 10.7.2 : Consider the execution of a program of 15,000 instructions by a linear pipeline processor with a clock rate of 25 MHz. Assume that the instruction pipeline has five stages and that one instruction is issued per clock cycle. The penalties due to branch instructions and out-of-sequence executions are ignored.
(a) Calculate the speedup factor in using this pipeline to execute the program as compared with the use of an equivalent non-pipelined processor with an equal amount of flow-through delay.
(b) What are the efficiency and throughput of this pipelined processor ?
Soln. :
Given : n = 15000, f = 25 MHz, k = 5
Speedup factor (Sk) = nk / [k + (n - 1)] = (15000 x 5) / (5 + 14999) = 4.9987
Efficiency (Ek) = Sk / k = 0.99973
Throughput (Hk) = Ek / τ = f Ek = 25 MHz x 0.99973 = 24.9933 MIPS

Ex. 10.7.3 : A non-pipelined system takes 50 ns to process a task. The same task can be processed in a six stage pipeline with a clock cycle of 10 ns. Determine the speedup and the efficiency of the pipeline for 100 tasks. What is the maximum speedup and efficiency that can be achieved ?
Soln. :
Given : For the non-pipelined system : tn = 50 ns
For the pipelined system : k = 6, tp = 10 ns
Number of tasks n = 100
Speedup (Sk) = T1 / Tk = n tn / ( [k + (n - 1)] tp ) = (100 x 50) / (6 x 10 + 99 x 10) = 4.7619
Maximum speedup will occur when the number of tasks (n) is very large (n >> k). Hence, neglecting the term k - 1, we have Max Speedup = tn / tp = 50 / 10 = 5
Efficiency (Ek) = Sk / k = 4.7619 / 6 = 0.7936
Considering the max speedup, the max efficiency = 5 / 6 = 0.8333

10.7.4 Non Linear Pipeline Processors :
 A dynamic or multi-function pipeline is called a non-linear pipeline. In a linear pipeline the operations that are being performed are fixed; each stage has a fixed operation.
 But a non-linear pipeline allows feedforward and feedback connections in addition to the streamline connection. It may also have more than one output, i.e. the output need not come from the last stage. An example of a three stage non-linear pipeline system is shown in Fig. 10.7.11.
Fig. 10.7.11 : A 3-stage non linear pipeline
 In Fig. 10.7.11, besides the three stages connected in streamline, there are also some feedback and feedforward connections.
 The feedforward connection is from S1 to S3 and the feedback connections are from S3 to S2 and S3 to S1. The reservation table for such connections may be different for different operations.
 There are two examples of different operations, say X and Y, for which the reservation tables are shown in Fig. 10.7.12.
 Fig. 10.7.12 shows the requirement of different stages at different times for performing the corresponding operation. For e.g. the function X has first to be given to S1, then S2, then S3, then S2, then S3, then S1, then S3 and finally to S1, which will give the output.

        1   2   3   4   5   6   7   8
S1      X                   X       X
S2          X       X
S3              X       X       X
Fig. 10.7.12(a) : Reservation table for function X

        1   2   3   4   5   6
S1      Y               Y
S2              Y
S3          Y       Y       Y
Fig. 10.7.12(b) : Reservation table for function Y

 Similarly, the function Y is first given to S1, then S3, then S2, then S3, then S1 and finally to S3, which will give the output.
 The number of columns in a reservation table corresponds to the evaluation time of that function. Hence, from Fig. 10.7.12, function X has an evaluation time of 8 clock cycles while function Y has an evaluation time of 6 clock cycles.
 A pipeline initiation table consists of the different times at which the same function can be initiated next. The number of time units (clock cycles) required between two initiations of a function is called the latency period between them.
 Some valid latencies or latency sequences that do not cause any collision are shown in Fig. 10.7.13. The latencies that do not cause collision are called latency sequences, while latencies that cause collision are called forbidden latencies. Examples of forbidden latencies are shown in Fig. 10.7.14.

Fig. 10.7.13 : Example latencies of function X that do not cause collision :
(a) Latency cycle (1, 8) = 1, 8, 1, 8, 1, 8, ... (with an average latency of 4.5)
(b) Latency cycle (3) = 3, 3, 3, 3, ... (with an average latency of 3)
(c) Latency cycle (6) = 6, 6, 6, 6, ... (with an average latency of 6)

Fig. 10.7.14 : Example forbidden latencies, i.e. latencies that cause collision of function X :
(a) Collision with scheduling latency 2
(b) Collision with scheduling latency 5

 As shown in Fig. 10.7.13, forbidden latencies are 2 and 5. Besides, 4 and 7 are also forbidden latencies. To detect a forbidden latency, you need to check the distance between two marks in a row of the reservation table.
 For e.g. in case of function X, as shown in Fig. 10.7.12(a), the distance between two marks in S1 is 5 or 2. Average latency of a latency cycle is defined as the ratio of the sum of all latencies to the number of latencies along the cycle. For e.g. the (1, 8) latency cycle, as shown in Fig. 10.7.13(a), has an average latency of (1 + 8)/2 = 4.5.
 The average latency of a constant cycle, i.e. a latency cycle that contains only one latency value, is the same as that constant value. For e.g. the average latencies of the cycles (3) and (6), as shown in Figs. 10.7.13(b) and (c), are 3 and 6 respectively.

10.7.4.1 Collision Free Scheduling or Job Sequencing :

 As seen in non-linear pipelining, we cannot sequence the instructions (jobs) as in linear pipelining; else there would be collision.
 Hence we need to have an optimal job sequencing or scheduling technique. In non-linear pipelining, while scheduling, the main aim is to obtain the smallest average latency without any collision.
 This pipeline design theory requires the concepts of collision vectors, state diagrams, single cycles, greedy cycles and minimal average latency (MAL).

1. Collision vectors :
 As seen in the previous section, we can separate the permissible latencies and the forbidden latencies using the reservation table.
 For a reservation table with n columns, the maximum forbidden latency (m) should be ≤ n – 1. The permissible latency (p) must be as small as possible. An ideal case would be p = 1. This smallest latency, i.e. p = 1, is possible in linear pipelining, but in non-linear pipelining it is difficult to achieve.
 A collision vector is an 'm' bit binary vector (C = Cm Cm–1 … C2 C1) that shows the set of permissible and forbidden latencies. In a collision vector, a bit Ci is '1' if the latency i causes a collision, else it is '0'. For e.g., the reservation tables seen in Fig. 10.7.12 will have the collision vectors Cx = (1011010) and Cy = (1010). Thus for Cx there are permissible latencies of 1, 3 and 6, while the forbidden latencies are 7, 5, 4 and 2.

2. State Diagrams :
 From the collision vector, we can make the state diagram for the pipeline.
 The collision vector Cx obtained above is called the initial collision vector. When loaded in a register and shifted right, each bit at the output corresponds to an increase in latency.
 A '1' at the output indicates collision, while a '0' indicates no collision. A '0' is inserted from the left for every clock cycle. This can be implemented by a right shift register and OR gates, as shown in Fig. 10.7.15.

Fig. 10.7.15 : n-bit right shift register for state transition

 The state transition diagram can be constructed using this state register.
 The next state at time t + p, where p is the permissible latency and t is some number of clock pulses, is obtained by shifting the register p times and ORing the result with the initial collision vector.
 The state diagrams for the collision vectors Cx and Cy are as shown in Fig. 10.7.16.
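How the forbidden latencies and the initial collision vector follow mechanically from a reservation table, and how the shift-and-OR rule of Fig. 10.7.15 produces the next state, can be illustrated with the C sketch below. It is only an illustration (the array layout and names are assumed here); it uses the reservation table of function X from Fig. 10.7.12(a) and reproduces Cx = (1011010) and the transition 0001011 OR 1011010 = 1011011 discussed next.

#include <stdio.h>

#define STAGES 3
#define COLS   8          /* columns of the reservation table */

/* Reservation table of function X (Fig. 10.7.12(a)): 1 = stage in use. */
static const int X[STAGES][COLS] = {
    { 1, 0, 0, 0, 0, 1, 0, 1 },   /* S1 used in cycles 1, 6 and 8 */
    { 0, 1, 0, 1, 0, 0, 0, 0 },   /* S2 used in cycles 2 and 4    */
    { 0, 0, 1, 0, 1, 0, 1, 0 },   /* S3 used in cycles 3, 5 and 7 */
};

static void print_vector(unsigned v)
{
    for (int d = COLS - 2; d >= 0; d--)   /* print bits C7 ... C1 */
        printf("%d", (v >> d) & 1);
    printf("\n");
}

int main(void)
{
    unsigned cv = 0;      /* bit (d-1) set  <=>  latency d is forbidden */

    /* A latency is forbidden if some stage is marked in two columns
     * that are exactly that many cycles apart.                        */
    for (int s = 0; s < STAGES; s++)
        for (int i = 0; i < COLS; i++)
            for (int j = i + 1; j < COLS; j++)
                if (X[s][i] && X[s][j])
                    cv |= 1u << (j - i - 1);

    printf("Initial collision vector Cx = ");
    print_vector(cv);                      /* prints 1011010 */

    /* Next state for a permissible latency p (Fig. 10.7.15):
     * shift the current state right p times, then OR the initial vector. */
    unsigned p = 3;
    unsigned next = (cv >> p) | cv;
    printf("State after an initiation with latency %u = ", p);
    print_vector(next);                    /* prints 1011011 */
    return 0;
}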


(a) State diagram for collision vector Cx
(b) State diagram for collision vector Cy
Fig. 10.7.16 : State diagrams for the collision vectors Cx and Cy

 A transition example can be explained as below. For e.g. three bit-shifts of the initial collision vector of function X result in 0001011; this, when ORed with the initial collision vector, results in 1011011. The state diagram (Refer Fig. 10.7.16(a)) shows this transition for function X.
 If the number of shifts is greater than m, the next state is the same as the initial collision vector. For e.g. if the number of shifts is 8 or more in Fig. 10.7.16(a), the pipeline comes back to the initial collision state. This transition is denoted by 8+.

3. Single cycle and Greedy cycle :
 A single cycle is one in which any state appears not more than once. For example, for the function X state diagram shown in Fig. 10.7.16(a), the different single cycles are (3), (6), (8), (1, 8), (3, 8) and (6, 8).
 A cycle that travels more than once through the same state is a greedy cycle. Some greedy cycles in Fig. 10.7.16(a) are (1, 8, 3, 8), (1, 8, 6, 8), (3, 6, 3, 8, 6) etc.

4. Minimal Average Latency (MAL) :
We have already studied the minimal average latency in the previous sections. There are some bounds on this value of MAL. These bounds are listed below :
1. The lower bound of MAL is the maximum number of checkmarks in any row of the reservation table.
2. The MAL should be lower than or equal to the average latency of any greedy cycle in the state diagram.
3. The upper bound of MAL is equal to the number of 1s in the initial collision vector plus 1.

Ex. 10.7.4 : Consider the following pipeline reservation table.

            1   2   3   4
      S1    X           X
      S2        X
      S3            X

      (a) What are the forbidden latencies ?
      (b) Draw the state transition diagram.
      (c) List all simple cycles and greedy cycles.
      (d) Determine the optimal constant latency cycle and the minimal average latency.
      (e) Let the pipeline clock period be τ = 20 ns. Determine the throughput of this pipeline.

Soln. :
(a) Row S1 is the only row with two marks, and they are 3 clock cycles apart; hence the forbidden latency is 3 and the
    Collision Vector = (100)
[Timing diagrams : initiating a new task every cycle (constant cycle (1)) eventually brings two tasks x1, x2, x3, … into S1 in the same clock cycle, and initiating with latency 3 causes a collision in S1 directly.]
Hence the constant latency cycles (1) and (3) cause collisions and cannot be used for scheduling.

(b) State transition shift register

Fig. P. 10.7.4

Using the above shift register, we can generate the following state transition diagram.

Fig. P. 10.7.4(a)


(c) As seen in the state transition diagram,
    Simple cycles : (2), (4), (1, 4), (1, 1, 4), (2, 4) etc.
    Greedy cycles : (1, 4, 2, 4), (1, 1, 4, 2, 4) etc.
(d) Optimal constant latency : (2)
     Minimum average latency (MAL) = 2
(e) Throughput :
[Timing table : with initiations repeating every two cycles, stages S1, S2 and S3 serve tasks x1, x2, x3, … in a pattern that repeats.]
    As seen in the above table, one instruction is executed every two cycles.
     Throughput = (1/2) × f = (1/2) × (1 / 20 nsec) = 25 MIPS
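The by-hand search in Ex. 10.7.4 can also be checked programmatically. The C sketch below is only an illustration (the function constant_cycle_ok and the bit encoding of the collision vector are assumptions made here, not the book's notation): it walks the state machine of Fig. P. 10.7.4(a) for constant latency cycles and confirms that (2) is the smallest collision-free constant cycle, i.e. MAL = 2.

#include <stdio.h>

#define M 3                       /* largest possible forbidden latency          */
static const unsigned C0 = 0x4;   /* collision vector (100): only latency 3 set  */

/* Walk the state machine using a constant latency c.
 * Next state = (state >> c) OR C0; the cycle is collision-free if bit (c-1)
 * of the current state is never 1 when the next initiation is attempted.    */
static int constant_cycle_ok(unsigned c)
{
    unsigned state = C0;
    for (int step = 0; step < (1 << M) + 1; step++) {   /* states must repeat */
        if (c <= M && ((state >> (c - 1)) & 1))
            return 0;                                    /* collision          */
        state = (state >> c) | C0;
    }
    return 1;
}

int main(void)
{
    for (unsigned c = 1; c <= M + 1; c++)
        printf("constant latency cycle (%u): %s\n", c,
               constant_cycle_ok(c) ? "collision-free" : "collides");
    /* Output: (1) collides, (2) collision-free, (3) collides, (4) collision-free,
     * so the optimal constant latency is 2 and MAL = 2, as found above.        */
    return 0;
}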
Ex. 10.7.5 : A non-pipelined processor X has a clock rate of 25 MHz and an average CPI (cycles per instruction) of 4. Processor Y, an improved successor of X, is designed with a five-stage linear instruction pipeline. However, due to latch delay and clock skew effects, the clock rate of Y is only 20 MHz.
            (a) If a program containing 100 instructions is executed on both processors, what is the speedup of processor Y compared with that of processor X ?
            (b) Calculate the MIPS rate of each processor.
Soln. :
(a) The program has 100 instructions, i.e. n = 100
 Time taken by the non-pipelined (X) processor to execute this program : T1 = n × k × τ
 where  n = number of instructions = 100
        k = number of cycles per instruction = 4
        τ = clock width = 1 / 25 MHz
 T1 = 100 × 4 × (1 / 25 MHz) = 16 μsec
 For the 5-stage pipelined (Y) processor, the time required to execute the n instructions : Tk = [k + (n – 1)] × τ
 where  k = 5 stages
        n = 100 instructions
        τ = 1 / f = 1 / 20 MHz = 0.05 μsec
 Tk = [5 + (100 – 1)] × 0.05 μsec = 5.2 μsec
  Speedup = T1 / Tk = 16 μsec / 5.2 μsec = 3.07
(b) Non-pipelined processor (X) :
        Time taken      Instructions
        16 μsec         100
        1 sec           x
  MIPS rate (x) = 100 instructions / 16 μsec = 6.25 MIPS (Million instructions per second)
 Pipelined processor (Y) :
        Time taken      Instructions
        5.2 μsec        100
        1 sec           x
  MIPS rate (x) = 100 instructions / 5.2 μsec = 19.23 MIPS

Ex. 10.7.6 : Consider the following pipeline reservation table (the column positions of the marks are as in the original figure).

            0  1  2  3  4  5  6  7  8
      1     ✓ ✓
      2     ✓ ✓ ✓
      3     ✓
      4     ✓ ✓
      5     ✓ ✓

      1. Determine latencies in the forbidden list F and collision vector C.
      2. Draw the state transition diagram.
      3. List all simple cycles and greedy cycles.
      4. Determine MAL.
Soln. :
1. Forbidden latencies F = (1, 5, 6, 8) :
 Collision vector C = (10110001)


(Since the total number of tasks are 9, there are 8 bits in Soln. :
collision vector i.e. C8 to C1) 1. Forbidden latencies = (2, 4, 6)
2. State transition diagram : Collision vector C = (101010)
(Since the total number of tasks are 7, there are 6 bits in
the collision vector i.e. C6 to C1)

2. State transition diagram

ns e
io dg
3.
at le
Latency cycles :
Fig. P. 10.7.6
3. Latency cycles :
Fig. P. 10.7.7
ic w
(7), (1, 7), (3, 7), (3, 5), (5), (3, 7, 5, 3, 7), (5, 3, 7)
(7), (2, 7), (2, 2, 7), (4, 7), (4, 3), (3, 7), (4, 3, 4, 7), (9), (2, 9),
Simple cycles :
(4, 9) (3, 9)
bl no

(7), (1, 7), (3, 7), (3, 5), (5), (5, 3, 7)


Simple cycles :
Greedy cycles : (3, 7, 5, 3, 7)
(7), (2, 7), (4, 7), (4, 3), (3, 7), (9), (2, 9), (4, 9), (3, 9)
4. Optimal constant latency : (1, 7) or (3, 5)
Pu K

Greedy cycles : (2, 2, 7), (4, 3, 4, 7) 1+7


 Minimal average latency (MAL) = =4
2
4. Optimal control latency : (4, 3)
ch

4+3 5. Since MAL = 4, 1 instruction will be executed every 4


 Minimum average latency (MAL) = = 3.5
2 cycles.
Ex. 10.7.7 : Consider the following pipeline reservation 1 1 1
 Throughput = f= *
Te

table. 4 4 20 nsecs
1 1
Clock cycle (··· frequency =  = )
0 1 2 3 4 5 6 20 nsecs
Stage
1
S1    = = 25 MIPS
70 nsecs
S2  
Ex. 10.7.8 : For a unifunction pipeline, the forbidden set of
S3  
latencies is as given below.
1. Determine latencies in Forbidden list F
F = {1, 3, 6} with the largest forbidden
and collision vector C
latency = 6
2. Draw the state transistor diagram
1. Obtain collision vector
3. List all simple cycles and greedy cycles
2. Draw the state diagram
4. Determine minimum average latency
3. State all simple and greedy cycles
(MAL)
4. Obtain MAL
5. For a pipeline clock period  = 20 ns.
Soln. :
Determine maximum throughput of the
1. Collision vector (C) = (100101)
pipeline.


2. Ex. 10.7.10 : For the following reservation table, determine


collision vector state transition diagram and
MAL.

1 2 3 4 5
S1  
S2  

Fig. P. 10.7.8 S3 
3. Latencies : (5), (4), (4, 5), (2, 5), (2, 2, 5), (7), (2, 7), Also find the throughput for  = 25 nsecs

ns e
(4, 7) Soln. :
Simple latencies : (5), (4), (4, 5), (2, 5), (7), (2, 7)

io dg
1. Collision vector (C) = (0110)
Greedy latency : (2, 2, 5) 2. State transition diagram : (Refer Fig. Ex. 10.7.10)
4. Optimal latency : (2, 2, 5) 3. Latencies : (4), (1,4)
2+2+5 Simple latencies : (4), (1,4)
 Minimal average latency (MAL) = =3

Note : at le 3

Optimal latency is the latency that gives minimal


average i.e. the average as minimum.
4.
Greedy latencies
Optimal latency
:
:
NIL
(1,4)
ic w
Ex. 10.7.9 : For the following reservation table, determine
bl no

collision vector, state transition diagram and


MAL.
Clock cycle
1 2 3 4 5 6 7 8
Stage Fig. P. 10.7.10
Pu K

S1   1+4
Minimal average latency (MAL) = = 2.5
2
S2   
ch

S3    5. Since MAL = 2.5, 1 instruction takes 2.5 clock pulses


1 1 1
S4   Throughput = f=  = 16 MIPS
2.5 2.5 25 nsec
Also find the throughput for  = 10 nsec
Te

Ex. 10.7.11 : A certain pipeline with the four stages S1, S2,
Soln. : S3 and S4 is characterized by the following
Table P. 10.7.11.
Table P. 10.7.11

Fig. P. 10.7.9 t0 t1 t2 t3 t4 t5 t6

1. Collision vector (C) = (0011111) S1 X X


2. State transition diagram : (Refer Fig. Ex. 10.7.9) S2 X X
3. Latencies : (6), (7)
S3 X X
Simple latencies : (6), (7)
S4 X X
Greedy latencies : NIL
1. Determine the latencies in the forbidden
4. Optimal latency : (6)
list F and the collision vector C.
Minimal average latency (MAL) = 6
2. Determine the minimum constant latency
5. Since MAL = 6, 1 instruction takes 6 clock pulses
L by checking the forbidden list
1 1 1
  Throughput = f =  3. Draw the state diagram for this pipeline
6 6 10 nsec
and determine MAL.
= 16.67 MIPS


Soln. : instructions can be executed is increased by


1. Forbidden latencies = (2, 4, 5) Overlapping Instruction Execution.
Collision vector C = (011010)
 Latency is the amount of time that a single operation
2. State transition diagram
takes to execute.

 Throughput is the rate at which operations get


executed.
In a non-pipelined processor,
1
Throughput =

ns e
Latency
[expressed as operations/second or operations/cycles]

io dg
In a pipelined processor,
Fig. P. 10.7.11
1
3. Latency cycle : (6), (1, 6), (3), (3, 6), (3, 3, 6), Throughput 
Latency
(6, 1, 6), (3, 6, 6)  Pipelining : To implement pipelining, designers divide a

4.
at le
Simple cycles : (6), (1, 6), (3), (3, 6)
Greedy cycles : (3, 3, 6), (6, 1, 6), (3, 6, 6)
Optimal constant latency : (3)
processor’s data path into sections called stages and
place pipeline latches between each section.
ic w
 As shown in Fig. 10.8.1, at start of each cycle, the
Minimum average latency (MAL) = 3 pipeline latches read their inputs and copy them to their
outputs.
10.8 Instruction Pipelining and Pipelining
bl no

Stages :

 Instruction pipelining is a technique for overlapping the


execution of several instructions to reduce the execution
Pu K

time of a set of instructions.

 Generally, the processor fetches an instruction from


ch

memory, decodes it to determine what the instruction


was, read the instructions inputs from the register file,
performs the computation required by the instruction
Te

and writes the result back into the register file. This
approach is called unpipelined approach.

 The problem with this approach is that, the hardware


needed to perform each of these steps (Instruction fetch,
instruction decode, register read, instruction execution
and register write-back) is different and most of the
hardware is idle at any given moment. Waiting for the
other parts of the processor to complete their part of
executing an instruction. Fig. 10.8.1

 Pipelining is a technique for overlapping the execution  The amount of data path that a signal travels through in
of several instructions to reduce the execution time of a one cycle is called a stage of the pipeline.
set of instructions. A five-stage pipeline is shown in Fig. 10.8.1(b).
 Each instruction takes the same amount of time to Stage 1 : Fetch block
execute in a pipelined processor as it would in a Stage 2 : Decode block
non-pipelined processor, but the rate at which
 Stage 3, 4, 5 are subsequent blocks in execution process.


 Fig. 10.8.2 shows the Instruction flow in a pipelined pipelining technique if the time taken for each
processor. stage is 20 ns.
Cycle Soln. : n = 100 instruction, K = 5,  = 20 ns
1 2 3 4 5 6 7 8 9 10 Execution time pipelined = (5 + 100 – 1)  

(Instruction) = (5 + 99)  20 ns
Pipeline I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 = 2080 ns.
IF
stages Execution time unpipelined = (K ) n = 5  20 ns  100
DE I1 I2 I3 I4 I5 I6 I7 I8 I9
= 10000 ns.
FO I1 I2 I3 I4 I5 I6 I7 I8
10000

ns e
EX I1 I2 I3 I4 I5 I6 I7 Speedup ratio is = = 4.80 times.
2080
WB I1 I2 I3 I4 I5 I6

io dg
10.9 Pipeline Hazards :
I1 : executed in 5th cycle
I2 : executed in 6th cycle  Pipelining increases processor performance by
I3 : executed in 7th cycle increasing instruction throughput, because several
instructions are overlapped in the pipeline, cycle time

at le Fig. 10.8.2

Cycle time of a pipelined processor is calculated as


can be reduced, increasing the rate at which instructions
execute.
ic w
Cycle time (unpipelined)  Instruction Hazards (dependencies) occur when
Cycle time (pipelined) = instructions read or write registers that are used by other
Number of pipeline stages
instructions. The type of conflicts are divided into three
bl no

+ Pipeline latch latency


categories :
 As, the number of pipeline stages increases, the pipeline
1. Structural hazards (resource conflicts)
latch latency increases which in turn limits the benefit of
dividing a processor into a very large number of 2. Data hazards (Data dependency conflicts)
Pu K

pipelining stages. 3. Branch difficulties (Control hazards)

Ex. 10.8.1 : An unpipelined processor has a cycle time of


ch

25 ns. What is the cycle time of a pipelined 1. Structural hazards (Resource conflicts) :
version of the processor with 5 evenly divided  These hazards are caused by access to memory by two
pipeline stages, if each pipeline latch has a instructions at the same time. These conflicts can be
latency of 1 ns?
slightly resolved by using separate instruction and data
Te

Soln. : memories.
Cycle time unpipelined
Cycle time pipelined =  Structural hazards occur when the processor’s hardware
Number of stages of pipeline
is not capable of executing all the instructions in the
+ Pipeline latch latency
pipeline simultaneously.
25 ns
= + 1 ns = 6 ns.  Structural hazards within a single pipeline are rare on
5
modern processors because the Instruction Set
To find the speedup of the execution process in a
architecture is designed to support pipelining.
pipelined processor,
2. Data hazards (Data dependency) :
Execution time pipelined = (K + n – 1) 
 This hazard arises when an instruction depends on the
Execution time unpipelined = (K )  n
result of a previous instruction, but this result is not yet
Where, n = Number of instructions available.
 = Time taken for each stage  These are divided into four categories :
K = Number of stages in pipeline 1. RAW – Hazard (Read after write Hazard)

Ex. 10.8.2 : If a processor executes 100 instructions in a 2. RAR – Hazard (Read after read Hazard)
pipelined (5 stage) processor and unpipelined 3. WAW – Hazard (Write after write Hazard)
processor. What is the speedup achieved by 4. WAR – Hazard (Write after read Hazard)


RAR Hazard : should fetch from, it consumes some time and also
RAR Hazard occurs when two instructions both read some time is required to flush the pipeline and fetch
instructions from target location. This time wasted is
from the same register. This hazard does not cause a
called as branch penalty.
problem for the processor because reading a register does
not change the register’s value. Therefore, two instructions 10.9.1 Methods to Resolve the Data Hazards
that have RAR Hazard can execute on successive cycles. and Advances in Pipelining :
Example 1 : Instructions having RAR Hazard. The methods used to resolve the data hazards are
ADD r1, r2, r3 discussed in the following sub sections.

ns e
SUB r4, r5, r3 Both Instructions read r3, creating RAR
10.9.1.1 Pipeline Stalls :

io dg
RAW Hazard :
 The hardware inserts a special instruction called (NOP)
This hazard occurs when an instruction reads a register i.e. no operation instruction known as a bubble into the
that was written by a previous instruction. These are also flow of execution stage of pipeline to resolve the RAW
called as data dependencies (or) true dependencies. hazard between two instructions.

ADD
at le
Example 2 : Instructions having RAW - Hazard.
r1, r2, r3


This method is also called as hardware interlocks.

This approach detects the hazard and maintains the


ic w
Subtract reads the output of the RAW hazard program sequence by introducing delays to resolve the
addition creating data hazards (RAW).
bl no

SUB r4, r5, r1


10.9.1.2 Operand Forwarding (or) Bypassing :
WAR and WAW are also called as name dependencies.
These hazards occur when the output register of an  This technique uses a special hardware to detect a
instruction has been either read or written by a previous
Pu K

conflict and then avoid it by routing the data through


instruction.
special paths between pipeline stages.
If the processor executes instructions in the order that
ch

they appear in the program and uses the same pipeline for  Example of a RAW dependency exists between two
all instructions, WAR and WAW hazards do not cause any instructions, instead of transferring an ALU result into a
problem in execution process. destination register, the hardware checks the destination
Example 3 : Instruction having WAR Hazard.
Te

operand, and if it is needed as a source in the next


ADD r1, r2, r3 instructions, it passes the result directly into the ALU
SUB r2, r5, r6 input; bypassing the register file.
WAR Hazard
 This method requires additional hardware paths through
Example 4 : Instructions having WAW Hazard
MUX (Multiplexers).
ADD r1, r2, r3
 In this case there is a multiplexer between the two
SUB r1, r5, r6
stages, wherein the data required by the next instruction
is forwarded.
WAW Hazard
 Let us see this with a program example.
3. Branch Hazards :
 If we take the following sequence of instructions,
 Branch instructions, particularly conditional branch
instructions, create data dependencies between the ADD r1, r2, r3
branch instruction and the previous instruction, fetch
SUB r4, r1, r3
stage of the pipeline.
 We will notice that the destination of instruction 1 is the
 Since the branch instruction computes the address of
the next instruction that the instruction fetch stage source for instruction 2.


ns e
Fig. 10.9.1 : Operand forwarding mechanism in pipelining for resolving a data hazard

io dg
 In this case the ALU stage or the execution stage of the language program, it detects the data dependencies and
pipeline will forward the data to the next instruction as re-orders the instructions.
shown in the Fig. 10.9.1. Fig. 10.9.1 assumes that the  If necessary to delay the loading of the conflicting data
system is 6 stage pipelined system. it inserts no-operation instruction (NOP).

at le
As shown in Fig. 10.9.1 the data i.e. the value of register
r1 is passed from the first instruction to the second
10.9.2 Handling of Branch Instructions to
Resolve Control Hazards :
ic w
instruction. Actually the value of the register r1 is
 The methods used to resolve the control hazards are
updated by the write operand stage of the first
discussed in the following sub sections.
instruction. But, before that the same is required by the
bl no

execute stage of second instruction. Hence the value of 10.9.2.1 Pre-Fetch Target Instruction :
this register is passed to the second instruction.  One way of handling a conditional branch is to prefetch
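The check that the forwarding hardware performs can be expressed in a few lines of C. The sketch below is an illustration only — the structure Instr and the register numbering are assumptions, not the book's exact pipeline model — and it flags the RAW dependence between ADD r1, r2, r3 and SUB r4, r1, r3 discussed above.

#include <stdio.h>

/* Illustrative model of operand forwarding for a RAW hazard: if the
 * instruction ahead in the pipeline writes a register that the current
 * instruction reads, route the ALU result back to the ALU input instead
 * of waiting for the register write-back.                               */
typedef struct {
    int writes_reg;    /* destination register number, -1 if none */
    int reads_reg1;    /* first source register                   */
    int reads_reg2;    /* second source register                  */
} Instr;

/* Returns 1 if 'curr' must take a forwarded operand from 'prev'. */
static int needs_forwarding(const Instr *prev, const Instr *curr)
{
    return prev->writes_reg != -1 &&
           (prev->writes_reg == curr->reads_reg1 ||
            prev->writes_reg == curr->reads_reg2);
}

int main(void)
{
    Instr add = { /*writes*/1, /*reads*/2, 3 };   /* ADD r1, r2, r3 */
    Instr sub = { /*writes*/4, /*reads*/1, 3 };   /* SUB r4, r1, r3 */

    if (needs_forwarding(&add, &sub))
        printf("forward r%d from the ALU output of ADD to SUB\n",
               add.writes_reg);
    else
        printf("no bypass needed\n");
    return 0;
}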
10.9.1.3 Dynamic Instruction Scheduling (or) the target instruction in addition to the instruction
Out-Of-Order (OOO) Execution :
Pu K

following the branch. Both are saved until the branch is


 This is another interesting and very widely used executed.
technique because of the speed up given by it. It is used
ch

 If the branch condition is successful, the pipeline


in Pentium IV processor. continues from the branch target instructions else
 Here the execution of the instructions of a program is sequential instructions are executed.
done out-of-order i.e. not in the sequence as the 10.9.2.2 Branch Target Buffer (BTB) :
Te

instructions were written by the programmer. As and


 The BTB is an associative memory included in the fetch
when the resources of an instruction are available, the
segment of the pipeline.
execution of that instruction is done. If, for an instruction
the resources are not available, it is kept in waiting state  Each entry in BTB consists of the address of a previously
and the further instructions whose resources are executed branch instructions and the target instruction
available will be executed. for that branch. It also stores the next few instructions
after the branch target instructions. When the pipeline
 But, you would think that this approach will have a
decodes a branch instruction, it searches the associative
problem. The logic implemented by the programmer will
memory BTB for the address of the instruction.
not be followed properly i.e. wrong sequence of
instructions will be executed.  If it is in the BTB, the instruction is available directly and
prefetch continues from the new path.
 The answer to this is that, although the instructions are
executed out-of-order, but the write-back is done in  If the instruction is not in BTB, pipeline shifts to a new
order, and hence the final result of the program is in instruction streams and stores the target instruction.
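The associative lookup performed by the BTB can be pictured with a small table in C. The sketch below is purely illustrative — the table size, field names and the indexing by low address bits are assumptions made here, not the book's or any processor's actual design.

#include <stdint.h>
#include <stdio.h>

/* Toy Branch Target Buffer: a small table indexed by the low bits of the
 * branch address; a hit returns the stored target so the fetch stage can
 * continue from the predicted path without waiting for the branch.       */
#define BTB_ENTRIES 16

typedef struct {
    uint32_t branch_addr;   /* address of a previously seen branch */
    uint32_t target_addr;   /* its target instruction address      */
    int      valid;
} BtbEntry;

static BtbEntry btb[BTB_ENTRIES];

static void btb_update(uint32_t branch, uint32_t target)
{
    BtbEntry *e = &btb[(branch >> 2) % BTB_ENTRIES];
    e->branch_addr = branch;
    e->target_addr = target;
    e->valid = 1;
}

/* Returns 1 on a hit and writes the predicted fetch address. */
static int btb_lookup(uint32_t branch, uint32_t *next_fetch)
{
    BtbEntry *e = &btb[(branch >> 2) % BTB_ENTRIES];
    if (e->valid && e->branch_addr == branch) {
        *next_fetch = e->target_addr;
        return 1;
    }
    return 0;
}

int main(void)
{
    uint32_t next;
    btb_update(0x1000, 0x2040);               /* branch at 0x1000 -> 0x2040 */
    if (btb_lookup(0x1000, &next))
        printf("BTB hit, fetch from 0x%X\n", next);
    return 0;
}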

sequence. 10.9.2.3 Loop Buffer :


 The compiler is designed in such a way that, while  This is like a BTB, but is a high speed register file
translating from high-level language to machine maintained by the instruction fetch segment of the


pipeline. When a program loop is detected in the Label Instructions


program, it is stored in the loop buffer. MOV AX,0000H
 The program loop can be executed directly without ADD AX,[SI]
having to access memory until loop mode is removed by INC SI
final branching out. ADD AX,[SI]
10.9.2.4 Branch Prediction : INC SI
ADD AX,[SI]
 A pipeline with branch prediction uses additional logic
INC SI

ns e
to speculate the outcome of a conditional branch
ADD AX,[SI]
instruction before it is executed.

io dg
INC SI
 The pipeline then begins pre-fetching the instructions
ADD AX,[SI]
stream from the predicted path.

 A correct prediction eliminates the wasted time caused  Thus you will notice that we have unrolled the loop, and
written the loop for the number of times it was to be

 at le
by branch penalties.

A detailed operation of branch prediction logic will be


discussed in the later sections of this chapter.
repeated. This totally removes the pipeline stalls due to
the loops.
ic w
 The advantage clearly seen of this method is that there
10.9.2.5 Pipeline Stall (Delayed Branch) :
is no scope for pipeline stall and hence the performance
bl no

Complier detects branch instruction and rearranges the will increase.


machine language code sequence by inserting useful  The major disadvantage of this method is that the
instructions and rearranges the code sequence to reduce the memory required will be more as the loop has to be
delays incurred by Branch Instruction. unrolled and stored in memory. In our example the loop
Pu K

10.9.2.6 Loop Unrolling Technique : was to be repeated for only 5 times, but if the loop was
larger and had to be repeated for say 100 or 1000 times,
ch

 This is a very superb solution to handle the stalls due to the memory consumed would be very huge.
branching in loops.
 In this case a code which has a loop that has to be
10.9.2.7 Software Scheduling or Software
Pipelining :
executed multiple times, will be actually stored multiple
Te

times (or unrolled) so as to remove the need of  In case of software pipelining, the iterations of a loop of
branching. the source program are continuously initiated at regular
 Let us see how this can be implemented with an intervals, before the earlier iterations complete. Thus
example. taking advantage of the parallelism in data path.
 If there is a code for adding an array of 5 numbers, the
 It can be said that software scheduling, schedules the
loop can be written as shown in the code below (using
operations within a loop, such that an iteration of the
processor 8086) :
loop can be pipelined to yield optimal performance.
Label Instructions
 The sequences of the instructions before steady state
MOV AX,0000H
are called as PROLOG, while the ones after the steady
MOV CX,0005H
state are called as EPILOG.
AGAIN : ADD AX,[SI]
 Let us see this with an example. Suppose the source
INC SI code is
DEC CX for(i=0;i<=n-1;i++)
JNZ AGAIN a[i]=a[i]+10;
 This loop can be unrolled to avoid stalling as shown  When this loop is executed by a processor, the
below : processor will do the following:


for(i=0;i<=n-1;i++) {
{ Store a[i-2];
Load a[i]; Add a[i-1]+10;
Add a[i]+10; Load a[i]
Store a[i]; }
} Store a[n-2];
 Here you will notice that the three instructions inside Add a[n-1]+10;
the loop (in each iteration) are the same i.e. each of the Store a[n-1]

ns e
three instructions have to operate on the data a[i].  Thus, you will notice inside the for loop i.e. for each
 When this is converted to pipeline, it will look as shown iteration, each of the three instructions are working on

io dg
in the Fig. 10.9.2. But the three instructions, one below different data and hence are not dependent on each
the other are dependent and hence cannot be pipelined. other and hence allowing pipelining of the three
But the instructions that are circled can be pipelined. instructions without any hazards.
Load

at le
a[0]

Add Load a[1]


10.9.2.8 Trace Scheduling :

 In a general pipelining, the instructions are scheduled in


sequence. This results in a problem or hazard on a
ic w
a[0]+10
branching instruction as discussed in the previous
Store Add Load a[2] section. Trace scheduling is a good solution to avoid
bl no

a[0] a[1]+10 hazard due to branching. Let us see how this can be
Store Add a[2]+10 Load a[3] implemented.
a[1]  In this case the probability of branch to be taken or not
taken is found. Based on this the code is written with all
Pu K

Store a[2] Add Load


instructions in sequence, such that no branching will be
a[3]+10 a[4]
required for most of the times according to the
ch

Store a[3] Add Load probability calculated earlier. This code is called as the
a[4]+10 a[5] trace.
Store Add  The other blocks of code are made for less probable
a[4] a[5]+10 cases i.e. if branching is taken. Hence this trace code and
Te

the other blocks of code are written, with minimizing


Store
branches. Let us see this with a program example.
a[5]
Suppose the source code is:
Fig. 10.9.2 : Software pipelining
if(a[i]==0)
 You will notice in the Fig. 10.9.2, that the three a[i]=a[i]+10;
instructions that are circled are store, add and load. else a[i]=a[i]+1;
These instructions are always independent i.e. they have x[i]=x[i]*x[i];
different data to operate on.
 The number is to be squared in the above code. If the
 For example in the first circle: Store a[0], Add a[1]+10 number is zero then 10 is to be squared and stored
and Load a[2] is performed. Each of these instructions is there, while if it is any other number its incremented
using different data. value is to be squared and stored in the same place.
When this is converted to assembly program, it will look
 Thus the code can be changed to the following :
as shown below :
Load a[0]
Label Instruction
Add a[0]+10
Load a[1] Load a[i] into say AL

for(i=2;i<=n-3;i++) Compare AL with 0


Label Instruction 10.9.2.9 Predicated Execution :


If not equal to zero then branch to label over  This is also a method that removes the branches. Here
Add AL with 10 each instruction has a predicate that decides whether
the instruction is to be executed or not. If the predicate
Branch to label next
is true then the instruction is executed, else it is not
over: Increment AL
executed. The predicate is a condition bit. If the bit is '1'
next: Multiply AL with itself then the instruction is to be executed else it is not to be
Store the result in a[i] executed.

ns e
 Each instruction has the operands and a predicate. This
This can be divided into four blocks as shown in the
removes the branching instructions and hence the stall

io dg
Fig. 10.9.3.
of pipeline.
 An example of predicate instruction is given below,
CMOVZ AX, BX, CX.

at le  This instruction copies the contents of register BX into


register AX, if the predicate register CX is zero. Else the
contents of BX are not copied into AX.
ic w
 Predication mainly implements the if-else statement and
hence the branching required for if-else is removed. It
bl no

can remove the branching required for all the


instructions.
 Hence we can say that predication totally removes the
Fig. 10.9.3 : Division of Blocks of the code need of handling the branches in a pipelined system.
Pu K

The only disadvantage of predication is that the


 Since the path 1-2-4 is the most probable path, this will
instruction size increases.
ch

make the trace. The path 1-3-4 will be a separate block.


 Predication is used in IA-64 processors of Intel, ARM
This is shown in the Fig. 10.9.4. processor.
 Hence in most of the cases i.e. 75% cases the trace will
10.9.2.10 Speculative Loading :
be executed and hence no branching will be required.
Te

Although in 25% cases we will need to branch to  In this case the data is brought from the memory, well
Block 1, but there would be only one branching and not before it is needed.

multiple branching as required in the previous case.  The compiler indicates the data that will be required in
the later parts of the program and the corresponding
Trace :
data is brought and kept in the processor.
Load a[i] into say AL  This removes the latency of memory accesses required
Compare AL with 0 for the data to be brought from the memory.
If not equal to zero then  As the data required later is speculated and brought in
branch to label over advance it is called as speculative loading of data.
Block 1
Add AL with 10
Over : Increment AL 10.9.2.11 Register Tagging :
Multiply AL with itself
Store the result in a[i] Multiply AL with itself  Register tagging is normally done by a unit called as
Store the result in a[i] Reservation Station (RS) in a processor.
 This reservation station is used in order to resolve the
data or resource conflicts amongst the multiple
Fig. 10.9.4 instructions entering the processor.


 The operands are made to wait in the reservation station  The number of speculative instructions in the instruction
until their data dependencies are resolved. window or the reorder buffer. Typically only a limited
 A tag is used to identify each reservation station, and number of instructions can be removed each cycle.
the tag unit keeps on monitoring these reservation  Misprediction is expensive (11 or more cycles in the
stations. Pentium II).
 This tag unit also monitors all the registers used 10.9.3.2 Static Branch Prediction :
currently or the reservation stations. This technique is
called as register tagging.  Static Branch Prediction predicts always the same
 This mechanism allows to resolve the register conflicts direction for the same branch during the whole program

ns e
and hence the resultant data hazards. execution.

io dg
 The reservation stations can also be used as buffers  It comprises hardware-fixed prediction and compiler-
between the various stages of pipeline in the processor. directed prediction.
These stages can work simultaneously once the conflict
 Simple hardware-fixed direction mechanisms can be :
is resolved.
(a) Predict always not taken


at le
10.9.3 Branch Prediction :
Branch prediction foretells the outcome of conditional
branch instructions. Excellent branch handling
(b)

(c)
Predict always taken

Backward branch predict to be taken, forward


ic w
techniques are essential for today's and for future branch predict not to be taken
microprocessors. Requirements of high performance  Sometimes a bit in the branch opcode allows the
bl no

branch handling : compiler to decide the prediction direction.


 An early determination of the branch outcome (the
10.9.3.3 Branch-Target Buffer or Branch-
so-called branch resolution), Target Address Cache :
 Buffering of the branch target address in a BTAC (Branch
Pu K

The Branch Target Buffer (BTB) or Branch-Target


Target Address Cache),
Address Cache (BTAC) stores branch and jump addresses,
 An excellent branch predictor (i.e. branch prediction
ch

their target addresses, and optionally prediction information.


technique) and speculative execution mechanism.
The BTB is accessed during the IF stage.
 Often another branch is predicted while a previous
Branch
branch is still unresolved, so the processor must be able Target address Prediction bits
address
Te

to pursue two or more speculation levels, and


 An efficient rerolling mechanism when a branch is
mispredicted (minimizing the branch misprediction
penalty).

10.9.3.1 Misprediction Penalty : … … …

 The performance of branch prediction depends on the


prediction accuracy and the cost of misprediction. Fig. 10.9.5
Misprediction penalty depends on many organizational
10.9.3.4 Dynamic Branch Prediction :
features :
 The pipeline length (favouring shorter pipelines over  The hardware influences the prediction while execution
longer pipelines), proceeds. Prediction is decided on the computation
 The overall organization of the pipeline, history of the program.

 Whether mis-speculated instructions can be removed from internal buffers, or have to be executed and can only be removed in the retire stage,


prediction gets more effective. In general, dynamic  Hence all the operations i.e. accessing the data or
branch prediction gives better results than static branch instructions from memory, accessing I/O devices and
prediction, but at the cost of increased hardware internal LAU operations can be done simultaneously. In
complexity. the 8086 processor, there are two separate units to
10.9.3.5 One-bit Dynamic Branch Predictor : perform the memory accesses and the LAU operations
named as Bus Interface Unit (BIU) and the Execution Unit
(EU).

10.11 Flynn’s Classifications :

ns e
10.11.1 Flynn’s Classification of Parallel

io dg
Computing :
Fig. 10.9.6
 A method introduced by Flynn, for classification of
 A one-bit predictor correctly predicts a branch at the parallel processors is most common. This classification is


at le
end of loop iteration, as long as the loop does not exist.

In nested loops, a one-bit prediction scheme will cause


based on the number of Instruction Streams (IS) and
Data Streams (DS) in the system. There may be single or
multiple streams of each of these. Hence accordingly,
ic w
two misprediction for the inner loop : Flynn classified the parallel processing into four
 One at the end of the loop, when the iteration exits the categories :
bl no

loop instead of looping again, and one when executing 1. Single Instruction Single Data (SISD)
the first loop iteration, when it predicts exit instead of 2. Single Instruction Multiple Data (SIMD)
looping.
3. Multiple Instruction Single Data (MISD)
 Such a double misprediction in nested loops is avoided 4. Multiple Instruction Multiple Data (MIMD)
Pu K

by a two-bit predictor scheme.


1. Single Instruction Single Data (SISD) :
10.9.3.6 Two-bit Prediction :
ch

 In this case there is a single processor that executes one


 A prediction must miss twice before it is changed when instruction at a time on single data stored in the
a two-bit prediction scheme is applied. memory.
Te

 In fact, this type of processing can be said to be unit


processing, hence unit processors fall into this category.

 Fig. 10.11.1 shows this type of system. You will notice


there is a Control Unit (CU) that accepts the instruction

Fig. 10.9.7
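A common realisation of the scheme of Fig. 10.9.7 is a two-bit saturating counter per branch. The C sketch below is a generic illustration of that idea; the initial state and the training sequence are assumed here, not taken from the figure.

#include <stdio.h>

/* Two-bit saturating counter: 0,1 predict not-taken; 2,3 predict taken.
 * The prediction only flips after two consecutive mispredictions.       */
static int counter = 2;                 /* start weakly "taken" */

static int predict(void)                /* 1 = predict taken */
{
    return counter >= 2;
}

static void train(int actually_taken)   /* update after the branch resolves */
{
    if (actually_taken && counter < 3) counter++;
    if (!actually_taken && counter > 0) counter--;
}

int main(void)
{
    /* A loop branch taken four times, then not taken once, repeated. */
    int outcomes[] = {1,1,1,1,0, 1,1,1,1,0};
    int hits = 0;

    for (int i = 0; i < 10; i++) {
        if (predict() == outcomes[i]) hits++;
        train(outcomes[i]);
    }
    printf("correct predictions: %d / 10\n", hits);   /* 8 / 10 here */
    return 0;
}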
from the processor and decodes it.

 The Processing Element (PE) accesses the data from the


10.10 Multiprocessor Systems and
memory and performs the operation on this data as per
Multicore Processor (Intel Core i7
the signal given by control unit.
Processor) :
 The Memory Module (MM) is connected to the PE and
10.10.1 Overlapping the CPU and Memory or the CU for the data and the instruction streams
I/O Operations :
respectively.
 This is a very basic parallelism implemented in the Intel’s
8086, wherein we had instructions prefetched from the
memory before they are to be executed. Also for I/O
operations a special dedicated I/O processor can be
Fig. 10.11.1 : SISD computer
connected.


2. Single Instruction Multiple Data (SIMD) :  This system is not used much, but can be used in cases
where in a data has to undergo many computations to
get the result for e.g. to add two floating point numbers.
Fig. 10.11.3 shows the implementation of such a system.
4. Multiple Instruction Multiple Data (MIMD) :

 This is a complete parallel processing example. Here


each processing element is having a different set of data
and different instructions.

ns e
 Examples of this kind of systems are SMPs (Symmetric
Fig. 10.11.2 : SIMD organization

io dg
Multiprocessors), clusters and NUMA (Non-Uniform
 In this case the same instruction is given to multiple
Memory Access). Fig. 10.11.4 shows the structure of such
processing elements, but different data.
a system.
 This kind of system is mainly used when many data

at le
(array of data) have to be operated with same operation.
Vector processors and array processors fall into this
category.
ic w
 Fig. 10.11.2 shows the structure of a SIMD system

3. Multiple Instruction Single Data (MISD) :


bl no

 In case of MISD, there are multiple instruction streams


and hence multiple control units to decode these
instructions.
Pu K

 Each control unit takes a different instruction from the


different memory module in the same memory. Fig. 10.11.4 : MIMD computer
ch

 The data stream is single. In this case the data is taken  Intel brought its mainstream desktop CPU lineup into
by the first processing element. the Nehalem era today with the launch of the Core i7
 This processing element performs an operation on the 860 and 870, and the Core i5 750. Also launched the P55
chipset, which implements a new system architecture
data given to it and forwards the result to the next
Te

that represents a significant break with Intel's past.


processing element for further operation.
 This processing element performs a similar operation
and so on the final result reaches back to the same
memory module.

(C-8332) Fig. 10.11.5 : An annotated floor plan of Lynnfield


Source : Intel

 The floor plan above shows the main blocks in Nehalem,


there is no QuickPath Interconnect (QPI) interface.
Fig. 10.11.3 : MISD computer
Instead, in a significant twist that differentiates Intel's


new PC system architecture from even AMD's offerings,  As the GPU gained in size and importance, the standard
there is now a PCIe interface that enables the GPU to PC system essentially took on a kind of hacked-together
attach directly to the processor socket. non-uniform memory architecture (NUMA) topology,

This latter move was made in anticipation of two things : with two main pools of DRAM (main memory and

(1) The GPU will migrate right into the processor socket at a graphics memory) attached to the two main processors

later point when Intel releases a CPU with an on-die (the CPU and the GPU).

GPU integrated into it, and  As the amount of graphics memory increased to the

ns e
(2) For a discrete GPU, Intel hopes you'll use Larrabee. point where the GPU became a second system on a
daughtercard, this topology began to get more and
To understand what all of this means, let's look at a few

io dg

more unbalanced and inefficient in its use of memory
figures.
and bandwidth.
The P55's new system architecture :
 In 2003, AMD made the obvious improvement by

at le
Fig. 10.11.6 is of a standard Core 2 Duo system, and it
represents the general layout of an Intel system up until
Nehalem or an AMD system up until the Opteron.
moving the memory controller hub up to the CPU
socket, so that main memory could attach directly to the
ic w
CPU the way that GDDR had been directly attached to
the GPU for some time. You can see the results below,
bl no

and, give or take an I/O bridge chip or so, this is


basically how AMD single-socket systems have looked
since the memory controller went on-die.
Pu K
ch
Te

(C-8329) Fig. 10.11.6 : A typical pre-Nehalem Intel system

 In the Fig. 10.11.6, the core logic chipset consists of two


primary chips :

Memory controller hub (MCH) :


(C-8330) Fig. 10.11.7 : A typical AMD single-socket system
 The memory controller hub, also called a “northbridge,”
links the CPU and GPU with main memory.  You can see from the Fig. 10.11.7 that, with the memory
controller moved onto the processor die, the
I/O controller hub (ICH) :
northbridge has become a kind of “graphics hub”—it
 The I/O controller hub, also called the “southbridge,”
hosts the discrete GPU via some PCIe graphics lanes, and
links the MCH to peripherals and mass storage.
it typically has an integrated graphics processor (IGP)
 The ICH typically hosts the USB and other expansion along with the requisite display ports.
ports, mass storage interfaces, network interfaces, and
 The ICH is still there, doing pretty much the same job it
the like.
always did.


4. Intel smart cache.

5. Processor integrated memory controller.

6. Intel HD graphics.

7. Intel express chipset.

8. Multi thread/multi-core.

 Intel core i7 is a high-end processor. The higher clock

ns e
speed, bigger cache and hyper-threading features in i7
(C-8331) Fig. 10.11.8 : Intel's P55 platform

io dg
processor make them more suitable than i5 processors
 Intel's P55 can be seen as an evolution of the AMD for certain applications like video encoding, data
topology shown previously, with the graphics hub and
crunching, graphic-intensive work, multitasking and
memory hub functionality all moved right onto the

at le
processor die. Because the northbridge is completely

gone, the southbridge/ICH has been rechristened the



high-end gaming.

Mobile version of i5/i7 processor have the following


ic w
features :
“platform controller hub,” and it's now the only chip in
1. i5 mobile processor are dual-core.
the core logic “chipset” (aside from the BIOS, which is
bl no

also typically included in the chipset count). 2. i7 mobile processor are quad-core.

 The PCH is connected to the processor socket by the 3. i5/i7 processors support hyper-threading.

relatively low-bandwidth (2GB/s) DMI bus that used to 4. i5/i7 processors come with direct media interface
Pu K

connect the MCH to the ICH. Disk I/O, network traffic, and integrated GPU.

and other types of I/O will have to share this link. This 5. i5/i7 processors come with Ivy bridge.
ch

shouldn't be a problem for single-socket systems, 6. i5/i7 processors come with turbo boost facility.
though.
 The basic block diagram of i5/i7 processor is given in
Te

 So with the advent of the P55, Intel's core logic has the Fig. 10.11.9.
gone from a two-chip to a one-chip implementation,
 i7 mobile processors are the next generation 64-bit,
pushing ahead of the comparable AMD platform. In
multi-core mobile processors built on 45-nanometer
theory, this very tight, direct coupling of the GPU +
process technology.
GDDR and CPU + DRAM systems should make for a
 It has quad-core.
performance boost vs. both earlier topologies.
 It has 32-KB instruction and 32-KB data first-level cache
10.11.2 i5/i7 Mobile Version :
(L1) for each core.
 The Intel i5 mobile processor has the following
 It has 256-KB shared instruction/data second-level
features :
cache (L2) for each core.
1. Intel hyper-threading technology.
 It can have upto 8-MB of shared instruction/data third-
2. Intel turbo boost technology.
level cache (L3), shared among all cores.
3. Number of simultaneous threads.


Fig. 10.11.9 : Mobile processor

 It supports 1 or 2 channels of DDR3 memory.

 It has PCI express bus. It is an improved version of older PCI, PCI-X and AGP bus standards.
Te

 Intel core i7-900 supports one, 16-lane PCI express port configurable to two, 8-LAN PCI express port intended for
graphics.

 It supports direct media interface (DMI).

 It supports platform environment control interface (PECI).

 The second generation microprocessors of the Intel core i3, i5 and i7 processors are the ones we normally seen in the
computers today. The comparison of the same is given in the Table 10.11.1.

Table 10.11.1 : Comparison of i3, i5 and i7 processors

Sr. No. Feature i3 i5 i7 i7 Extreme

1. Number of cores          2 for desktop as well as for laptop          4 for desktop, 2 for laptop          4 or 6 for desktop, 2 or 4 for laptop          6 for desktop, 4 for mobile

2. Processing threads       4 for desktop as well as laptop              8 threads for desktop, 4 threads for laptop       8 or 12 threads for desktop, 4 or 8 threads for laptop       12 threads for desktop, 8 threads for laptop


Sr. No. Feature i3 i5 i7 i7 Extreme

3. Maximum base 3.4GHz 3.4GHz 3.2GHz 3.3GHz


clock frequency

4. Maximum turbo Not Applicable 3.8GHz 3.8GHz 3.9GHz


boost
frequency

5. Maximum 3MB 6MB 12MB 15MB


smart cache

ns e
size

io dg
6. Intel turbo Not present Present Present Present
boost 2.0

7. Intel Present Present only in Present Present


Hyperthreading Liptop processors

at le
8. Best Desktop
processor
Intel Core i3-2130
(3.4GHz, 3MB)
Intel Core i5-2550K
(3.4GHz, 6MB)
Intel Core i7-3930
(3.2GHz, 12MB)
Intel Core i7-3960
(3.3GHz, 15MB)
9. Best Mobile (Laptop) processor          Intel Core i3-2370 (2.4GHz, 3MB)          Intel Core i5-2540M (2.6GHz, 3MB)          Intel Core i7-2860 (2.5GHz, 8MB)          Intel Core i7-2960XM (2.7GHz, 8MB)

Q. 7 Compare ON-chip register file and cache


Review Questions evaluation.
Q. 8 Write a short note on polling.
Pu K

Q. 1 What are the different addressing modesexplain Q. 9 Write a short note on pipeline processing.
any two ?
Q. 10 Explain six stage pipelining.
ch

Q. 2 With an example explain register indirect


Q. 11 State collision vector. Explain collision free
addressing relative addressing mode.
scheduling.
Q. 3 Compare RISC and CISC.
Q. 12 State : 1. Latency, 2. Throughput.
Te

Q. 4 Explain the instruction set of 8085.


Q. 13 Write a short note on pipeline hazards.
Q. 5 What are the different properties of RISC
Q. 14 What are the methods to resolve the data hazards ?
processor ?
Q. 15 Write a short note on dynamic instruction
Q. 6 List out features or advantages of RISC systems.
scheduling.






Unit 6

Chapter

11
ns e
io dg
at le Memory & Input /
ic w
Output Systems
bl no
Pu K

Syllabus
ch

Memory Systems : Characteristics of memory systems, Memory hierarchy, Signals to connect memory
to processor, Memory read and write cycle, Characteristics of semiconductor memory : SRAM, DRAM
and ROM, Cache memory – Principle of locality, Organization, Mapping functions, Write policies,
Replacement policies, Multilevel caches, Cache coherence,
Te

Input / Output systems : I/O module, Programmed I/O, Interrupt driven I/O, Direct Memory Access
(DMA).
Case study : USB flash drive.

Chapter Contents
11.1 Introduction to Memory and Memory 11.8 Cache Memory : Concept, Architecture
Parameters (L1, L2, L3) and Cache Consistency
11.2 Memory Hierarchy : Classifications of 11.9 Cache Mapping Techniques
Primary and Secondary Memories
11.3 Types of RAM 11.10 Pentium Processor Cache Unit
11.4 ROM (Read Only Memory) 11.11 Input / Output System
11.5 Allocation Policies 11.12 I/O Modules and 8089 IO Processor
11.6 Signals to Connect Memory to Processor 11.13 Types of Data Transfer Techniques :
and Internal Organization of Memory Programmed I/O, Interrupt Driven I/O and DMA
11.7 Memory Chip Size and Numbers


11.1 Introduction to Memory and 3. Unit of transfer :

Memory Parameters : This refers to the size of the data that is transferred in
one clock cycle. It mainly depends on the data bus size.
 When a memory is taken then there are various The data as discussed earlier may be internal or external
characteristics of this memory that are considered. The and accordingly will be the data to be transferred in one
characteristics of memory are based on the following : clock pulse :
(a) Internal : It is related to the communication of
data with the memory directly accessible. It is

ns e
usually governed by data bus width.
(b) External : This is the data communication with the

io dg
external removable memory or virtual memory. It is
usually a block which is much larger than a word.
4. Access method :

at le There are various methods of accessing the memory


based on the memory organization. These methods are
listed below with examples :
ic w
(a) Sequential access : The sequential access means start
from the beginning and read through in order until the
byte to be read is reached. Hence the access time
bl no

depends on location of data and previous location


accessed. For example, magnetic tape. If one wants to
listen to the third stanza of the fourth song stored in an
audio cassette, he has to go through the entire first
Pu K

song second and the third song, and then the first
stanza, second stanza of the third song and then
ch

reaches to the third stanza of that song.


(b) Direct access : Here individual blocks have unique
address and the access is done by jumping to vicinity
plus sequential search. Hence access time depends on
Te

location and previous location. For example magnetic or


optical disk. Let take the same example that a person
wants to listen to the third stanza of the fourth song on
Fig. 11.1.1 : Characteristics of memory system a CD, then he can directly reach to the fourth song, but
thereafter he has to access the stanzas of the song
1. Location :
sequentially to reach to the third stanza.
The memory can be located in one of the following :
(c) Random access : In case of random access individual
(a) CPU : This includes CPU registers and on-chip cache addresses identify locations exactly. Hence the access
memory. time is independent of location or previous access. For
(b) Internal : This includes the memory that the processor example RAM. In case of a RAM, any location to be
can directly access. accessed can be directly reached to without going
(c) External : This is normally removable or virtual memory through the locations sequentially.
and hence access is slower. (d) Associative access : Here the data is located by a
2. Capacity : comparison with contents of a portion of the stored
data(address). Hence the access time is independent of
It is measured in terms of the word size and the number
location or previous access. For example cache. In case
of words. Word size is the size of each location. Number
of cache memory, each location has a tag associated
of words is the number of locations.


with it, and to reach to the required location the tags many computer architectures. The size of the byte has
are to be compared with the location to be accessed. been hardware dependent and no definition exists.
There are techniques used to reach to the required  The fact is that standard of eight bits is also convenient
tagged location at a faster speed. power of two permitting the values from 0 to 255 for
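The tag comparison used in associative access can be sketched in C as below. This is a toy illustration only (the entry count, field names and stored values are assumed here); it is not a real cache design.

#include <stdio.h>

/* Toy associative store: each entry keeps a tag, and a lookup compares the
 * requested tag against every stored tag instead of using the request as a
 * direct index into the array.                                              */
#define ENTRIES 4

struct line { int valid; unsigned tag; int data; };
static struct line store[ENTRIES] = {
    {1, 0x12, 111}, {1, 0x7A, 222}, {0, 0, 0}, {1, 0x05, 333},
};

static int assoc_read(unsigned tag, int *data)
{
    for (int i = 0; i < ENTRIES; i++)        /* compare all tags */
        if (store[i].valid && store[i].tag == tag) {
            *data = store[i].data;
            return 1;                        /* hit  */
        }
    return 0;                                /* miss */
}

int main(void)
{
    int v;
    printf("tag 0x7A : %s\n", assoc_read(0x7A, &v) ? "hit" : "miss");
    printf("tag 0x30 : %s\n", assoc_read(0x30, &v) ? "hit" : "miss");
    return 0;
}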
5. Performance : one byte. With ISO/IEC 80000-13, this common
meaning was codified in a formal standard. Many types
The performance of the memory depends on its speed
of applications use variables representable in eight bits
of operation or the data transfer rate. The data transfer
or multiple of eight bits.
rate is the rate at which the data is transferred. The
speed of operation depends on two things : 11.2 Memory Hierarchy : Classifications of

ns e
(a) Access time : The time between providing the address Primary and Secondary Memories :

io dg
and getting the valid data from memory is called as its
 Memory Hierarchy explains that the nearer the memory
access time i.e. the address to data time.
to the processor, faster is its access. But costlier the
(b) Memory cycle time : The time that is required for the memory becomes as it goes closer to the processor. The
memory to “recover” before next access i.e. the time following sequence is in faster to slower or costlier to

6. at le
between two addresses is called as memory cycle time.
Physical type : 1.
cheaper memory :
Registers i.e. inside the CPU.
ic w
The physical material using which the memory is made 2. Internal memory that includes one or more levels of
can be different like : cache and the main memory. Internal memory is always
(a) Semiconductor : Memory can be made using RAM, SRAM for cache and DRAM for main memory. The
bl no

semiconductor material i.e. ICs, for example RAM. differences between the SRAM and DRAM will be seen

(b) Magnetic : Memory can also be made using magnetic in a later section in this chapter. This is also called as the

read and write mechanism, for example Magnetic disk primary memory.

and Magnetic tape. 3. External memory or removable memory includes the


Pu K

(c) Optical : Optical memories i.e. memories that use hard disk, CDs, DVDs etc. This is the secondary memory.

optical methods to read and write have become famous  Fig. 11.2.1 shows the memory hierarchy based on the
ch

these days, for example CD and DVD. closeness to the processor. The registers as discussed
(d) There are some other methods using which data was are the closest to the processor and hence are the
stored in early days like Bubble and Hologram. fastest while off-line storage like magnetic tape are the
farthest and also the slowest.
Te

7. Physical characteristics :

The physical characteristic of memory is also an


important aspect to be considered. This includes the
volatility, power consumption, erasable / not erasable,
etc.
8. Organisation :

It is not that always the memory will be organized


sequentially. There are some other types of memory
organization like interleaved memory, etc. Interleaved
memory is used in microprocessor 8086.

11.1.1 Bytes and Bits :


 The byte is a unit of digital information that mostly
consists of eight bits.
 Infact, a byte was the number of bits used to encode a
single character of text in a computer and for this
reason it has become the basic addressable element in Fig. 11.2.1 : Memory hierarchy


 The list of memories from closest to the processor to be written the data is to be given on the data line and
the farthest is given as below : will be written on the capacitor.
1. Registers
2. L1 Cache
3. L2 Cache
4. Main memory
5. Magnetic Disk
6. Optical

ns e
7. Tape
 To have a large faster memory is very costly and hence

io dg
the different memory at different levels gives the
memory hierarchy. How does this memory hierarchy (a) DRAM cell structure
give faster operation and some other terms like cache
etc. will be understood in the subsequent sections.

at le
ROM or the read only memory is quite cheaper
compared to RAM and is mainly used
implementation of the secondary or the virtual memory.
for
ic w
Thus the application of ROM is for virtual or secondary
memories like hard disks, external storage like
bl no

CD / DVD and floppy disks etc.


 We will see different types of ROM in the subsequent
sub-section and thereafter some ROM memories used
in computers like magnetic disk, CD, DVD etc. (b) SRAM cell structure
Pu K

Fig. 11.3.1
11.3 Types of RAM :
 On the other hand, the SRAM has each cell made of a
ch

 RAM (Random Access Memory) is called so because any flip-flop, thus requires more components as compared
memory location in this IC can be accessed randomly. to the DRAM cell. Hence it occupies more space on the
 There are two types of RAM, namely, SRAM (Static RAM) silicon wafer, and is costlier.
and DRAM (Dynamic RAM).
Te

 Thus it is also costlier. But the advantage is that it


 SRAM is made up of flip flops while the DRAM is made doesn’t require any refreshing and is also very fast
up of capacitors.
compare to DRAM. Fig. 11.3.1(b) shows the structure of
 Since DRAM is made using capacitors, it requires less the DRAM cell.
number of components to make a one bit cell, hence
 The flip-flop is used to store the data. There is a AND
also requires less space on the silicon wafer. Thus it is
gate at the output of the flip-flop, which will be enabled
also comparatively cheaper.
by the select line (that works as an address line) and the
 But it is slower than SRAM, because capacitors require –––––
Read / Write operation (when data has to be read)
time for charging and discharging. Also the capacitors
when it is logic ‘1’. Thus when both i.e. the select line
loose charge in some time and hence need to be
–––––
recharged according to the data, this is called as and the Read / Write lines are ‘1’, the output of the flip-
refreshing the DRAM. Fig. 11.3.1(a) shows the structure flop will be available on the data line.
of a DRAM cell.  Similarly for the write operation, the select line must be
 The address line selects the particular location, it –––––
enabled by making it logic ‘1’ and the Read / Write line
enables the MOSFET that connects the capacitor to the
must be logic ‘0’. Thus the data from the input line will
data bus and hence if the data is to be read, simply the
be stored in the flip-flop.
data line gets the data to be read; while if the data is to


 Table 11.3.1 shows the differences between the SRAM  The PROM (Programmable Read Only Memory) or also
and DRAM. sometimes referred to as OTP (One Time
Table 11.3.1 Programmable) memory, as it can be written onto only
once. When manufactured, it is blank, once written on it,
Sr.
SRAM DRAM it cannot be re-written. There are diodes that are used
No.
to store data, and they are fused or kept as it is to store
1. No refreshing required. Continuous refreshing the data in them. The internal diagram of the PROM is
required (disadvantage).
shown in Fig. 11.4.1.
2. It is faster for accessing It is slower in accessing

ns e
 The AND array is used as address lines and the OR array
data. data. as data lines. The AND array (on the left in

io dg
3. It takes more space on It takes less space on chip Fig. 11.4.1) comes as predefined connections as shown
chip as more number of as less number of in the Fig. 11.4.1 in sequence of binary, in this case from
components are required components are required “000” to “111”, as there are three bit address.
per bit. per bit.
 The OR array (on the right hand in Fig. 11.4.1) comes
4.

5.
at le
Hence is also costly.

Bit density is lesser.


Hence is cheaper.

Bit density is more.


with programmable link, the ones to be retained can be
retained while the remaining fused or opened.
ic w
6. The bit is stored in a flip- The bit is stored as a  Hence whenever a memory address is given to the
flop. charged or discharged address lines (a, b and c in Fig. 11.4.1) the specified
location will be selected and according to the fused
bl no

capacitor.

7. SRAM is mainly used or DRAM is mainly used or links, the data will be available on the OR gates output

selected for cache selected for semiconductor lines.


memory. main memory.
Pu K

11.4 ROM (Read Only Memory) :


ch

 ROM or the read only memory is quite cheaper


compared to RAM and is mainly used for
implementation of the secondary or the virtual memory.
 Thus the application of ROM is for virtual or secondary
Te

memories like hard disks, external storage like


CD / DVD and floppy disks etc.

 We will see different types of ROM in the subsequent


sub-section and thereafter some ROM memories used
in computers like magnetic disk, CD, DVD etc.

11.4.1 Types of ROM :


 There are various types of ROM available based on
whether or not it can be re-written; they are called as
Fig. 11.4.1 : PROM
ROM, PROM, EPROM and EEPROM. These types of
memories will be studied in this section.  The EPROM (Erasable Programmable Read Only
 There are some more ROMs available these days like Memory) although extinct today and is replaced by
2
flash memory, OTP etc. EEPROM or E PROM, but it used to be the only erasable
 The ROM is a memory wherein, the user cannot write memory available earlier.
anything. The data to be stored in the ROM is to be  In case of EPROM the data written can be erased by
given to the ROM manufacturer, who writes this data on keeping the EPROM IC in the UV box, as the UV rays
the ROM and provides the same. erase the previously written data on the EPROM.


ns e
io dg
at le
ic w
Fig. 11.4.2 : Read-Write mechanism

The EPROM as discussed earlier are these days replace The head may be single for both read and write
bl no

 
with EEPROM (Electrically Erasable Programmable Read operations or separate ones.
Only Memory). The EEPROMs are erased by giving an  During read/write operation, head is stationary while
extra supply voltage.
the platter (disk) rotates.
Pu K

11.4.2 Magnetic Memory :  Write operation is done by passing current through coil
that produces magnetic field and then the pulses are
ch

 Magnetic disks are very cheap and widely used as sent to head. Thus the magnetic pattern i.e. NS (North-
external storage and as hard disks. When used as hard South) or SN (South-North) is recorded on surface
disks, they are called as Winchester Disk. below.
 Initially magnetic tapes were used for storage. Magnetic
Te

 Read operation is done by magnetic field moving


tapes are used even today in some places because of its relative to coil that produces current. According to the
low cost and ease of data storage. When huge amount magnetic pattern the data is read by the head.
of data is to be stored, magnetic tape is used.
 The organization of data on the platter is in a special
 Let us see the construction of these magnetic memories. manner with concentric circles called as tracks as shown
 The magnetic disk substrate is coated with in Fig. 11.4.3(a). Further the tracks are divided into
magnetisable material. sectors.
 The aluminium substrate was used earlier but now glass  The data is also stored in a special manner such that
is used because of the following : first the data is stored in the first track of first platter
1. Improved surface uniformity (upper and lower sides) and then in the first track of the
2. Increase reliability second platter (upper and lower sides), then of the third
3. Reduction in surface defects and read/write errors (upper and lower sides) and so on. This is shown in the
4. Better stiffness and shock/damage resistance. Fig. 11.4.3(b).
 The reading and writing mechanism of the magnetic
 Thus when reading from one track of a platter, the head
memory is shown in the Fig. 11.4.3. Writing and reading
mechanism may not be moved and the other head will
of data is done with the help of a conductive coil called
start reading from the same track of another platter.
as head.


 There is gap between the two sectors as well as


between two tracks as shown in Fig. 11.4.3(a).
 The disk moves at constant angular velocity and hence
the data is read at the same speed, may be the
innermost track or the outermost track.
 Each data stored on the disk is stored in a special
manner with some ID information as shown in the
Fig. 11.4.4.

ns e
io dg
Fig. 11.4.4 : Data storage format in magnetic memory
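The geometry described above (platters with two surfaces each, concentric tracks and sectors) fixes the raw capacity of a magnetic disk. The short Python sketch below shows how the capacity is obtained by simple multiplication; all of the geometry figures used here are assumed for illustration only and do not come from this section.

# Illustrative disk-capacity calculation; all geometry values are assumptions.
platters = 4                 # each platter has two recordable surfaces
surfaces = 2 * platters
tracks_per_surface = 1024    # concentric circles on each surface
sectors_per_track = 63       # every track is divided into sectors
bytes_per_sector = 512

capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(f"Raw capacity = {capacity / 2**30:.2f} GiB")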

11.4.3 Optical Memory :

at le (a) Data organization on a disk


 The memory devices like Compact Disc (CD) and Digital
Versatile Disc or Digital Video Disc (DVD) use the optical
ic w
method to read the data written on them.
 The following sub-sections discuss about the CD and
the DVD data storage and reading.
bl no
Pu K
ch

Fig. 11.4.5 : Devices for optical memory

A. CD-ROM :

 CD-ROM was originally built for audio and was of the


Te

capacity of 650Mbytes giving over 70 minutes audio.


 The construction of the CD was such that it used
(b) Data organization on multiple platters polycarbonate coating with highly reflective coat,
Fig. 11.4.3 usually aluminium.

 A floppy disk is single platter, while a hard disk or  In the CD-ROM the data stored as pits and lands as
winchester disk is multi platter as shown in the shown in Fig. 11.4.6(a).

Fig. 11.4.3(b).  These pits and lands are read by reflecting laser. The CD
has a constant packing density hence constant linear
 In this case one head for each side of the multiple
velocity across a track is required as against the
platters are mounted to form a head stack assembly.
constant angular velocity in case of magnetic discs.
 It is called as Winchester hard disk because it was  Fig. 11.4.6(a) shows that the CD is made up of three
developed by IBM in Winchester (USA). It is a sealed layers, namely the polycarbonate plastic, aluminium and
unit with the platters and the heads fly on boundary the protective material like acrylic.
layer of air as disk spins.  The laser beam incident on the highly reflective
 Also, there is very small head to disk gap making it substance like aluminium, returns back in some amount
more robust. Winchester hard disk is cheap and the of time. Based on this time gap, the optical disk reader
fastest external storage. can realize that there was a land or a pit.


 In case it is a land, the beam takes more time to return, while it takes less time if it is a pit, as seen in Fig. 11.4.6(a).

ns e
io dg
Fig. 11.4.6(a) : Construction of CD

 The data format on a CD-ROM is shown in the Fig. 11.4.6(b). Initially a data 00H is stored, followed by 10 bytes of FFH


at le
and again a 00H, called as the synchronous 12 bytes.
The next is the 4 bytes ID (IDentity) about the time required for this data to be played (in MINutes and SEConds), the
ic w
sector in which the data is placed and the mode. There are three modes,
o Mode 0 indicates blank data field
bl no

o Mode 1 indicates 2048 byte data + error correction


o Mode 2 indicates 2336 byte data and no correction data
 Thus the remaining two fields contain data and error correction code (ECC) as defined by the mode bits.
Pu K
ch
Te

Fig. 11.4.6(b) : Data Format on CD
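A minimal sketch of how the 16-byte sector header described above could be assembled is given below. The field layout (twelve synchronisation bytes of 00H, ten FFH and 00H, followed by the minute, second, sector and mode bytes) follows the description in this section; the helper name and the sample values are only assumptions for illustration.

# Build the 16-byte CD-ROM sector header described above:
# 12 sync bytes (00H, ten FFH, 00H) + MIN + SEC + sector + mode.
def build_sector_header(minute, second, sector, mode):
    sync = bytes([0x00] + [0xFF] * 10 + [0x00])    # 12 synchronisation bytes
    ident = bytes([minute, second, sector, mode])  # 4-byte ID field
    return sync + ident

header = build_sector_header(minute=2, second=30, sector=15, mode=1)
print(header.hex(" "))        # 00 ff ff ... ff 00 02 1e 0f 01
print(len(header), "bytes")   # 16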

B. DVD :

 The major difference between a CD and DVD is that a DVD has multiple layers and hence very high capacity.
 Another major difference in a DVD with respect to CD is that the DVD has more denser data storage mechanism which
results in the data storage capacity of around 4.7G per layer of a DVD.
 There are DVDs available with single layer as well as multiple layers.
 Fig. 11.4.7 shows the constructional differences of a CD and DVD.

Fig. 11.4.7(Contd...)


ns e
io dg
(a) CD-ROM - Capacity 682 MB

at le
ic w
bl no
Pu K

(b) DVD-ROM, double-sided, dual-layer - Capacity 17 GB

Fig. 11.4.7 : Construction of (a) CD and (b) DVD


ch

 As seen in the Fig. 11.4.7(b), the double sided, two (1) Best fit : In this case the smallest available fragment is
layers DVD, has a reflective and semi-reflective layers on
searched and the required data is stored in that
both the sides.
fragment. The smallest fragment searched for should be
Te

 Hence in this case, the laser beam and receiver have to


greater than of equal to the size of data to be stored.
be on both the sides of the disc.
 Also there have to be two types of beam with low and (2) Worst fit : In this case the largest available block is

high intensity, the low intensity beam is reflected by the used to store the data.
semi-reflective substance, while the high intensity beam (3) Next fit : In this case immediate next empty block of a
is reflected by the highly reflective substance. Thus size equal to or greater than the size of data to be
giving a mechanism to read the data written on both
stored is searched sequentially and the required data is
the layers of each of the side.
stored there.
11.5 Allocation Policies :
Ex. 11.5.1 : Given the memory partitions of size 100 K, 500
 Partitioning refers to logical division of the memory into K, 200 K, 300 K and 600 K (in order). How
subparts so that they can be accessed individually by would each of the first fit, best-fit, worst fit
tasks fragmentation generally happens when memory
algorithms place the processes of 212 K, 417
blocks have been allocated and are freed randomly.
K, 112 K and 426 K (in order) ? Which
 This results in splitting of partition memory into small
algorithm makes the most efficient use of
non-continuous fragments.
memory ?
 There are 3 memory allocation policies :


Soln. :  Similarly, partition no.3 is allocated to P3 and the


partition no.5 is allocated to P4.
I] First-fit :
Memory utilized by P1, P2, P3 and P4
Memory utilization =
Total memory
212 K + 417 K + 112 K + 426 K
=
1700 K
1167 K
= = 0.686
1700 K

Worst-Fit :

ns e
io dg
(S9.5)Fig. 11.5.1


at le
Partition number 2 of size 500 K is assigned to P1
(size = 212 K). It is the first partition that can
ic w
accommodate P1.
 Partition number 5 of size 600 K is assigned to
(S9.5) Fig. 11.5.1(b)
P2 (size = 417 K). It is the first empty partition that can
bl no

accommodate P2.  The largest free partition no.5 of size 600 K is allocated
to P1 (212 K).
 P3 is assigned to partition 3.
 P2 (size 417 K) is assigned to partition no.2. Partition no.
 P4 cannot be executed.
2 is the largest free partition and it can accommodate
Memory utilized
Pu K

Memory utilization = P2.


Total memory
 P3 (size 112 K) is assigned to partition no.4. Partition no.
Memory utilized by P1, P2 and P3
= 4 is the largest free partition.
ch

Total memory
 P4 cannot be executed as there is no free partition that
212 K + 417 K + 112 K
= can accommodate P4.
1700 K
Memory utilized by P1, P2, P3
741 Memory utilization =
= = 0.436 Total memory
Te

1700
212 K + 417 K + 112 K
II] Best-fit : =
1700 K
741
= = 0.436
1700
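The first-fit, best-fit and worst-fit placements worked out in Ex. 11.5.1 can also be checked with a short simulation. The sketch below is only an illustration of the three policies as described in this section (each process simply occupies a whole free partition); the function and variable names are not from the text.

# First-fit / best-fit / worst-fit placement over fixed partitions (Ex. 11.5.1 data).
def place(partitions, processes, policy):
    free = list(partitions)                 # remaining size of each partition (0 = used)
    result = []
    for p in processes:
        candidates = [i for i, size in enumerate(free) if size >= p]
        if not candidates:
            result.append(None)             # process cannot be placed
            continue
        if policy == "first":
            i = candidates[0]
        elif policy == "best":
            i = min(candidates, key=lambda i: free[i])   # smallest partition that fits
        else:                               # "worst"
            i = max(candidates, key=lambda i: free[i])   # largest free partition
        result.append(i)
        free[i] = 0                         # whole partition is consumed
    return result

partitions = [100, 500, 200, 300, 600]      # sizes in K
processes  = [212, 417, 112, 426]
for policy in ("first", "best", "worst"):
    alloc = place(partitions, processes, policy)
    used = sum(p for p, a in zip(processes, alloc) if a is not None)
    print(policy, alloc, f"utilization = {used / sum(partitions):.3f}")

Running this reproduces the utilizations computed above (0.436 for first fit and worst fit, 0.686 for best fit), with best fit making the most efficient use of memory for this sequence.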

11.6 Signals to Connect Memory To


Processor and Internal
Organization of Memory :
 The read / write memories consist of an array of
registers wherein each register has a unique address.
Fig. 11.6.1 shows the block diagram of a memory device.
N : number of registers
(S9.5)Fig. 11.5.1(a)
M : word length.
 Partition no. 4 of size 300 K is allocated to P1 (212 K). It Example : If a memory is having 13 address lines and 7 data
is the smallest free partition that can accommodate P1. lines, then number of registers / memory locations
N 13
 Partition no. 2 of size 500 K is allocated to P2 of size = 2 = 2 = 7192 word length = M bit = 7 bit.
417 K. It is the smallest free partition that can  The number of address lines of a microprocessor
accommodate P2. depends on the size of the memory.


Number of address lines Size of memory in


required bytes
10 1024  1 k
11 2048  2 k
12 4096  4 k
13 8192  7 k
14 16384  16 k
15 32768  32 k

ns e
16 65536  64 k
Fig. 11.6.1 : Block diagram of a memory device
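For a memory block like the one in Fig. 11.6.1, the number of locations and the word length follow directly from the number of address and data lines; for N = 13 address lines this gives 2^13 = 8192 locations. A two-line check in Python (the variable names, and the choice of 8 data lines, are assumptions for the example):

# Number of locations = 2^(address lines); word length = number of data lines.
address_lines, data_lines = 13, 8
locations = 2 ** address_lines
print(f"{locations} locations ({locations // 1024} k) of {data_lines} bits each")
# -> 8192 locations (8 k) of 8 bits each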

io dg
11.7 Memory Chip Size and Numbers :
Number of address lines Size of memory in
required bytes Table 11.7.1 : EPROM ICs available in the market

1 2 IC Memory size Address Number of

at le
2
3
4
8
number
2716
2732
data
2k7
4k7
pins
24
24
ic w
4 16
2764 8k7 28
5 32
27128 16 k  7 28
6 64
bl no

27256 32 k  7 28
7 128
27512 64 k  7 28
8 256
9 512
Pu K
ch
Te

Fig. 11.7.1(a) : Pin configuration


Pin Names
A0 – A13 Addresses
–– Chip enable
CE
–– Output enable
OE
O0 – O7 Outputs
–––– ProgROM
PGM
N.C. No connect
Fig. 11.7.1(b)


Table 11.7.1 : RAM ICs 0 1 0 Write

IC no. Type Memory size 0 0 0 Write

6116 SRAM 2k  7 Ex. 11.7.1 : Interface 6 kB of EPROM with starting address


from 000H and 2 KB of RAM with starting
6264 SRAM 8k  7
address followed by EPROM.
2114 SRAM 1k  4 Soln. :

Note : Let us assume the processor has 16 address lines.

ns e
Step 1 : Total EPROM required = 4 kB
Chip size available = 4 kB (assume) (IC 2732)

io dg
4 kB
 No. of chips required = =1
4 kB
Chip 1 : Starting address = 000H
Chip size = 4 kB = 0FFFH

(Actually 4 kB corresponds to 1000H locations, i.e. the range

 Ending Address = 0FFFH


0000H to 0FFFH)
ic w
bl no

Fig. 11.7.2 : Pin configuration

Table 11.7.2

Truth Table for 6264


Pu K

––– ––– ––– –––


WE CS 1 CS 2 OE Mode
ch

X 1 X X Not selected

X X 0 X (Power down)

1 0 1 1 Output disable
Te

1 0 1 0 Read Step 2 : Total RAM required = 2 kB

0 0 1 1 Write Chip size available = 2 kB (assume) (IC 6216)


2 kB
0 0 1 0 Write  No of chips required = =1
2 kB

Truth Table for 6116 Chip 1 : Starting address = ending address of EPROM + 1
––– ––– ––– = 1FFFH + 1 = 1000H
CS OE WE Mode
Chip size = 2 kB = 07FFH
1 X X Not Selected  Ending address = 17FFH
0 0 1 Read

Step 3 : Memory Map :

A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0

EPROM SA = 000H 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
chip 1 EA = 0FFFH 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1
– –
y0  y 1


A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0

RAM SA = 1000H 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
chip 1 EA = 17FFH 0 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1

y2

Full decoding logic : Ex. 11.7.2 : Interface 8 kB of EPROM and 8 kB of RAM


using 4KB devices.
 EPROM chip size = 4kB and RAM chip size = 2kB.

ns e
11 Soln. :
 Smaller chip size  RAM = 2 kB = 2
  Neglect lower 11 address lines i.e. A0 to A10 and Note : Let us assume the processor has 16 address lines.

io dg
consider remaining address lines A11 to A15 for
decoding. Hence 5:32 decoder is required. Hence circle
the five bits as shown in memory map.
 EPROM has two possibilities 00000b and 00001b, thus it

at le – –
requires y0 and y1 outputs of the 5:32 decoder. RAM

has 00010b and hence it requires y2 output of the
ic w
decoder.
bl no
Pu K

Step 1 : Total EPROM required = 8 kB


ch

Chip size available = 4 kB (IC 2732)


Fig. P. 11.7.1 8 kB
 No of chips required = =2
4 kB
Step 4 : Final implementation :
Chip 1 : Starting address = 000H
Te

Chip size = 4 kB = 0FFFH


 Ending address = 1FFFH
Chip 2 : Starting address = Previous ending + 1
= 0FFFH + 1 = 1000H
Chip size = 4 kB = 0FFFH
 Ending address = 1FFFH
Step 2 : Total RAM required = 8 kB
Chip size available = 4 kB (IC 6232)
8 kB
 No of chips required = =2
4 kB
Chip 1 : Starting address = EPROM ending address + 1
= 1FFFH + 1 = 2000H
Chip size = 4 kB = 0FFFH
 Ending address = 2FFFH
Chip 2 : Starting address = previous ending + 1
= 2FFFH + 1 = 3000H
Chip size = 4 kB = 0FFFH
 Ending address = 3FFFH
Fig. P. 11.7.1(a)


Step 3 : Memory Map :

A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0

EPROM SA = 000H 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
chip 1 EA = 0FFFH 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1

y0

EPROM SA = 1000H 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0

ns e
chip 2 EA = 1FFFH 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1

io dg
y1

RAM SA = 2000H 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
chip 1 EA = 2FFFH 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1

at le –
y2

RAM SA = 3000H 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
ic w
chip 2 EA = 3FFFH 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1

y3
bl no

Full decoding logic : Step 4 : Final implementation :

EPROM chip size = 4 kB and RAM chip size = 4 kB.


Pu K

 Smaller chip size (Both are same in this case)


ch

12
= 6 kB = 2

 Neglect lower 12 address lines i.e. A0 to A11 and


Te

consider remaining address lines i.e. A12 to A15 for decoding.

Hence 4:16 decoder is required. Hence circle the four bits as

shown in the memory map. EPROM chip one has 0000b, thus

it requires y0 ; while EPROM chip two has 0001b, thus it

requires y1 and so on.

Fig. Ex. 11.7.2


Fig. P. 11.7.2(a)


Ex. 11.7.3 : Interface 8 kB EPROM and 4 kB RAM to a Step 2 : Total RAM required = 4 kB
processor with 16-bit address and 8-bit data Chip size = 4 kB (Assume)
bus.  No. of chips = 1
Soln. : Chip 1 : Starting Address = 2000H
Step 1 : Total EPROM required = 8 KB Chip size = 4 kB  0FFFH
Chip size = 8 KB (Assume)  Ending Address = 2FFFH
 No of chips = 1
Chip 1 : Starting Address = 000H
Chip size = 8 KB 1FFF H
 Ending Address = 1FFF H

ns e
Step 3 : Memory Map :

io dg
A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
EPROM Chip1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
SA = 000H
EA = 1FFFH 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1
– –
at le
y0  y1
RAM Chip1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
ic w
SA = 2000H
EA = 2FFFH 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1

y2
bl no
Pu K
ch

Fig. P. 11.7.3

Step 4 : Final Implementation :


Te

Fig. P. 11.7.3(a)


Ex. 11.7.4 : A computer system needs 128 Bytes of ROM Chip size available = 128B
and 128 Bytes of RAM. ROM chip available is  Total number of chips required = 4
of capacity 128 bytes and RAM chip available
Chip 1 : Starting address = Previous ending + 1 = 0200H
is of 128 Bytes. Draw memory address map for
a computer system and also draw a Chip size = 128B  007FH
connection structure.  Ending address = 027FH
Soln. :
Chip 2 : Starting address = Previous ending + 1
Assuming that the processor has 10 address lines to
= 027FH + 1 = 0280H

ns e
interface 1KB memory (128 B + 128 B)
Chip size = 128B  007FH

io dg
Step 1 : Total EPROM required = 128 B
 Ending address = 02FFH
Chip size = 128 B
Chip 3 : Starting address = Previous ending + 1 = 0300H
 Number of chips required = 1
Chip size = 128B  007FH

at le
Chip 1 : Starting address

Chip size
= 000H

= 128 B  01FFH
 Ending address = 037FH

Chip 4 : Starting address = Previous ending + 1 = 0380H


ic w
 Ending address = 01FFH
Chip size = 128B  007FH
Step 2 : Total RAM required = 128 B
 Ending address = 03FFH
bl no

Step 3 : Memory map :

A9 A8 A7 A6 A5 A4 A3 A2 A1 A0
Pu K

EPROM chip 1 SA = 000H 0 0 0 0 0 0 0 0 0 0

EA = 01FFH 0 1 1 1 1 1 1 1 1 1
– – – –
ch

y 0 .y1 .y2  y3

RAM chip 2 SA = 0200H 1 0 0 0 0 0 0 0 0 0

EA = 027FH 1 0 0 1 1 1 1 1 1 1
Te


y4

RAM chip 1 SA = 0280H 1 0 1 0 0 0 0 0 0 0

EA = 02FFH 1 0 1 1 1 1 1 1 1 1

y5

RAM chip 2 SA = 0300H 1 1 0 0 0 0 0 0 0 0

EA = 037FH 1 1 0 1 1 1 1 1 1 1

y6

RAM chip 3 SA = 0380H 1 1 1 0 0 0 0 0 0 0

EA = 03FFH 1 1 1 1 1 1 1 1 1 1

y7


ns e
Fig. P. 11.7.4

io dg
Absolute (full) decoding logic :
7
 EPROM chip size = 128 B while RAM chip size = 128B. Thus smaller chip size = 128B = 2 .
 Therefore neglect lower 7 address lines i.e. A0 to A6. Now since three address lines are remaining, we need a 3 : 8
decoder.

at le
The remaining lines i.e. A7 to A9 will be inputs to the decoder, as shown by circles in memory map.

Step 4 : Final Implementation :


ic w
bl no
Pu K
ch
Te

Fig. P. 11.7.4(a)
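The procedure followed in Ex. 11.7.1 to Ex. 11.7.4 — place the chips one after another, compute each chip's start and end address, and decode the address lines above the chip-select boundary — can be summarised in a short Python sketch. The helper below is only an illustration of that procedure; the names and the printout format are assumptions, not part of the examples.

# Build a simple memory map: chips are placed back to back starting at 0000H,
# and the address bits above log2(smallest chip) go to the decoder.
from math import log2

def memory_map(chips, address_lines=16):
    """chips: list of (label, size_in_bytes) in the order they are placed."""
    start, rows = 0, []
    for label, size in chips:
        end = start + size - 1
        rows.append((label, start, end))
        start = end + 1
    low = int(log2(min(size for _, size in chips)))   # lines A0..A(low-1) are ignored
    print(f"Decoder inputs: A{low} .. A{address_lines - 1} "
          f"({address_lines - low}:{2 ** (address_lines - low)} decoder)")
    for label, s, e in rows:
        print(f"{label:10s} {s:04X}H - {e:04X}H  decoder output y{s >> low}")

# A map in the style of Ex. 11.7.2: two 4 kB EPROMs followed by two 4 kB RAMs.
memory_map([("EPROM 1", 4096), ("EPROM 2", 4096), ("RAM 1", 4096), ("RAM 2", 4096)])

For this input the sketch prints a 4:16 decoder on A12-A15 and the ranges 0000H-0FFFH, 1000H-1FFFH, 2000H-2FFFH and 3000H-3FFFH, matching the memory map of Ex. 11.7.2.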


11.8 Cache Memory : Concept, cache memory by cache controller and it updates its
Architecture (L1, L2, L3) and Cache directory to track the information stored in cache
Consistency : memory.

Before going to the cache of Pentium processor, we will Assume the cache memory is empty, in the beginning
see some basics of the cache like its operation, advantage, (after reset). The following sequence takes place :
principles of locality of reference, cache architectures, write 1. The processor performs a memory read cycle to fetch
policies etc.
the first instruction from memory.
11.8.1 Cache Operation : 2. The cache controller uses the address issued by the

ns e
processor to determine if a copy of the requested

io dg
information is already in the cache memory. But a cache
miss occurs as the cache memory is empty.

3. The cache controller initiates a memory read cycle to


fetch the requested information from DRAM memory.

at le 4.
This will consume some wait states.

The information from DRAM memory is sent to the


ic w
processor. It is also copied into the cache memory and
the cache controller updates its directory to reflect the
bl no

presence of the new information. The information being


Fig. 11.8.1 sent is not just the required instruction, but a block

1. Implementation of cache memory subsystem is an (line) of data is sent to the cache. No performance gain

attempt to achieve almost all accesses with zero wait is achieved due to absence of information in cache
Pu K

state while accessing memory, but with an acceptable memory.

system cost. 5. After executing the first instruction, the processor’s


ch

prefetched requests a series of memory read bus cycles


2. The cache controller maintains a directory to keep a
to fetch the remaining instructions in program loop. But
track of the information and it has copied into the cache
since an entire line was brought earlier, most of the
memory.
Te

required instructions will be in cache and hence


3. When the processor initiates a memory read bus cycle,
resulting in faster access and performance gain. If the
the cache controller checks the directory to determine if cache is sufficiently large, all instructions in the program
it has a copy of the requested information in cache loop can become resident in cache memory.
memory.
6. The program has loop instruction to jump to the
4. If the copy is present, the cache controller reads the beginning of the loop start over again. The processor
information from the cache, sends it to the processors then requires the same program again.
data bus, and asserts the processor’s ready signal. This 7. When the processor requests for the first instruction in
is called as READ HIT. the loop, cache controller detects the presence of the
5. If the cache controller determines that it does not have instruction in the cache memory and hence provides it
a copy of the requested information in its cache, the to the processor with zero wait states.
information is now read from main memory (DRAM).
11.8.2 Principles of Locality of Reference :
This is known as READ MISS and causes wait states due
to slow access time of DRAM. 1. Locality of reference is the term used to explain the

6. The requested information is from the DRAM given to characteristics of programs that run in relatively small

the processor. The information is also copied into the loops in consecutive memory locations.


2. The locality of reference principle comprises of two 11.8.3 Cache Performance :


components :
1. Performance of cache subsystems depends on the
frequency of cache hits, usually termed as hit rate.
Cache Hits
% HIT RATE =  100 %
Total Memory Accesses

2. If a program requires a small area of memory and


consists of loops, then maximum cache hits are possible.
Fig. 11.8.2 : Two components of locality of reference 3. On the other hand, if the program has non-looping

ns e
(1) Temporal locality : code, many accesses will result in cache misses.

io dg
(a) Since the programs have loops, the same instructions 4. Fortunately, most programs run in loops and hence a
are required frequently, i.e. the programs tend to use high percentage of cache hit (85 – 95%) is experienced.
the most recently used information again and again. 5. Besides locality of reference various other factors
(b)

(c)
at le
If for a long time a information in cache is not used,
then it is less likely to be used again.

This is known as the principle of temporal locality.


contribute to cache hit rates like Cache’s architecture,
Size of cache memory and Cache memory organization.
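These quantities feed directly into the two-level memory formulas discussed later in Section 11.9.6 (average access time, efficiency and average cost). A small numerical sketch is given below; every access time, cost and size in it is assumed purely for illustration.

# Hit rate, average access time and average cost for a two-level memory.
hits, misses = 950, 50
t_cache, t_main = 10e-9, 70e-9           # assumed access times of level 1 / level 2 (s)
c1, c2 = 0.01, 0.001                     # assumed cost per bit of level 1 / level 2
s1, s2 = 64 * 2**10 * 8, 16 * 2**20 * 8  # sizes in bits (64 kB cache, 16 MB main)

H = hits / (hits + misses)                       # hit ratio
t_avg = H * t_cache + (1 - H) * t_main           # average access time
avg_cost = (c1 * s1 + c2 * s2) / (s1 + s2)       # average cost per bit
efficiency = t_cache / t_avg

print(f"hit rate     = {H:.1%}")                 # 95.0%
print(f"t_average    = {t_avg * 1e9:.1f} ns")    # 13.0 ns
print(f"efficiency   = {efficiency:.2f}")        # 0.77
print(f"avg cost/bit = {avg_cost:.5f}")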
ic w
11.8.4 Cache Architectures :
(2) Spatial locality : Two basic architectures are found in today’s systems :
bl no

(a) Programs and the data accessed by the processor 1. Look-through cache design
mostly reside in consecutive memory locations. 2. Look-aside cache design

(b) This means that processor is likely to need code or data


that are close to locations already accessed.
Pu K

(c) This is known as the Principle of Spatial Locality.


ch

3. The performance gains are realized by the use of cache


memory subsystem are because of most of the memory
accesses that require zero wait states due to principles Fig. 11.8.3 : Two basic cache architectures
of locality.
Te

1. Look-through cache designs :

Fig. 11.8.4


(a) The performance of systems incorporating Look (a) In this case the processor is directly connected to the
Through Cache is typically higher than that of systems system bus or memory bus.
incorporating Look Aside Cache. (b) When the processor initiates a bus access, cache
(b) Data from main memory (DRAM) is not transferred to controller as well as main memory detects the bus
the processor using system bus hence system bus is free access address.
for other bus masters (like DMAC) to access the main (c) The cache controller sits aside and monitors each
memory. processor memory request to determine if the cache
(c) This system isolates the processor’s local bus from the contains the copy of the requested information.
system bus hence achieving bus concurrency.

ns e
(d) If it is a cache hit, the cache controller terminates the
(d) The major advantage is that two bus masters can bus cycle by instructing memory subsystem to ignore

io dg
operate simultaneously. One processor accesses look the request. If it is a cache miss, the bus cycle completes
through cache while another bus master such as DMA in normal fashion from memory (and wait states are
can access the system bus is possible. required).
(e) To expansion devices, a look-through cache controller is Advantages :

(f) at le
like a system processor.
During memory writes, look-through cache provides
zero wait state operation (using posted writes) for write
(a) Cache miss cycles complete faster in Look Aside Cache
as the bus cycle is already in progress to memory and
hence no look up penalty is incurred.
ic w
misses.
(b) Simplicity of designs because only one address is to be
Advantages : monitored by cache controller form processor and not
bl no

(a) It reduces the system and memory bus utilization, from I/O devices.
leaving them available for use by other bus master. (c) Lower cost of implementation due to their simplicity.

(b) It allows bus concurrency, where both the processor and Disadvantages :
Pu K

another bus master can perform bus cycles at the same (a) The processor requires system bus utilization for its
time. every access, to access both cache subsystem and
memory.
ch

(c) It also completes write operations in zero wait states


using posted writes. (b) Concurrent operations are not possible as all masters
reside on the same bus.
Disadvantages :
11.9 Cache Mapping Techniques :
Te

(a) In the event that the memory request is a cache miss,


the lookup process delays the request to memory. This  Mapping Function and replacement algorithm together
decides where a line from the main memory can reside
delay is called as lookup penalty.
in the cache.
(b) It is more complex, costly and difficult to design and
 The different mapping functions are Direct mapping,
implement.
Fully Associative mapping and Set associative mapping.
2. Look-aside cache designs :
11.9.1 Direct Mapping Technique :

 In this case each block of main memory can map to only


one cache line.
 A given block maps to any line (i mod j), where i is the
line number of the main memory to be mapped and j is
the total number of lines in the cache memory.
 The address is divided into three parts i.e. the word
selector, line selector and the tag.
 Least Significant w bits identify unique word of a
Fig. 11.8.5 particular line


 Most Significant s bits specify one memory block to  Fig. 11.9.1 shows the method the access the data from
which the cache line corresponds. the cache implementing direct mapping technique
 The MSBs are split into a cache line field r and a tag of  In this case to search a line from the cache memory, the
s-r (most significant) line field selects the particular line, whose tag is to be
 Example : Let Cache be of 64kByte that is divided into compared with the tag of the address specified by the
14
blocks of 4 bytes hence cache is 16k (2 ) lines of 4 processor.
bytes. And let the main memory size be 16MBytes that
 The advantages of Direct Mapping are :
24
requires 24 bit address lines (2 =16M).
1. Simple implementation

ns e
Tag (s - r) (8 bits) Line (r) (14 bits) Word (w) (2 bits) 2. Inexpensive

io dg
 Hence the 24 bit address is divided as 2 bit word  The disadvantages of Direct mapping are :
identifier (4 byte block), 22 bit block identifier i.e. 8 bit 1. Fixed location for given block hence if a program
14
tag 14 bit slot or line(16K lines=2 )
accesses 2 blocks that map to the same line
Fig. 11.9.1 shows the organization of Direct Mapping

at le
Cache.
repeatedly, cache misses are very high.
ic w
bl no
Pu K
ch
Te

Fig. 11.9.1
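For the 64 kByte cache / 16 MByte main memory example above, the 24-bit address splits into an 8-bit tag, a 14-bit line field and a 2-bit word field. A small sketch that performs this split (the helper name and test addresses are my own):

# Split a 24-bit address into tag / line / word for the direct-mapped example:
# 4-byte lines (2-bit word), 16k cache lines (14-bit line), 8-bit tag.
WORD_BITS, LINE_BITS = 2, 14

def split_direct(address):
    word = address & (2**WORD_BITS - 1)
    line = (address >> WORD_BITS) & (2**LINE_BITS - 1)
    tag  = address >> (WORD_BITS + LINE_BITS)
    return tag, line, word

tag, line, word = split_direct(0x16339A)
print(f"tag={tag:02X}H line={line:04X}H word={word}")
# Two blocks whose addresses differ only in the tag map to the same cache line:
print(split_direct(0x0000CE58)[1] == split_direct(0x00FFCE58)[1])   # True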

11.9.2 Fully Associative Mapping :


 In this case a main memory block can load into any line of cache.
 There are only two fields in the address as tag and word
 The tag uniquely identifies block of memory from where the line has been copied into the cache memory.
 To search a particular data the tag of every line is to be examined for a match. Thus cache searching gets expensive in
terms of time required.

14
 Example: Let Cache be of 64k Byte that is divided into blocks of 4 bytes hence cache is 16k (2 ) lines of 4 bytes. And let
24
the main memory size be 16M Bytes that requires 24 bit address lines (2 = 16M).
 The associative Mapping Address Structure for this example considered: 22 bit tag stored with each 4-word block of data.
 Compare tag field with tag entry in cache to check for hit. Least significant 2 bits of address identify which word is
required from 4-word data block

Tag (22 bits) Word (2 bits)
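In the fully associative case only the 2-bit word field is fixed; the remaining 22 bits all form the tag, and a lookup must compare the tag against every line. A brief sketch of that lookup (an illustration only, with an invented tag value):

# Fully associative lookup: the 22-bit tag is compared against every cache line.
def associative_lookup(cache_tags, address):
    tag, word = address >> 2, address & 0b11     # 22-bit tag, 2-bit word
    for line, stored_tag in enumerate(cache_tags):
        if stored_tag == tag:                    # done by parallel comparators in hardware
            return line, word                    # hit: any line may hold the block
    return None, word                            # miss

cache_tags = [None] * 16384                      # 16k lines of 4 bytes = 64 kB
cache_tags[5] = 0x3F00AB                         # pretend this block is cached
print(associative_lookup(cache_tags, (0x3F00AB << 2) | 0b10))   # -> (5, 2)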

 The organization of Fully Associative Cache mapping is shown in the Fig. 11.9.2.

ns e
io dg
at le
ic w
bl no
Pu K
ch
Te

Fig. 11.9.2

 The advantages of Associative Mapping are

1. If a program accesses 2 blocks (that would map to the same line in case of Direct mapping) repeatedly, cache misses
will not occur

 The disadvantages of Associative Mapping are

1. Complex design for many parallel comparisons of tag.

2. Expensive due to implementation of parallel comparator.

11.9.3 Set Associative Mapping :

 In this case cache is divided into a number of sets. Each set contains a number of lines.

 A given block maps to any line in a given set (i mod j), where i is the line number of the main memory to be mapped
and j is the total number of sets in the cache memory.

 For example, if there are 2 lines per set, it is called as 2 way associative mapping i.e. a given block can be in one of 2 lines
in only one set.


ns e
io dg
at le
ic w
bl no
Pu K

Fig. 11.9.3

 Example: Let Cache be of 64kByte that is divided into Ex. 11.9.1 : A block set associative cache consists of 64
ch

14
blocks of 4 bytes hence cache is 16k (2 ) lines of 4 blocks divided in 4 block sets. The main
bytes. And let the main memory size be 16MBytes that memory contains 4096 blocks, each 128
24 words of 16 bit length
requires 24 bit address lines (2 =16M).
 For this example for set associative mapping address (1) How many bits are there in main
Te

memory address ?
structure: 2 bits for one of the 4 words, 8K lines in each
13
of the 2 sets hence 13 bits to select a set (2 =8K) and (2) How many bits are there in cache
remaining (24 – 13 – 2=) 9 bits for tag. memory address (tag, set and word
fields) ?
Tag (9 bits) Set (13 bits) Word (2 bits)
Soln. :
 In this case the set field is used to determine cache set 12 7
(1) Main memory size = 4096 blocks x 128 word =2  2 =
to look in and Compare tag field to see if we have a hit. 19
2
 Fig. 11.9.3 shows an example of Two Way Set
Thus main memory address lines required is equal to 19.
Associative Cache Organization
(2) Cache memory has 64 blocks divided in 4 block sets,
 The advantages of Set Associative Mapping are:
4
thus each set has 16 blocks. Hence 16 = 2 ; 4 address
1. If a program accesses 2 blocks that map to the same set
lines for set
repeatedly, cache misses will not occur because they
7
would go into different lines of the set. Each block has 128 words; hence 128 = 2 , 7 address

2. Not very complex because of just 2, 4 or 8 parallel lines for word field

comparisons. Remaining lines i.e. 19 – 4 – 7 = 8 address lines for tag


3. Not much expensive again because of simple
Tag (8 bits) Set (4 bits) Word (7 bits)
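The field widths worked out in Ex. 11.9.1 can be re-derived mechanically; the short sketch below follows the same arithmetic (64-block cache arranged in 4-block sets, main memory of 4096 blocks of 128 words) and is only a cross-check, not part of the example.

# Re-derive the address fields of Ex. 11.9.1 (block-set-associative cache).
from math import log2

mm_blocks, words_per_block = 4096, 128
cache_blocks, blocks_per_set = 64, 4

address_bits = int(log2(mm_blocks * words_per_block))        # 12 + 7 = 19
word_bits    = int(log2(words_per_block))                    # 7
set_bits     = int(log2(cache_blocks // blocks_per_set))     # 16 sets -> 4
tag_bits     = address_bits - set_bits - word_bits           # 19 - 4 - 7 = 8
print(f"address = {address_bits} bits : tag {tag_bits}, set {set_bits}, word {word_bits}")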
implementation.


11.9.4 Write Policy : impact processor’s performance. This is assuming that


the posted write buffer is only one transaction deep.
1. When the write hit occurs, the cache memory is updated
4. But, if there are two back-to-back memory write bus
and it contains the latest data while memory contains
cycles, the cache controller will insert wait states into
stale data.
second bus cycle until the first write to memory has
2. Such a cache line is called as dirty or ‘modified’ because actually been completed. The bus controller will then
it has no longer mirrors of its corresponding line in post the second bus cycle and assert the processor’s
memory. ready line. But since processor typically writes only one
3. In order to correct this cache consistency problem, the write operation, the memory writes are completed in
zero wait states.

ns e
corresponding memory line must be updated to reflect
the change made in the cache; else another bus master 5. With this policy, another bus master is not permitted to
use the bus until the write-through is completed,

io dg
may get stale data if it reads from these lines.
thereby ensuring that the bus master will receive the
4. Three write policies are used to prevent this type of
latest information from memory.
consistency problem: Write-through, Buffered or posted
 The write-through operations use either system or
write-through and Write-back.
memory bus. Hence when write-through to memory is

at le in progress, bus masters are prevented from accessing


memory.
ic w
 But actual cache consistency problem occurs only when
the bus master reads from a location in memory that is
stale. The frequency of this type of occurrence is very
bl no

less. In fact, the memory line is likely to be updated


many times by the processor before another bus master
reads from that particular line.
Fig. 11.9.4 : Write policy  As a result the write-through and buffered write-
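The difference between the write policies can be made concrete with a toy model that simply counts how many main-memory write cycles each policy generates. The sketch below is an illustration of the idea only; it assumes the written lines are already cached and ignores line fills, buffer depth and bus arbitration.

# Count main-memory write cycles for write-through vs. write-back on write hits.
def memory_writes(write_addresses, lines=8, policy="write-through"):
    dirty = [False] * lines
    mem_writes = 0
    for addr in write_addresses:
        line = addr % lines                    # assume the line is already cached
        if policy == "write-through":
            mem_writes += 1                    # every write also updates memory
        else:                                  # write-back: just mark the line modified
            dirty[line] = True
    if policy == "write-back":
        mem_writes += sum(dirty)               # modified lines flushed later
    return mem_writes

writes = [3, 3, 3, 5, 3, 5, 3]                 # repeated writes to the same lines
print("write-through:", memory_writes(writes))                       # 7 memory writes
print("write-back   :", memory_writes(writes, policy="write-back"))  # 2 memory writes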
Pu K

A. Write-Through Cache Designs : through designs, update memory each time a memory
write is performed, although the need for such action
ch

1. In this write policy, the data is passed to the memory


may not be required immediately.
immediately, so that the memory has the updated data.
C. Write-Back Cache Designs :
2. Even on write hit operation, the cache controller
updates the line in the cache and the corresponding line 1. Write-back designs improve the overall system
Te

in memory, and hence ensuring that consistency is performance by updating a line in main memory only
maintained between cache and memory. when necessary, thereby keeping the system bus free
for use by other processors and bus masters and hence
3. Very simple and effective implementation.
ensuring bus concurrency.
4. But poor performance due to slow main memory writes
operation. 2. The memory is updated only when :

5. Also it doesn’t allow bus concurrency. (a) Another bus master initiates a read operation from a
memory line that contains stale data.
B. Buffered or Posted Write-Through Designs :
(b) Another bus master initiates a write operation from a
1. It has an advantage of providing zero wait state write
memory line that contains stale data.
operation for cache hits as well as cache misses.
(c) The cache line that contains modified information is
2. When a write occurs, buffered write through caches
about to be overwritten in order to store a line newly
tricks the processor into thinking that the information
was written to memory in zero wait states. In fact, the acquired from memory i.e. during line replacement.
write to main memory has not been performed yet. 3. Cache controller marks the cache lines as ‘modified’ in
3. The look-through cache controller stores the entire the cache directory when the processor updates them.
write operation in a buffer, and writes to the main Hence when read by another master or written into the
memory later. Hence the processor need not perform memory, the cache subsystem checks whether it is
slow write operation with wait states and hence doesn’t marked as ‘modified’ in cache.


4. They design of such cache controller is COMPLICATED 2. First In First Out (FIFO) : In this case the line which was
to implement because they must MAKE DECISIONS on brought into the cache first is replaced first. Thus the
when to write ‘modified’ lines back to memory to ensure line which has stayed the longest in the cache is
consistency. replaced.

11.9.5 Replacement Algorithms : 3. Least frequently used : In this case the line which is
used for the least number of times is replaced first.
 Replacement algorithm is required to replace a line
4. Random : In this case randomly any line is replaced.
from the cache memory with the new line as discussed
earlier. Ex. 11.9.2 : Assume that memory consists of three frames

ns e
 There are various replacement polices available. The and during execution of a program, following
widely used ones are LRU, FIFO, LFU and random as pages are referenced in the sequence :

io dg
discussed below : 23215245325

Show that behavior of the page replacement


using FIFO strategy.
Soln. :

at le
ic w
bl no

Fig. 11.9.5 : Types of replacement polices

1. Least Recently Used (LRU) : In this case the line which


is least recently used is replaced with the new line. Thus
Pu K

(co 5.48)Fig. P. 11.9.2 : Behavior of page replacement


the line which has not been used for longest time is
using FIFO
replaced with the new line.
ch

Ex. 11.9.3 : Find out page fault for following string using LRU and FIFO method. 6 0 12 0 30 4 2 30 32 1 20 15
(Consider page frame size = 3)
Soln. :
Te

Page address stream

6 0 12 0 30 4 2 30 32 1 20 15
FIFO 6 6 6 6 30 30 30 30 32 32 32 15
0 0 0 0 4 4 4 4 1 1 1
12 12 12 12 2 2 2 2 20 20
F F F F F F F

LRU 6 6 6 6 30 30 30 30 30 30 20 20
0 0 0 0 0 2 2 2 1 1 1
12 12 12 4 4 4 32 32 32 15
F F F F F F F

Page faults are indicated by ‘F’.
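The page-replacement traces in Ex. 11.9.2 to Ex. 11.9.8 can be reproduced with a few lines of Python. The counter below is my own helper (not from the text); it implements FIFO and LRU for a fixed number of frames and is shown here on the reference string of Ex. 11.9.3.

# Count page faults for FIFO and LRU page replacement.
def page_faults(reference, frames=3, policy="FIFO"):
    memory, faults = [], 0
    for page in reference:
        if page in memory:
            if policy == "LRU":                 # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                             # page fault
        if len(memory) == frames:
            memory.pop(0)                       # evict oldest (FIFO) / least recent (LRU)
        memory.append(page)
    return faults

ref = [6, 0, 12, 0, 30, 4, 2, 30, 32, 1, 20, 15]     # string of Ex. 11.9.3
print("FIFO faults:", page_faults(ref, 3, "FIFO"))
print("LRU  faults:", page_faults(ref, 3, "LRU"))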

Ex. 11.9.4 : Consider a paging system in which M1 has a capacity of three frames. The page address stream formed by
executing a program is 2 3 2 1 5 2 4 5 3 2 5 2

Find the page hit using FIFO, LRU and OPT.


Soln. :

Time 1 2 3 4 5 6 7 8 9 10 11 12

Address space 2 3 2 1 5 2 4 5 3 2 5 2

FIFO 2 2 2 2 5 5 5 5 3 3 3 3

3 3 3 3 2 2 2 2 2 5 5

1 1 1 4 4 4 4 4 2

Hit Hit Hit

ns e
io dg
2 2 2 2 2 2 2 2 3 3 3 3

LRU 3 3 3 5 5 5 5 5 5 5 5

1 1 1 4 4 4 2 2 2

at le Hit Hit Hit Hit Hit


ic w
2 2 2 2 2 2 4 4 4 2 2 2

OPT 3 3 3 3 3 3 3 3 3 3 3
bl no

1 5 5 5 5 5 5 5 5

Hit Hit Hit Hit Hit Hit


Pu K

Ex. 11.9.5 : Find out page fault for following string using Pages accessed Frames
LRU method. Consider page frame size = 3
2 1 3 2
ch

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1.
0 1 0 2 F
Soln. :
1 1 0 2
Pages accessed Frames
7 1 0 7 F
Te

7 7 – –
0 1 0 7
0 7 0 –
1 1 0 7
1 7 0 1

2 2 0 1 F

0 2 0 1 Ex. 11.9.6 : Consider the string 1, 3, 2, 4, 2, 1, 5, 1, 3, 2,


6, 7, 5, 4, 3, 2, 4, 2, 3, 1, 4. Find the page
3 2 0 3 F
faults for 3 frames using FIFO and LRU page
0 2 0 3 replacement algorithms. (Dec. 15, 10 Marks.
Soln. :
4 4 0 3 F
1. FIFO :
2 4 0 2 F

3 4 3 2 F 1 3 2 4 2 1 5 1 3 2 6 7 5 4 3 2 4 2 3 1 4

0 0 3 2 F 1 1 1 4 4 4 4 4 3 3 3 7 7 7 3 3 3 3 3 3 4

3 0 3 2 3 3 3 3 1 1 1 1 2 2 2 5 5 5 2 2 2 2 2 2

2 0 3 2 2 2 2 2 5 5 5 5 6 6 6 4 4 4 4 4 4 1 1

1 1 3 2 F M M M M H M M H M M M M M M M M HHH M M


5 16 3. OPTIMAL :
% Hit = × 100 = 23.81 % % Miss = × 100
21 21
2 3 3 1 5 2 4 5 3 2 5 2
= 76.19 %

2. LRU : 2 2 2 2 2 2 4 4 4 4 4 4

3 3 3 3 3 3 3 3 2 2 2
1 3 2 4 2 1 5 1 3 2 6 7 5 4 3 2 4 2 3 1 4
1 5 5 5 5 5 5 5 5
1 1 1 4 4 4 5 5 5 2 2 2 5 5 5 2 2 2 2 2 4
M M H M M H M H H M H H
3 3 3 3 1 1 1 1 1 6 6 6 4 4 4 4 4 4 1 1

ns e
2 2 2 2 2 2 3 3 3 7 7 7 3 3 3 3 3 3 3
6
% Hit =  100 = 50 %
12

io dg
M M M M H M M H M M M M M M M M HHH M M
% Miss = 50%
5 16
% Hit = × 100 = 23.81 % % Miss = × 100 =
21 21 Ex. 11.9.8 : Calculate the number of page hits and faults
76.19 % using FIFO, LRU and OPTIMAL page

at le
Ex. 11.9.7 : Find out page hit and miss for the following
string using FIFO, LRU and OPTIMAL page
replacement algorithms for the following page
frame sequence :
2 , 3, 1, 2, 4, 3, 2, 5, 3, 6, 7, 9, 3, 7. (FRAME
ic w
replacement policies considering a frame size SIZE = 3). May 17, 10 Marks.

f three. Soln. :
1. FIFO :
bl no

2, 3, 3, 1, 5, 2, 4, 5, 3, 2, 5, 2.

May 16, 10 Marks. 2 3 1 2 4 3 2 5 3 6 7 9 3 7

Soln. : 2 2 2 2 4 4 4 4 3 3 3 9 9 9

1. FIFO : 3 3 3 3 3 2 2 2 6 6 6 3 3
Pu K

1 1 1 1 1 5 5 5 7 7 7 7
2 3 3 1 5 2 4 5 3 2 5 2
ch

M M M H M H M M M M M M M H
2 2 2 2 5 5 5 5 3 3 3 3
3
% Hit =  100 = 21.43% % Miss = 88.57%
3 3 3 3 2 2 2 2 2 5 5 14

1 1 1 4 4 4 4 4 2 2. LRU :
Te

M M K M M M M H M H M M 2 3 1 2 4 3 2 5 3 6 7 9 3 7

3 2 2 2 2 2 2 2 2 2 6 6 6 3 3
% Hit =  100 = 25%
12 3 3 3 4 4 4 5 5 5 7 7 7 7

% Miss = 75% 1 1 1 3 3 3 3 3 3 9 9 9
H H H H
2. LRU :
4
% Hit =  100 = 28.57 %
2 3 3 1 5 2 4 5 3 2 5 2 14
 % Miss = 71.43%
2 2 2 2 5 5 5 5 5 5 5 5
3. OPTIMAL :
3 3 3 3 2 2 2 3 3 3 3
2 3 1 2 4 3 2 5 3 6 7 9 3 7
1 1 1 4 4 4 2 2 2
2 2 2 2 2 2 2 5 5 5 7 7 7 7
M M H M M H M H H M H H 3 3 3 3 3 3 3 3 3 3 3 3 3
4 1 1 4 4 4 4 4 6 6 9 9 9
% Hit =  100 = 33 %
12
H H H H H H
% Miss = 67%


6 tA2
% Hit =  100 = 42.86 % where r = = Speed Ratio
14 tA1
% Miss = 57.14% 11.9.7 Cache Consistency (Also Known as
11.9.6 Cost and Performance Measurement Cache Coherency) :
of Two Level Memory Hierarchy : 1. In order to work properly for the cache subsystems, the

 Any two level memory has to be analysed with its CPU and the other bus masters must be getting the
most updated copy of the requested information.
performance characteristics as per the following set of
characteristics. 2. There are several cases wherein the data stored in cache
or in main memory may be altered whereas the

ns e
 The different group of two level memories can be cache
duplicate copy remains unchanged.
memory and main memory, main memory and virtual
Causes of cache consistency problems :

io dg
memory, internal and external cache memory etc.
1. When the copy of line in cache, no longer matches the
 Let us see the various parameters to be considered contents of line stored in memory, there is loss of cache
during the performance analysis. consistency. It can be either due to cache line being
updated while the memory line is not, or the memory

at le 2.
line being updated while the cache line is not.
In each of these instances the stale data must be
updated. It can be a result of cache write hit and hence
ic w
the caches write policy has to handle this problem for
the first case.
bl no

3. For the second case the coherency problem is due to


some other bus master changing the data in memory.
This change is to be updated in cache line by the cache
controller, hence the cache controller has to monitor the
system bus.
Pu K

Fig. 11.9.6 : Parameters considered for performance analysis


C1S1 + C2S2 11.9.8 Bus Master / Cache Interaction for
1. Average cost (C) =
S1 + S2 Cache Coherency :
ch

where, C1 and C2 are the costs per bit of memory 1 1. When another device in system uses the buses, it must
(faster memory) and memory 2 (slower memory) become bus master.
respectively. 2. In case of look-through cache design, the cache
Te

controller is requested for bus; while in case of look-


S1 and S2 are the sizes of memory 1 and memory 2
aside cache design, the processor is requested for same.
respectively.
In both cases HOLD and HLDA logic is used.
N1
2. Hit Ratio (H) = 3. In some cases, the request is to be given to bus arbiter
N1 + N2
like 8289.
where N1 is number of hits and N2 is number of misses.
Since bus masters can write to and read from memory,
3. Average access time ( tA ) = H tA1 + (1 – H) tA2 cache consistency problems may happen under three
where tA1 and tA2 are the time taken to access memory 1 circumstances :
(faster memory) and memory 2 (slower memory)
respectively.

tA = H tA1 + (1 – H) tA2

= H tA1 + (1 – H) (tA1 + tB)

where tA2 = tA1 + tB = tA1 + (1 – H) tB


tA1
4. Efficiency () =
tA

tA1 1 Fig. 11.9.7 : Conditions for consistence occurrence of


= = problem
H tA1 + (1 – H) tA2 H + (1 – H) r


... become bus master.

2. In case of a look-through cache design, the cache controller requests the bus, while in case of a look-aside cache design the processor requests it. In both cases the HOLD and HLDA logic is used.

3. In some cases the request has to be given to a bus arbiter such as the 8289.

Since bus masters can write to and read from memory, cache consistency problems may arise under three circumstances :

Fig. 11.9.7 : Conditions for occurrence of cache consistency problems

A. Writes to memory (with write-through cache) :

1. When bus masters write to memory, they update locations that may also be cached by the cache controller.

2. In such cases memory is updated and the line in the cache becomes stale. Hence the cache controller must monitor memory writes to avoid this coherency problem. When such a write is detected, the cache line is invalidated, because it will contain stale data after the write to memory completes. In other words, the cache controller has to monitor the system bus to find out what the other bus master is doing on it, so that if another master updates a line of main memory, the cache can invalidate its copy of that line. This monitoring of the system bus is called snooping.

B. Reads from memory (with write-back cache) :

1. When a bus master reads from memory in a system that has a write-back cache, it may read from a line containing stale data, i.e. the location has been updated in the cache but not in memory.

2. To detect this coherency problem, write-back caches must also snoop reads from memory.

3. The system can be designed to back off the bus master and write the cache line to memory before releasing back-off and allowing the read to continue.

C. Writes to main memory (with write-back cache) :

1. This problem occurs when another bus master performs a memory write to a line containing stale data; the bus master updates one or more locations in memory that are also contained within the cache.

2. Even if the cache line is not capable of data snarfing, simply invalidating that cache line would be a mistake.

3. Since the line has been marked 'modified', some or all of the information in the line is more current than the corresponding data in memory.

4. The memory write being performed by the other bus master will update only some items within the memory line. By invalidating the line in the cache, the controller would quite probably discard data that is more current than that within the memory line.

5. If the cache permits the bus master to complete the write and then flushes the cache line to memory, the data just written by the bus master may be over-written by stale data from the cache line. The correct action is to back off the bus master before it is able to complete the write to memory.

6. The cache controller then seizes the bus and performs a memory write to update this stale line in main memory. In the cache directory the cache line is then invalidated, because the bus master will update the memory line immediately after the line is flushed. The cache then removes the back-off signal, permitting the bus master to reinitiate the memory write operation. When the bus master completes the write to memory, the memory line contains the most up-to-date data.
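To make the idea of snooping concrete, the following C sketch models a write-through cache controller watching writes on the system bus. The directory layout, sizes and function names are assumptions made purely for this illustration and do not describe any particular controller.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LINES 128              /* assumed number of cache lines */
    #define LINE_SIZE 32               /* assumed line size in bytes    */

    /* One directory entry per cache line: a tag and a valid bit. */
    struct dir_entry {
        uint32_t tag;
        bool     valid;
    };

    static struct dir_entry directory[NUM_LINES];

    /* Split a physical address into the index and tag used by the directory. */
    static uint32_t line_index(uint32_t addr) { return (addr / LINE_SIZE) % NUM_LINES; }
    static uint32_t line_tag(uint32_t addr)   { return  addr / (LINE_SIZE * NUM_LINES); }

    /* Called for every memory write observed on the system bus (snooping).
     * If another bus master is updating a line that is also held in the
     * cache, the cached copy is now stale, so the line is invalidated
     * (case A above, write-through cache).                                 */
    void snoop_bus_write(uint32_t addr)
    {
        struct dir_entry *e = &directory[line_index(addr)];

        if (e->valid && e->tag == line_tag(addr)) {
            e->valid = false;          /* snoop hit: drop the stale line */
            printf("snoop hit: line %u invalidated\n", (unsigned)line_index(addr));
        }
    }

    int main(void)
    {
        /* Pretend address 0x1000 is cached, then watch two bus writes. */
        directory[line_index(0x1000)] = (struct dir_entry){ line_tag(0x1000), true };
        snoop_bus_write(0x1000);       /* another master writes a cached line  */
        snoop_bus_write(0x8000);       /* write to an uncached line: no action */
        return 0;
    }

A write-back cache would additionally check a 'modified' bit and, on a snoop hit to a modified line, back off the bus master and flush the line first, as described in cases B and C above.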

11.10 Pentium Processor Cache Unit :

1. The Pentium implements separate data and code caches, referred to as a split cache.

2. For each of these caches the line size is 32 bytes, and the data bus is 8 bytes (64 bits) wide.

3. Hence a burst of 4 consecutive transfers is required to fill a cache line.

4. Implementing the Pentium processor with an L2 cache provides the highest performance.

5. Since it can use a write-back policy, the Pentium processor's L1 data cache introduces additional complexity into the cache consistency logic. The data cache can use a write-through policy or a write-back policy on a line-by-line basis.

When the Pentium processor's data cache uses a write-through policy, its operation with an L2 cache is conceptually identical to that of the 486. The following descriptions assume that both the Pentium processor's L1 cache and its L2 cache are using write-back policies.

11.10.1 Memory Reads Initiated by the Pentium Processor :

1. When either of the execution units or the prefetcher requests information from an internal cache, the request is immediately fulfilled if an L1 cache hit occurs.

2. In case of an internal cache miss, the Pentium processor performs a cache line fill to read the target line from the external cache.

3. The L2 cache receives the memory read bus cycle and checks its directory for a copy of the requested information.

4. If it is a hit in the L2 cache, the L2 cache notifies the Pentium processor that the address is cacheable by asserting KEN#. The L2 cache controller then accesses its cache SRAM, satisfying the cache line fill in a burst of four consecutive 64-bit transfers.

5. If it results in a miss in the L2 cache as well, the L2 cache passes the bus cycle on to the system bus.

6. The non-cacheable address logic (NCAL) determines whether the address is cacheable or not.

7. If the address is determined to be non-cacheable, the NCAL notifies the L2 cache and the processor that the addressed location cannot be safely cached. Hence the bus cycle is not converted into a cache line fill, and a single-transfer bus cycle is run to fetch the requested information directly from memory.

8. If the NCAL determines that the address is cacheable, it indicates this to the L2 cache controller and the processor, and causes the read cycle to be converted into a cache line fill for both the L2 and L1 caches.

9. Since the access is from slow external memory, wait states will be inserted in the transfer. The L2 cache copies the first 64 bits into its cache line-fill buffer, while simultaneously forwarding them to the processor and indicating that valid data is present on the processor's local data bus.

10. The requested information is contained in the first quadword and is therefore given to the internal requester immediately, but three additional transfers must occur to complete the cache line fill. Once the entire line has been received, both the L1 and L2 caches copy the line from their respective line-fill buffers into their caches.
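The read path just described can be condensed into a short C model. The hit-test helpers below are stand-ins invented for this sketch (they are not the real Pentium or NCAL logic), and the bus protocol details are omitted.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in lookup helpers: which level pretends to hold an address. */
    static bool l1_hit(uint32_t addr)         { return (addr & 0xF000) == 0x1000; }
    static bool l2_hit(uint32_t addr)         { return (addr & 0xF000) == 0x2000; }
    static bool ncal_cacheable(uint32_t addr) { return addr < 0xA0000000u; }

    static const char *read_line(uint32_t addr)
    {
        if (l1_hit(addr))                 /* step 1: L1 hit, no bus cycle needed  */
            return "satisfied by L1";

        if (l2_hit(addr))                 /* steps 3-4: KEN# asserted, burst fill */
            return "line fill from L2 (burst of 4 x 64-bit transfers)";

        if (!ncal_cacheable(addr))        /* steps 6-7: NCAL forbids caching      */
            return "single-transfer read from memory (not cached)";

        /* steps 8-10: cacheable miss - burst line fill into both L2 and L1,
         * with wait states; the first quadword is forwarded immediately.   */
        return "line fill from main memory into L2 and L1";
    }

    int main(void)
    {
        uint32_t test[] = { 0x1004, 0x2040, 0xB0000000u, 0x5000 };
        for (unsigned i = 0; i < sizeof test / sizeof test[0]; i++)
            printf("0x%08X : %s\n", (unsigned)test[i], read_line(test[i]));
        return 0;
    }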

11.10.2 Memory Writes Initiated by the Pentium Processor :

1. The data cache can use either a write-through or a write-back policy.

2. When a write-back policy is used together with an L2 cache, special care has to be taken in the interaction between the Pentium processor's cache and the external cache to ensure cache coherency.

Write misses :

1. The L1 data cache controller checks the target address of the memory write to determine whether a copy of the target memory line exists in it.

2. In case of a write miss, the Pentium processor initiates a single-transfer bus cycle to the target location. The L2 cache controller checks its directory to determine whether a copy of the target line is resident in its cache.

3. If an L2 cache hit occurs, the write is performed in L2 according to the write-back or write-through policy.

4. If an L2 cache miss occurs, the action of the L2 cache depends on whether or not it supports allocate-on-write.

5. If the L2 cache does not have allocate-on-write capability, it passes the data on to main memory. When main memory completes the write, it activates ready to the L2 cache and the processor to notify them that the cycle has completed.

6. If the L2 cache supports allocate-on-write, it passes the data to main memory as described above, but will then perform an L2 cache line fill from the same line.

Write hits and the write-once policy :

- Assume that a new line has just been read from memory and brought into the L1 cache. Since the line passes through the L2 cache, a copy of the line also exists in the L2 write-back cache and in memory.
- Now suppose one of the execution units writes to the same line. The L1 data cache controller checks its directory and finds that the target line is in the data cache. The L1 cache line is modified, and its directory entry is updated to reflect the 'modified' state of the line.
- Now suppose another bus master reads from the same line in memory. The L2 cache, unaware that the L1 data cache has modified this line, snoops the address given by the other bus master. The L2 cache checks its directory and finds a cache hit on a clean line. It takes no action, because it assumes that the other bus master will get valid data from memory.
- The problem is that the L2 cache was not notified by the L1 data cache that the line had been modified, and hence the other bus master gets stale data from memory.
- The solution to this problem is the write-once policy: use write-through for the first write and write-back thereafter.

1. When a line is initially placed in the L1 data cache, it is marked as 'shared', and the first write hit to that line in L1 uses write-through to the L2 cache.

2. Thus, the first time one of the execution units writes to the line, the line is updated and the write operation is written through to the L2 cache. Hence the L2 cache is also updated, and its directory entry is changed to indicate that the line has been 'modified'. Now that the L2 cache directory marks this line as 'modified', it will snoop correctly and will not permit any bus master to read the stale data from the corresponding line in memory. After this write-through, the L1 data cache changes the state of the line from 'shared' to the 'exclusive' state.

3. When an execution unit writes to this line again, the L1 data cache finds the target line in the data cache in the exclusive state. The L1 line is modified and its directory entry is updated to the 'modified' state. All subsequent writes to this line are performed using the write-back policy.

11.10.3 Memory Reads Initiated by Another Bus Master :

1. If a snoop read hit occurs to a line that has been modified, the L2 cache causes the bus master to back off and simultaneously passes the address to the Pentium processor's L1 cache so that it can also snoop the address.

2. If the L1 cache controller detects a snoop read hit to a modified line, it performs a burst write-back cycle to update memory. After the write-back has completed, the L2 cache controller removes the back-off signal and allows the bus master to complete the memory read.

11.10.4 The MESI Model :

- Based on the discussion in the previous sections, the state transitions of a line in the cache can be shown as in Fig. 11.10.1.
- Besides the basic Pentium read/write and other-bus-master read/write operations, a few more cases appear in the MESI model :

1. An internal snoop hit has to be considered when the data required by the processor is present in the code cache instead of the data cache, or vice versa. In such cases a modified line is written back to main memory and then invalidated, while a non-modified line is simply invalidated.

2. The FLUSH# signal directs the L1 caches to be cleared; the WBINVD instruction does the same under software control. In either case a modified line is first written back to main memory and then invalidated, while a non-modified line is simply invalidated.

3. The INVD instruction indicates that lines are to be invalidated without being written back, even if they are modified; the processor simply invalidates the corresponding line.

Fig. 11.10.1 : State diagram of MESI transition states as in the Pentium's data cache
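As a companion to Fig. 11.10.1, the following C sketch enumerates the four MESI states and applies a few of the transitions discussed above (a processor write following the write-once policy, and a snoop of another master's write). It is a deliberately simplified illustration with invented function names, not the complete Pentium protocol.

    #include <stdio.h>

    /* The four MESI states of a cache line. */
    enum mesi { INVALID, SHARED, EXCLUSIVE, MODIFIED };

    static const char *state_name[] = { "Invalid", "Shared", "Exclusive", "Modified" };

    /* Processor write hit, following the write-once policy: the first write
     * to a Shared line is written through (S -> E); later writes use
     * write-back and mark the line Modified.                               */
    static enum mesi on_processor_write(enum mesi s)
    {
        switch (s) {
        case SHARED:    return EXCLUSIVE;   /* first write: write-through to L2  */
        case EXCLUSIVE: return MODIFIED;    /* second write: write-back          */
        case MODIFIED:  return MODIFIED;    /* further writes stay in L1         */
        default:        return s;           /* a write miss is handled elsewhere */
        }
    }

    /* Another bus master writes to the line: the local copy becomes stale,
     * so a modified line is written back first and the line is invalidated. */
    static enum mesi on_snoop_write(enum mesi s)
    {
        if (s == MODIFIED)
            puts("  (write back the modified line before invalidating it)");
        return INVALID;
    }

    int main(void)
    {
        enum mesi s = SHARED;                       /* line has just been filled */
        printf("start            : %s\n", state_name[s]);

        s = on_processor_write(s);
        printf("after 1st write  : %s\n", state_name[s]);

        s = on_processor_write(s);
        printf("after 2nd write  : %s\n", state_name[s]);

        s = on_snoop_write(s);                      /* another master writes */
        printf("after snoop write: %s\n", state_name[s]);
        return 0;
    }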

11.11 Input / Output System :

- There is a wide variety of peripherals or I/O devices that deliver different amounts of data, at different speeds and in different formats. All these devices are slower than the CPU and RAM, and hence I/O modules are needed to interface them to the CPU.
- An input/output module is the interface between the CPU and memory on one side and one or more peripherals on the other.
- The general model of an I/O module interfacing with the system bus is shown in Fig. 11.11.1.

Fig. 11.11.1 : General model of I/O module interface

- The various functions of the I/O module involve :

1. Issue of control and timing signals
2. Communication with the CPU
3. Communication with the peripheral
4. Buffering of data between the CPU and the peripheral, and
5. Detection of errors

- The internal block diagram of an I/O module is shown in Fig. 11.11.2.

Fig. 11.11.2 : Internal block diagram of an I/O module

11.11.1 Parallel Versus Serial Interface :

- The word 'communication' here means data transfer between two points.
- The data may be digital or analog in nature.
- We will consider only digital data transfer, because the microprocessor is a digital circuit. Suppose you want to transfer data from point A to point B. There are two possible ways of doing it :

(1) Parallel data transfer
(2) Serial data transfer

- For parallel data transfer we can use the 8255. Two 8255s are connected, one at each side: Port A of the 8255 at point A is connected to Port A of the 8255 at point B. The data transferred is thus 8 bits at a time. To implement this communication we need the 8 lines of Port A interconnected, plus one more line as the common ground between the two points.
- In serial data transfer the data is transferred serially on a single line; the same hardware used for parallel transfer can also be used to implement this. Instead of connecting all 8 lines, a single line of Port A of the 8255 at point A is connected to Port A of the 8255 at point B. To implement this communication we require only one line of Port A interconnected and a second line as the common ground between the two points.
- Now let us compare these two methods of data transfer.

Sr. No. | Parallel | Serial
1. | Parallel lines of 8/16/32 bits are used, so 8/16/32 bits can be transmitted simultaneously. | Only 1 bit is transmitted at a time.
2. | The data transfer is comparatively faster. | The data transfer is comparatively slower.
3. | Because of the many parallel paths, 'crosstalk' among the different bits is possible. | No crosstalk is possible.
4. | It cannot be used for distant communication. | It can be used for distant communication.
5. | More hardware is required. | Less hardware is required.
6. | It is comparatively costlier. | It is comparatively cheaper.

- In both methods, the cost of connecting two distant points is the main factor. So although parallel data transfer is faster, it is preferred only for short distances; for long distances, serial data transfer is preferred.
- In serial data transfer the 8 bits of data are converted into a serial stream of 8 bits using a shift register in parallel-in serial-out mode. These serial bits are then transferred on a single line using serial I/O.
- To transfer 8 bits of data, 8 clock pulses are required. On the other side exactly the opposite process is carried out: the 8 serial bits are accepted and converted back to parallel form to recover the 8-bit data. This process is shown in Fig. 11.11.3.

Fig. 11.11.3 : Serial I/O
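The parallel-in serial-out conversion at the transmitter, and the reverse conversion at the receiver, can be modelled in a few lines of C. This is only an illustration of the bit-shifting idea sketched in Fig. 11.11.3; it is not a driver for a real 8255, and the "line" is simply modelled as an array holding one bit per clock pulse.

    #include <stdint.h>
    #include <stdio.h>

    /* Transmitter: parallel-in serial-out shift register.  One bit is
     * shifted onto the line per clock pulse, LSB first.                 */
    static void shift_out(uint8_t data, int line[8])
    {
        for (int clk = 0; clk < 8; clk++) {
            line[clk] = data & 1;      /* place the next bit on the line */
            data >>= 1;                /* shift the register             */
        }
    }

    /* Receiver: serial-in parallel-out.  The 8 bits are collected in the
     * same order and reassembled into the original byte.                */
    static uint8_t shift_in(const int line[8])
    {
        uint8_t data = 0;
        for (int clk = 0; clk < 8; clk++)
            data |= (uint8_t)(line[clk] << clk);
        return data;
    }

    int main(void)
    {
        int line[8];
        shift_out(0xA5, line);                                   /* 8 clock pulses */
        printf("received: 0x%02X\n", (unsigned)shift_in(line));  /* prints 0xA5    */
        return 0;
    }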

11.11.2 Types of Communication Systems :

The communication systems are classified on the basis of the direction of transmission :

Fig. 11.11.4 : Types of communication systems

1. Simplex :

- Simplex is one-way transmission.
- The connection is such that data transfer takes place in one direction only.
- There is no possibility of data transfer in the other direction.
- System A is only a transmitter and system B is only a receiver.

2. Duplex :

Duplex is two-way transmission. It is further divided into two groups :

(a) Half duplex :

- It is a connection between two terminals such that data may travel in both directions, but transmission is active in only one direction at a time.
- This means that the line has to turn around after communication in one direction is complete.

(b) Full duplex :

- It is a connection between two terminals such that data may travel in both directions simultaneously.
- Thus it can carry transmission in one direction or in both directions at the same time.

11.12 I/O Modules and 8089 I/O Processor :

An input/output device can never be connected directly to the processor; it always has to be interfaced using an I/O module.

An I/O module is required for the following reasons :

1. I/O devices are normally slower than the processor and also differ in speed among themselves. Without an I/O module the processor would have to wait a long time for the I/O devices. The I/O module therefore works as a buffer between the processor and the I/O device, holding the data for the required time.

2. Each I/O device has a different data bus width. The I/O module does the required width conversion.

3. Each I/O device has a different protocol to be followed. Some use serial communication, some use parallel, some have handshaking signals, etc. The I/O module communicates according to the protocol required by the I/O device.

11.12.1 I/O Module :

1. Need of an I/O module : each I/O device operates at a different speed, has a different data format and a different protocol. Also, most I/O devices are slower than the processor. Hence an I/O module is used to interface the I/O device to the processor.

Fig. 11.12.1 : Input output module

Block diagram of I/O module :

Fig. 11.12.2 : Block diagram of I/O module

- The data register is used to store the data given by an input device to be forwarded to the processor, or given by the processor to be forwarded to an output device.
- The address register is used to provide the address of the I/O device to be accessed.
- The mode and control unit indicates the mode of operation of the I/O module, and controls the transfer of data between the I/O module and the I/O device as well as between the I/O module and the CPU.

11.13 Types of Data Transfer Techniques : Programmed I/O, Interrupt Driven I/O and DMA :

- There is yet another way of classifying the interfacing of I/O devices, based on how and when the data is transferred between the processor and the I/O devices.
- There are three types under this classification, namely programmed I/O, interrupt driven I/O and DMA (Direct Memory Access).

Definition of polling :
Polling is a mechanism wherein the processor checks each and every device to see whether it needs service or not.

11.13.1 Programmed I/O :

- In the programmed I/O method of interfacing, the CPU has direct control over the I/O.
- The processor checks the status of the devices, issues read or write commands and then transfers the data. During the data transfer the CPU waits for the I/O module to complete the operation, and hence this method wastes CPU time.
- The sequence of operations carried out in a programmed I/O operation is :

1. The CPU requests an I/O operation.
2. The I/O module performs the requested operation.
3. The I/O module updates the status bits.
4. The CPU checks these status bits periodically. The I/O module can neither inform the CPU directly nor interrupt the CPU.
5. The CPU may wait for the operation to complete, or may carry on and check the operation later.

- IC 8255 is generally used as an I/O module for the programmed I/O method of interfacing.
- A common programming task is the transfer of a block of words between an input/output device and memory.
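To relate the five steps above to actual code, here is a minimal programmed-I/O block read in C. The status and data register addresses and the READY bit are assumptions made for this sketch only (they are not the 8255 programming model); the point is the busy-wait on the status bits, which is exactly where the CPU time is wasted.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed memory-mapped registers of a simple input device. */
    #define IO_STATUS  (*(volatile uint8_t *)0x40000000u)
    #define IO_DATA    (*(volatile uint8_t *)0x40000004u)
    #define READY_BIT  0x01u

    /* Read 'count' bytes from the device into 'buf' using programmed I/O. */
    void pio_read_block(uint8_t *buf, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            while ((IO_STATUS & READY_BIT) == 0)
                ;                      /* step 4: poll the status bits   */
            buf[i] = IO_DATA;          /* transfer one byte, then repeat */
        }
    }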

- Fig. 11.13.1 gives a flowchart for transferring a block of data in this way.

Fig. 11.13.1 : Transferring a block of data using programmed input/output

11.13.2 Interrupt Driven I/O :

- Interrupt driven I/O overcomes the disadvantage of programmed I/O, i.e. the CPU waiting for the I/O device.
- The CPU no longer keeps checking whether the device is ready; instead, the I/O module interrupts the CPU when it is ready.
- The sequence of operations for interrupt driven I/O is as follows :

1. The CPU issues the read command to the I/O device.
2. The I/O module gets the data from the peripheral while the CPU does other work.
3. Once the I/O module has received the data from the I/O device, it interrupts the CPU.
4. On getting the interrupt, the CPU requests the data from the I/O module.
5. The I/O module transfers the data to the CPU.

- After issuing the read command the CPU carries on with its work, but checks for an interrupt after every instruction cycle, as seen earlier in this chapter.
- When the CPU gets an interrupt, it performs the following operations in sequence :
  - It saves the context, i.e. the contents of the registers, on the stack.
  - It processes the interrupt by executing the corresponding ISR.
  - It restores the register context from the stack.
- IC 8259, which has 8 interrupt lines, is used as the interrupt-handling I/O module when interrupt driven I/O is used.
- The interrupt driven input/output mechanism for transferring a block of data is shown in Fig. 11.13.2.

Fig. 11.13.2 : Transferring a block of data using interrupt driven input/output
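The same transfer, done with interrupt driven I/O, splits into a foreground part and an interrupt service routine. The sketch below is illustrative only: the device register address and the helper functions are assumptions, and on a real system the ISR would be registered with the interrupt controller (for example the 8259) by platform-specific code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed device data register (illustrative address only). */
    #define IO_DATA (*(volatile uint8_t *)0x40000004u)

    static volatile bool    data_ready;   /* set by the ISR, read by foreground code */
    static volatile uint8_t data_byte;

    /* Stand-ins for hardware-specific helpers (assumed for this sketch). */
    static void issue_read_command(void) { /* write the device's command register */ }
    static void do_other_work(void)      { /* the CPU's normal foreground work    */ }

    /* Interrupt service routine: entered when the I/O module interrupts
     * (step 3).  The register context is saved on entry and restored on
     * return, as described above.                                        */
    void io_isr(void)
    {
        data_byte  = IO_DATA;      /* steps 4-5: collect the byte from the module */
        data_ready = true;
    }

    /* Foreground code: issue the read, then keep doing useful work until
     * the ISR reports that the data has arrived, instead of busy-waiting
     * on a status register as in programmed I/O.                         */
    uint8_t read_one_byte(void)
    {
        issue_read_command();                 /* step 1    */
        while (!data_ready)                   /* steps 2-3 */
            do_other_work();
        data_ready = false;
        return data_byte;
    }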

Transferring a word of data :

- The CPU issues a 'READ' command to the input/output device and then switches to some other program; the CPU may be working on several different programs.
- Once the input/output device is ready with the data in its data register, it signals an interrupt to the CPU.
- When the interrupt from the input/output device occurs, the CPU suspends execution of the current program, reads the data from the port and then resumes execution of the suspended program.

11.13.3 DMA :

DMA stands for Direct Memory Access. With this method the I/O module can directly access (read or write) the memory.

- Interrupt driven and programmed I/O both require the active participation of the CPU; hence the transfer rate is limited and the CPU is kept busy doing the transfer. DMA is the solution to these problems.
- The DMA controller takes over control of the bus from the CPU for the I/O transfer.
- The internal block diagram of a DMA controller used as the I/O module for the DMA method of interfacing is shown in Fig. 11.13.3.

Fig. 11.13.3 : Internal block diagram of DMA controller

- In Fig. 11.13.3 you will notice that there are various registers, such as the data count, data register and address register.
- The address register is used to hold the address of the memory location from or to which the data is to be transferred. There may be multiple address registers to hold multiple addresses.
- The address may be incremented or decremented after every transfer, based on the mode of operation.
- The data count register is used to keep track of the number of bytes to be transferred. This counter is decremented after every transfer.
- The data register is used in a special case, i.e. when a block is to be transferred from one memory location to another memory location.
- You will also note in Fig. 11.13.3 that the read and write signals are bidirectional.
- The DMA controller is initially programmed by the CPU with the count of bytes to be transferred, the address of the memory block for the data, and so on.
- During this programming of the DMAC (DMA controller), the read and write lines work as inputs to the DMAC, because the CPU has to tell the DMAC whether it is reading from or writing to the DMAC.
- Once the DMAC takes control of the system bus, i.e. while it transfers data between memory and the I/O device, these read and write signals work as outputs. They are used to tell the memory whether the DMAC wants to read from or write to it, according to whether the data transfer is from memory to I/O or from I/O to memory.
- The speciality of DMA is that the CPU carries on with other work while the DMA controller deals with the transfer of data. The DMA controller signals the CPU when it has finished.
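The set-up phase described above can be pictured with a small C sketch. The register layout is a generic, assumed one, chosen only to show what the CPU writes before handing the transfer over; it is not the register map of a real controller such as the 8237.

    #include <stdint.h>

    /* A generic, assumed register layout for a simple DMA controller. */
    struct dma_controller {
        volatile uint32_t address;   /* memory address for the transfer      */
        volatile uint16_t count;     /* byte count, decremented by the DMAC  */
        volatile uint8_t  control;   /* direction, mode and start bit        */
        volatile uint8_t  status;    /* set by the DMAC when it has finished */
    };

    #define DMA_START      0x01u
    #define DMA_MEM_TO_IO  0x02u

    /* Programming phase: the CPU writes the registers (the read/write lines
     * are inputs to the DMAC here).  After this the DMAC moves the block on
     * its own while the CPU carries on with other work; completion is
     * reported through the status register or an interrupt.                 */
    void start_dma_write(struct dma_controller *dmac,
                         uint32_t mem_addr, uint16_t nbytes)
    {
        dmac->address = mem_addr;
        dmac->count   = nbytes;
        dmac->control = DMA_MEM_TO_IO | DMA_START;
    }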

11.13.4 DMA Transfer Modes :

- There are various modes of operation used by the DMA controller to transfer data between memory and the I/O device.
- The four major modes are discussed below :

Fig. 11.13.4 : DMA transfer modes

1. Single transfer mode :

- In single transfer mode, the device is programmed to make only one byte transfer after getting control of the system bus.
- After transferring one byte, control of the bus is returned to the CPU.
- The word count is decremented and the address is incremented or decremented after each transfer.
- The disadvantage of this method is that the I/O device has to wait after every transfer for the extra request/grant signals.
- The advantage is that the CPU loses access to the system bus only for one transfer at a time, never for a long period.

2. Block transfer mode :

- In block transfer mode, the device is activated by DREQ (DMA request) or by a software request, and continues making transfers until a terminal count (i.e. the counter reaches zero) or an external End of Process (EOP) is encountered.
- The disadvantage is that the CPU is kept off the system bus for a long time, until all the bytes in the block have been transferred.
- The problem becomes worse if the I/O device is slow and the system has to wait for it to complete its operation; the CPU then has to wait for a very long period.
- The advantage is that the I/O device gets its data transferred at a very fast rate.

3. Demand transfer mode :

- In demand transfer mode, the device continues making transfers until a terminal count or an external EOP is encountered, or until DREQ goes inactive.
- Thus the transfer may continue until the I/O device has exhausted its data handling capacity.
- This method is therefore a trade-off between the previous two. If the I/O device is fast enough, it keeps receiving data and does not have to wait for the extra request/grant signals as in single transfer mode.
- At the same time the CPU does not have to wait for a long period if the I/O device is slow, because when the device slows down (DREQ goes inactive) the transfers terminate.

4. Hidden transfer mode :

- In hidden transfer mode, the DMA controller takes charge of the system bus and transfers data only when the processor does not need the bus.
- The processor does not even realise that the transfer is taking place.
- The processor does not need the system bus when it is executing part of an instruction inside the ALU, or certain instructions that do not need bus access at all; this happens mostly between machine cycles.
- Hence these transfers are hidden from the processor.
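The first three modes differ mainly in when the controller must give the bus back after moving a byte. The short C sketch below captures that decision; the signal names are assumptions for this illustration and do not model a specific controller. Hidden transfer mode is left out because it differs in when the bus is taken (only while the CPU does not need it), not in when it is released.

    #include <stdbool.h>

    /* The DMA transfer modes discussed above (hidden mode omitted). */
    enum dma_mode { SINGLE, BLOCK, DEMAND };

    /* Signals sampled by the controller after each byte has been moved. */
    struct dma_signals {
        bool terminal_count;   /* word count has reached zero      */
        bool eop;              /* external End of Process asserted */
        bool dreq;             /* device still requesting service  */
    };

    /* Returns true if the controller may keep the bus and move another
     * byte, or false if it must release the bus back to the CPU.        */
    bool keep_bus(enum dma_mode mode, const struct dma_signals *s)
    {
        if (s->terminal_count || s->eop)
            return false;                 /* every mode stops here            */

        switch (mode) {
        case SINGLE: return false;        /* one byte per bus grant           */
        case BLOCK:  return true;         /* hold the bus for the whole block */
        case DEMAND: return s->dreq;      /* hold it only while the device
                                             can still supply or accept data  */
        }
        return false;
    }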

Review Questions

Q. 1  What are the different memory parameters ?
Q. 2  Give the classifications of primary and secondary memories.
Q. 3  Write a short note on memory hierarchy.
Q. 4  Write a short note on memory allocation.
Q. 5  What is the principle of locality of reference ?
Q. 6  What are the two different types of cache architecture ? Explain look-aside cache design.
Q. 7  What are the advantages and disadvantages of look-through cache design ?
Q. 8  Write a short note on cache mapping techniques.
Q. 9  What are the parameters considered for performance analysis ?
Q. 10 What are the conditions for occurrence of cache consistency problems ?
Q. 11 Write a short note on the Pentium processor cache unit.
Q. 12 Write a short note on write hits and the write-once policy.
Q. 13 With a neat state diagram, explain the MESI model.
Q. 14 Differentiate between parallel and serial interfaces.
Q. 15 What are the different types of communication systems ? Explain.
Q. 16 With a neat block diagram, explain the I/O module.
Q. 17 Write a short note on direct memory access.
Q. 18 What are the different transfer modes in DMA ? Explain.
Q. 19 Explain various types of ROM : magnetic as well as optical.
Q. 20 Interface 7 KB EPROM and 6 KB RAM to a processor with a 16-bit address and 7-bit data bus.
