Module 3
Semester 2/2024
Contents
1.5.2.1 Gray code
1.5.2.2 Formation of a K-Map
1.5.2.3 Logic Minimization Using K-Maps (Sum of Products)
1.5.2.4 Logic Minimization Using K-Maps (Product of Sums)
1.5.2.5 Minimal Sum
1.5.3 Don't Cares
1.5.4 Using XOR Gates
1.6 Timing Hazards and Glitches
1.7 Assignments
Chapter 1
1.1 Introduction
This week covers the techniques used to synthesize, analyze, and manipulate logic functions. The ultimate purpose of these techniques is to create, from a truth table or word description, a logic circuit built with the basic gates described in week 1. This process is called combinational logic design.
Combinational logic refers to circuits where the output depends on the present value of the inputs.
This simple definition implies that there is no storage capability in the circuitry and a change on
the input immediately impacts the output. To begin, we first define the rules of Boolean algebra, which
provide the framework for the legal operations and manipulations that can be taken on a two-valued
number system (i.e., a binary system). We then explore a variety of logic design and manipulation
techniques. These techniques allow us to directly create a logic circuit from a truth table and then to
manipulate it to either reduce the number of gates necessary in the circuit or to convert the logic circuit
into equivalent forms using alternate gates. The goal of this week is to provide an understanding of the basic principles of combinational logic design. By the end of this week, you should be able to:
• Analyze a combinational logic circuit to determine its logic expression, truth table, and timing
information.
Synthesize a logic circuit in canonical form (sum of products or product of sums) from a functional
description including a truth table, minterm list, or maxterm list.
• Synthesize a logic circuit in minimized form (sum of products or product of sums) through alge-
braic manipulation or with a Karnaugh map.
• Describe the causes of timing hazards in digital logic circuits and the approaches to mitigate them.
1.2 Boolean Algebra

Boolean algebra provides the basis for more powerful mathematics such as solving for unknowns and manipulating expressions into equivalent forms. The ability to manipulate expressions into equivalent forms allows us to minimize the number of logic operations necessary and also to put an expression into a form that can be directly synthesized using modern logic circuits.
In 1854, English mathematician George Boole presented an abstract algebraic framework for a sys-
tem that contained only two states, true and false. This framework essentially launched the field of
computer science even before the existence of the modern integrated circuits that are used to implement
digital logic today. In 1938, American mathematician Claude Shannon applied Boole's algebraic framework to the analysis of switching circuits, work he later continued at Bell Labs, thus launching the fields of digital circuit design and
information theory. Boole’s original framework is still used extensively in modern digital circuit design
and thus bears the name Boolean algebra. Today, the term Boolean algebra is often used to describe not
only George Boole’s original work but all of those that contributed to the field after him.
1.2.1 Operations
In Boolean algebra there are two valid states (true and false) and three core operations: conjunction (AND), disjunction (OR), and negation (NOT).
From these three operations, more sophisticated operations can be created including other logic func-
tions (i.e., BUF, NAND, NOR, XOR, XNOR, etc.) and arithmetic. Engineers primarily use the terms
AND, OR, and NOT instead of conjunction, disjunction, and negation. Similarly, engineers primarily
use the symbols for these operators described in this week (e.g., ., +, and ’) instead of ∧, ∨, and ¬.
1.2.2 Axioms
An axiom is a statement of truth about a system that is accepted by the user. Axioms are very simple
statements about a system but need to be established before more complicated theorems can be proposed.
Axioms are so basic that they do not need to be proved in order to be accepted. Axioms can be thought of
as the basic laws of the algebraic framework. The terms axiom and postulate are synonymous and used
interchangeably. In Boolean algebra there are five main axioms. These axioms will appear redundant
with the description of basic gates but must be defined in this algebraic context so that more powerful
theorems can be proposed.
1.2.2.3 Axiom #3: Definition of a Logical Product
This axiom defines a logical product or multiplication. Logical multiplication is denoted using either
a dot (.), an ampersand (&), or the conjunction symbol (∧). The result of logical multiplication is true
when both inputs are true and false otherwise.
Axiom #3 – Definition of a logical product: A.B = 1 if A = B = 1 and A.B = 0 otherwise.
1.2.3 Theorems
A theorem is a more sophisticated truth about a system that is not intuitively obvious. Theorems are
proposed and then must be proved. Once proved, they can be accepted as a truth about the system going
forward. Proving a theorem in Boolean algebra is much simpler than in our traditional decimal system
due to the fact that variables can only take on one of two values, true or false. Since the number of input
possibilities is bounded, Boolean algebra theorems can be proved by simply testing the theorem using
every possible input code. This is called proof by exhaustion. The following theorems are used widely
in the manipulation of logic expressions and reduction of terms within an expression.
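Proof by exhaustion is easy to automate. The sketch below (Python, with 1/0 standing in for true/false) checks a candidate equality against every possible input code; the absorption identity A + A.B = A, covered later in this section, is used as the test case.

```python
from itertools import product

def proof_by_exhaustion(lhs, rhs, n_vars):
    """Return True iff lhs equals rhs for every possible combination of n_vars inputs."""
    return all(lhs(*bits) == rhs(*bits) for bits in product([0, 1], repeat=n_vars))

# Absorption: A + A.B = A, and its dual A.(A + B) = A ('|' is OR, '&' is AND)
print(proof_by_exhaustion(lambda a, b: a | (a & b), lambda a, b: a, 2))  # True
print(proof_by_exhaustion(lambda a, b: a & (a | b), lambda a, b: a, 2))  # True
# A non-theorem fails the exhaustive check:
print(proof_by_exhaustion(lambda a, b: a | b, lambda a, b: a & b, 2))    # False
```

Because a two-valued system has only 2^n input codes, this brute-force check is a complete proof, not merely a spot test.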
Figure 1.1: Proving De Morgan’s theorem of duality using proof by exhaustion.
Duality is important for two reasons. First, it doubles the impact of a theorem. If a theorem is proved
to be true, then the dual of that theorem is also proved to be true. This, in essence, gives twice the
theorem with the same amount of proving. Boolean algebra theorems are almost always given in pairs,
the original and the dual. That is why duality is covered as the first theorem.
The second reason that duality is important is because it can be used to convert between positive and
negative logic. Until now, we have used positive logic for all of our examples (i.e., a logic HIGH = true
= 1 and a logic LOW = false = 0). As mentioned earlier, this convention is arbitrary, and we could have
easily chosen a HIGH to be false and a LOW to be true (i.e., negative logic). Duality allows us to take
a logic expression that has been created using positive logic (F) and then convert it into an equivalent
expression that is valid for negative logic (FD ). Example in Fig. 1.2 shows the process for how this
works.
Figure 1.2: Converting between positive and negative logic using duality.
One consideration when using duality is that the order of precedence follows the original function.
This means that in the original function, the axiom for precedence states the order as NOT-AND-OR;
however, this is not necessarily the correct precedence order in the dual. For example, if the original
function was F = A . B + C, the AND operation of A and B would take place first, and then the result
would be OR’d with C. The dual of this expression is FD = A + B . C. If the expression for FD was
evaluated using traditional Boolean precedence, it would show that FD does NOT give the correct result
per the definition of a dual function (i.e., converting a function from positive to negative logic).
The order of precedence for FD must correlate to the precedence in the original function. Since in the
original function A and B were operated on first, they must also be operated on first in the dual. In order
to easily manage this issue, parentheses can be used to track the order of operations from the original
function to the dual. If we put parentheses in the original function to explicitly state the precedence of
the operations, it would take the form F = (A . B) + C. These parentheses can be mapped directly to the
dual yielding FD = (A + B) . C. This order of precedence in the dual is now correct.
Now that we have covered the duality operation, its usefulness and its pitfalls, we can formally define
this theorem as:
De Morgan’s Duality: An algebraic equality will remain true if all 0’s and 1’s are interchanged
and all AND and OR operations are interchanged. Furthermore, taking the dual of a positive logic
function will produce the equivalent function using negative logic if the original order of precedence is
maintained.
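The worked example above can be verified by exhaustion. The sketch below checks the common formal statement of the positive/negative logic relationship, FD(A, B, C) = NOT F(NOT A, NOT B, NOT C), for F = (A.B) + C and its precedence-preserving dual FD = (A + B).C.

```python
from itertools import product

def F(a, b, c):          # original positive-logic function: F = (A.B) + C
    return (a & b) | c

def F_dual(a, b, c):     # dual with the original precedence preserved: FD = (A + B).C
    return (a | b) & c

# Duality as negative logic: FD(A,B,C) = NOT F(NOT A, NOT B, NOT C)
ok = all(F_dual(a, b, c) == 1 - F(1 - a, 1 - b, 1 - c)
         for a, b, c in product([0, 1], repeat=3))
print(ok)  # True
```

If F_dual were instead evaluated with default precedence as A + (B.C), the check above would fail, which is exactly the pitfall the parentheses guard against.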
1.2.3.2 Identity
An identity operation is one that, when performed on a variable, yields the variable itself regardless of its value. The following is the formal definition of the identity theorem. Fig. 1.3 shows the gate-level
depiction of this theorem.
Identity: OR’ing any variable with a logic 0 will yield the original variable. The dual: AND’ing any
variable with a logic 1 will yield the original variable.
The identity theorem is useful for reducing circuitry when it is discovered that a particular input will
never change values. When this is the case, the static input variable can simply be removed from the
logic expression making the entire circuit a simple wire from the remaining input variable to the output.
Figure 1.4: Gate-level depiction of the null element theorem.
1.2.3.4 Idempotent
An idempotent operation is one that has no effect on the input, regardless of the number of times the
operation is applied. The following is the formal definition of idempotence. Fig. 1.5 shows the gate-level
depiction of this theorem.
Idempotent: OR’ing a variable with itself results in itself. The dual: AND’ing a variable with itself
results in itself.
This theorem also holds true for any number of operations such as A + A + A + ... + A = A and
A . A . A . ... . A = A.
1.2.3.5 Complements
This theorem describes an operation of a variable with the variable’s own complement. The following is
the formal definition of complements. Fig. 1.6 shows the gate-level depiction of this theorem.
Complements: OR’ing a variable with its complement will produce a logic 1. The dual: AND’ing a
variable with its complement will produce a logic 0.
Figure 1.6: Gate-level depiction of the complements theorem.
The complement theorem is again useful for reducing circuitry when these types of logic expressions
are discovered.
1.2.3.6 Involution
An involution operation describes the result of double negation. The following is the formal definition
of involution. Fig. 1.7 shows the gate-level depiction of this theorem.
Involution: Taking the double complement of a variable will result in the original variable.
This theorem is not only used to eliminate inverters but also provides us a powerful tool for inserting
inverters in a circuit. We will see that this is used widely with the second of De Morgan’s laws that will
be introduced at the end of this section.
Figure 1.8: Gate-level depiction of the commutative property.
One practical use of the commutative property is when wiring or routing logic circuitry together.
Example in Fig. 1.9 shows how the commutative property can be used to untangle crossed wires when
implementing a digital system.
Figure 1.9: Using the commutative property to untangle crossed wires.
1.2.3.8 Associative Property

The term associative describes how the grouping of variables in an operation has no impact on the result. The following is the formal definition of the associative property. Fig. 1.10 shows the gate-level depiction of this theorem.
Associative Property: The grouping of variables doesn’t impact the result of an OR operation. The
dual: The grouping of variables doesn’t impact the result of an AND operation.
One practical use of the associative property is addressing fan-in limitations of a logic family. Since
the grouping of the input variables does not impact the result, we can accomplish operations with large
numbers of inputs using multiple gates with fewer inputs. Example in Fig. 1.11 shows the process of
using the associative property to address a fan-in limitation.
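One way to picture the fan-in fix is as a balanced tree of 2-input gates. In the sketch below, and2 stands in for a physical 2-input AND gate; wide_and reduces an arbitrarily wide AND this way and reports how many gate levels result.

```python
def and2(a, b):
    """A single 2-input AND gate."""
    return a & b

def wide_and(inputs):
    """Reduce any number of inputs to one output using only 2-input AND gates,
    pairing adjacent signals at each level (a balanced tree, courtesy of
    associativity). Returns (result, number_of_gate_levels). Intended for
    power-of-two input counts; an odd leftover passes through a level ungated."""
    levels = 0
    while len(inputs) > 1:
        inputs = [and2(inputs[i], inputs[i + 1]) if i + 1 < len(inputs) else inputs[i]
                  for i in range(0, len(inputs), 2)]
        levels += 1
    return inputs[0], levels

print(wide_and([1] * 8))  # (1, 3): an 8-input AND needs 3 levels of 2-input gates
```

The grouping ((A.B).(C.D)).((E.F).(G.H)) computed here is logically identical to the flat 8-input product, which is precisely what the associative property guarantees.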
1.2.3.9 Distributive Property
The term distributive describes how an operation on a parenthesized group of operations (or higher
precedence operations) can be distributed through each term. The following is the formal definition of
the distributive property. Fig. 1.12 shows the gate-level depiction of this theorem.
Distributive Property: An operation on a parenthesized operation(s), or higher precedence operator,
will distribute through each term.
The distributive property is used as a logic manipulation technique. It can be used to put a logic
expression into a form more suitable for direct circuit synthesis or to reduce the number of logic gates
necessary. Example in Fig. 1.13 shows how to use the distributive property to reduce the number of
gates in a logic circuit.
Figure 1.13: Using the distributive property to reduce the number of logic gates in a circuit.
1.2.3.10 Absorption
The term absorption refers to when multiple logic terms within an expression produce the same results.
This allows one of the terms to be eliminated from the expression, thus reducing the number of logic op-
erations. The remaining terms essentially absorb the functionality of the eliminated term. This theorem
is also called covering because the remaining term essentially covers the functionality of both itself and
the eliminated term. The following is the formal definition of the absorption theorem. Fig. 1.14 shows
the gate-level depiction of this theorem.
Absorption: When a term within a logic expression produces the same output(s) as another term,
the second term can be removed without affecting the result.
This theorem is better understood by looking at the evaluation of each term with respect to the original
expression. Example in Fig. 1.15 shows how the absorption theorem can be proven through proof by
exhaustion by evaluating each term in a logic expression.
1.2.3.11 Uniting
The uniting theorem, also called combining or minimization, provides a way to remove variables from an
expression when they have no impact on the outcome. This theorem is one of the most widely used tech-
niques for the reduction of the number of gates needed in a combinational logic circuit. The following
is the formal definition of the uniting theorem. Fig. 1.16 shows the gate-level depiction of this theorem.
Uniting: When a variable (B) and its complement (B’) appear in multiple product terms with a
common variable (A) within a logical OR operation, the variable B does not have any effect on the result
and can be removed.
This theorem can be proved using prior theorems. Example in Fig. 1.17 shows how the uniting
theorem can be proved using a combination of the distributive property, the complements theorem, and
the identity theorem.
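The uniting theorem can also be confirmed by exhaustion. The sketch below checks A.B + A.B' = A for every input code, mirroring the algebraic chain distributive -> complements -> identity.

```python
from itertools import product

# Uniting: A.B + A.B' = A.(B + B') = A.1 = A
ok = all(((a & b) | (a & (1 - b))) == a for a, b in product([0, 1], repeat=2))
print(ok)  # True
```

Since B and B' together cover every possibility, B genuinely carries no information here, which is why it can be removed without changing the function.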
De Morgan’s Theorem: An OR operation with both inputs inverted is equivalent to an AND opera-
tion with the output inverted. The dual: An AND operation with both inputs inverted is equivalent to an
OR operation with the output inverted.
This theorem is used widely in modern logic design because it bridges the gap between the design of
logic circuitry using Boolean algebra and the physical implementation of the circuitry using CMOS.
Recall that Boolean algebra is defined for only three operations, the AND, the OR, and the inversion.
CMOS, on the other hand, can only directly implement negative-type gates such as NAND, NOR, and
NOT. De Morgan’s theorem allows us to design logic circuitry using Boolean algebra and synthesize
logic diagrams with AND, OR, and NOT gates and then directly convert the logic diagrams into an
equivalent form using NAND, NOR, and NOT gates. Boolean algebra produces logic expressions in
two common forms. These are the sum of products (SOP) and the product of sums (POS) forms.
Using a combination of involution and De Morgan’s theorem, SOP and POS forms can be converted into
equivalent logic circuits that use only NAND and NOR gates. Example in Fig. 1.19 shows a process to
convert a sum of products form into one that uses only NAND gates.
Figure 1.19: Converting a sum of products form into one that uses only NAND gates.
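The NAND-only conversion can be checked numerically. The sketch below assumes an illustrative SOP expression F = A.B + C.D (not necessarily the one in the figure) and verifies that the NAND-NAND network computes the same function: double-inverting at the OR stage and applying De Morgan turns both AND gates and the OR gate into NANDs.

```python
from itertools import product

def nand(a, b):
    return 1 - (a & b)

# Sum of products form: F = A.B + C.D (an assumed example expression)
def sop(a, b, c, d):
    return (a & b) | (c & d)

# NAND-only equivalent: NAND(NAND(A,B), NAND(C,D))
# = NOT(NOT(A.B) . NOT(C.D)) = A.B + C.D by involution and De Morgan
def nand_only(a, b, c, d):
    return nand(nand(a, b), nand(c, d))

print(all(sop(*bits) == nand_only(*bits) for bits in product([0, 1], repeat=4)))  # True
```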
Example in Fig. 1.20 shows a process to convert a product of sums form into one that uses only NOR
gates.
Figure 1.20: Converting a product of sums form into one that uses only NOR gates.
De Morgan’s theorem can also be accomplished algebraically using a process known as breaking
the bar and flipping the operator. This process again takes advantage of the involution theorem, which
allows double negation without impacting the result. When using this technique in algebraic form,
involution takes the form of a double inversion bar. If an inversion bar is broken, the expression will
remain true as long as the operator directly below the break is flipped (AND to OR, OR to AND).
Example in Fig. 1.21 shows how to use this technique when converting an OR gate with its inputs
inverted into an AND gate with its output inverted.
Example in Fig. 1.22 shows how to use this technique when converting an AND gate with its inputs
inverted into an OR gate with its output inverted.
Figure 1.22: Using De Morgan’s theorem in algebraic form (2).
Table in Fig. 1.23 gives a summary of all the Boolean algebra theorems just covered. The theorems
are grouped in this table with respect to the number of variables that they contain. This grouping is the
most common way these theorems are presented.
1.2.4 Functionally Complete Operation Sets
A set of Boolean operators is said to be functionally complete when the set can implement all possible
logic functions. The set of operators AND, OR, and NOT is functionally complete because every other operation can be implemented using these three operators (i.e., NAND, NOR, BUF, XOR, XNOR). De Morgan's theorem showed us that all AND and OR operations can be replaced with NAND and NOR operators. This means that NAND and NOR operations could by themselves be functionally complete if
they could perform a NOT operation. Fig. 1.24 shows how a NAND gate can be configured to perform a
NOT operation. This configuration allows a NAND gate to be considered functionally complete because
all other operations can be implemented.
This approach can also be used on a NOR gate to implement an inverter. Fig. 1.25 shows how a
NOR gate can be configured to perform a NOT operation, thus also making it functionally complete.
Figure 1.25: Configuration to use a NOR gate as an inverter.
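The functional completeness of the NAND gate can be demonstrated directly: the sketch below builds NOT, AND, and OR out of a single nand primitive and checks all three exhaustively.

```python
from itertools import product

def nand(a, b):
    return 1 - (a & b)

# NOT: tie both NAND inputs together (the configuration of Fig. 1.24)
def not_(a):
    return nand(a, a)

# AND: a NAND followed by a NAND-based inverter
def and_(a, b):
    return not_(nand(a, b))

# OR: by De Morgan, invert both inputs and NAND them
def or_(a, b):
    return nand(not_(a), not_(b))

for a, b in product([0, 1], repeat=2):
    assert not_(a) == 1 - a
    assert and_(a, b) == (a & b)
    assert or_(a, b) == (a | b)
print("NAND alone implements NOT, AND, and OR")
```

Swapping nand for a NOR primitive (and dualizing the constructions) gives the corresponding demonstration for NOR.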
In-class Question 1: If the logic expression F = A.B.C.D.E.F.G.H is implemented with only 2-input
AND gates, how many levels of logic will the final implementation have? Hint: Consider using the
associative property to manipulate the logic expression to use only 2-input AND operations.
1. 2
2. 3
3. 4
4. 5
until the output of the system is reached and the final logic expression of the circuit has been found.
Consider the example of this analysis in Example 1.
Solution: First, let’s label each of the internal nodes of the circuit. We’ll call these nodes n1, n2,
and n3. Next, let’s insert the logic expression for each node working from the left to the right. Finally,
we can write the final output logic expression for F based on all of the prior internal node expressions.
Substitutions can be made within each expression to put the logic in terms of only the input variable
names (i.e., A, B, and C).
Example 2: Determining the truth table from a logic diagram:
Given: The following combinational logic diagram.
Solution: First, we label each internal node and record the intermediate logic expressions.
Next, we evaluate each node for all possible input codes working from the left to the right. This
allows us to keep a record of the values of each intermediate node that can be used in the subsequent
evaluations. We continue this process until we reach the final output F.
1.3.3 Timing Analysis of a Combinational Logic Circuit
Real logic gates have a propagation delay (tpd, tPHL, or tPLH) as discussed in week 1. Performing a
timing analysis on a combinational logic circuit refers to observing how long it takes for a change in
the inputs to propagate to the output. Different paths through the combinational logic circuit will take
different times to compute since they may use gates with different delays. When determining the delay
of the entire combinational logic circuit, we always consider the longest delay path. This is because this
delay represents the worst-case scenario. As long as we wait for the longest path to propagate through
the circuit, then we are ensured that the output will always be valid after this time. To determine which
signal path has the longest delay, we map out each and every path the inputs can take to the output of
the circuit. We then sum up the gate delay along each path. The path with the longest delay dictates the
delay of the entire combinational logic circuit. Consider this analysis shown in Example 3.
Solution: We begin by mapping the route of each and every path from the inputs to the output. For
each path, we sum the delay through each gate that is used.
The longest delay path through this circuit is from B to F, in which the signal traverses the inverter, the XOR gate, and the OR gate (tdelay-3). This path takes 8 ns to compute. Since we must always consider the longest delay path when calculating how fast this circuit can operate, we can say that the delay of this combinational logic circuit is 8 ns.
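The path-enumeration procedure can be sketched in a few lines. The gate delays and path lists below are illustrative assumptions in the spirit of Example 3, not values taken from it.

```python
# Sketch of a worst-case (critical path) timing analysis.
# Gate delays in ns -- assumed values for illustration only.
gate_delay = {"NOT": 1, "XOR": 4, "OR": 3, "AND": 2}

# Every input-to-output path, listed as the gates it traverses (assumed circuit).
paths = {
    "A -> F": ["AND", "OR"],
    "B -> F": ["NOT", "XOR", "OR"],
    "C -> F": ["XOR", "OR"],
}

# Sum the gate delays along each path; the longest path sets the circuit delay.
delays = {name: sum(gate_delay[g] for g in gates) for name, gates in paths.items()}
critical = max(delays, key=delays.get)
print(delays)                      # {'A -> F': 5, 'B -> F': 8, 'C -> F': 7}
print(critical, delays[critical])  # B -> F 8
```

Only after the critical-path time has elapsed is the output guaranteed valid, regardless of which inputs changed.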
In-class Question 2: Does the delay specification of a combinational logic circuit change based on
the input values that the circuit is evaluating?
1. Yes. There are times when the inputs switch between input codes that use paths through the circuit with different delays.
3. Yes. The delay can vary between the longest delay path and zero. A delay of zero occurs when the inputs switch between two input codes that produce the same output.
4. No. The output is always produced at a time equal to the longest delay path.
Figure 1.26: Definition and gate-level depiction of a minterm.
For an arbitrary truth table, a minterm can be used for each row corresponding to a true output. If each
of these minterms’ outputs are fed into a single OR gate, then a sum of products logic circuit is formed
that will produce the logic listed in the truth table. In this topology, any input code that corresponds to
an output of 1 will cause its corresponding minterm to output a 1. Since a 1 on any input of an OR gate
will cause the output to go to a 1, the output of the minterm is passed to the final result. Example 4
shows this process. One important consideration of this approach is that no effort has been taken to min-
imize the logic expression. This unminimized logic expression is also called the canonical sum. The
canonical sum is logically correct but uses the most amount of circuitry possible for a given truth table.
This canonical sum can be the starting point for minimization using Boolean algebra.
Solution: Let’s first start by writing the minterms for the rows that correspond to a 1 on the output.
These can then be implemented using inverters and AND gates. The final step is to feed the outputs of
each minterm circuit into a single OR gate.
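The minterm-to-circuit recipe is mechanical, which makes it easy to script. The sketch below builds the canonical-sum expression string (and an evaluator) from a minterm list; the two-variable list Σ(1, 2) used in Example 5 serves as the demonstration.

```python
from itertools import product

def canonical_sum(minterms, variables):
    """Build the canonical-sum expression string and an evaluator from a minterm list."""
    n = len(variables)
    terms = []
    for row in sorted(minterms):
        # Extract the row's bits, most significant variable first.
        bits = [(row >> (n - 1 - i)) & 1 for i in range(n)]
        # Complement each variable whose bit is 0, so the product is 1 only on this row.
        terms.append(".".join(v if b else v + "'" for v, b in zip(variables, bits)))
    expr = " + ".join(terms)
    f = lambda *ins: int(sum(ins[i] << (n - 1 - i) for i in range(n)) in minterms)
    return expr, f

# Using the minterm list F(A, B) = Sigma(1, 2)
expr, f = canonical_sum({1, 2}, ["A", "B"])
print(expr)                                             # A'.B + A.B'
print([f(a, b) for a, b in product([0, 1], repeat=2)])  # [0, 1, 1, 0]
```

As the text notes, this is the canonical (unminimized) form; minimization is a separate, later step.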
Let’s now check that this circuit performs as intended by testing it under each input code for A and
B and observing the output F.
Example 5: Creating a minterm list from a truth table:
Given: The following truth table.
An alternative form of a minterm list is shown below that does not use subscripts. This form is
sometimes used when a text editor does not support subscripts.
F (A, B) = Σ(1, 2)
A minterm list contains the same information as the truth table, the canonical sum and the canonical
sum of products logic diagram. Since the minterms themselves are formally defined for an input code,
it is trivial to go back and forth between the minterm list and these other forms. Example 6 shows how
a minterm list can be used to generate an equivalent truth table, canonical sum, and canonical sum of
products logic diagram.
F = ΣA,B,C (0, 3, 7)
Find: The truth table, canonical sum logic expression and the canonical sum of products logic
diagram.
Solution: First, let’s generate the truth table. From the minterm list subscripts, we know that there
are three input variables named A, B and C. These will be listed in the truth table with A in the most significant position and C in the least significant position. We can fill in the input codes as a binary count
and insert the row numbers. We can then list the output values that are true. From the minterm list we
know that the true outputs are on rows 0, 3 and 7. Since we know we will need the minterm expressions
for these rows in the canonical sum, we can also list them in the truth table.
The canonical sum is simply the minterm expressions corresponding to a true output OR’d together.
Since we already wrote the minterm expressions for rows 0, 3 and 7 (e.g., m0, m3, and m7) in the truth table, we can write the canonical sum directly.

F = A'.B'.C' + A'.B.C + A.B.C
The canonical sum of products logic diagram is simply the gate level depiction of the canonical sum.
When logic diagrams get larger, it is acceptable to indicate a variable’s complement as a prime instead
of placing individual inverters and drawing connection wires that cross each other. It is implied that multiple listings of a variable's complement (e.g., A' in m0 and m3) will come from the same inverter.
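As a check on Example 6, the sketch below evaluates the canonical sum F = A'.B'.C' + A'.B.C + A.B.C for all eight input codes and confirms that the true rows are exactly 0, 3, and 7.

```python
from itertools import product

# Canonical sum from Example 6: F = A'.B'.C' + A'.B.C + A.B.C
def F(a, b, c):
    na, nb, nc = 1 - a, 1 - b, 1 - c
    return (na & nb & nc) | (na & b & c) | (a & b & c)

# Row number = binary value of (A, B, C) with A most significant
rows_true = [4 * a + 2 * b + c for a, b, c in product([0, 1], repeat=3) if F(a, b, c)]
print(rows_true)  # [0, 3, 7]
```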
A maxterm is a sum (OR) term that is false for one and only one input code. The maxterm must contain every literal in its expression. Complements
are applied to the input variables as necessary in order to produce a false output for the individual input
code. Fig. 1.27 shows the definition and gate-level depiction of a maxterm expression. Each maxterm
can be denoted using the upper case “M” with the row number as a subscript.
For an arbitrary truth table, a maxterm can be used for each row corresponding to a false output.
If each of these maxterms' outputs are fed into a single AND gate, then a product of sums logic circuit
is formed that will produce the logic listed in the truth table. In this topology, any input code that
corresponds to an output of 0 will cause its corresponding maxterm to output a 0. Since a 0 on any input
of an AND gate will cause the output to go to a 0, the output of the maxterm is passed to the final result.
Example 7 shows this process. This approach is complementary to the sum of products approach. In the
sum of products approach based on minterms, the circuit operates by producing 1’s that are passed to the
output for the rows that require a true output. For all other rows, the output is false. A product of sums
approach based on maxterms operates by producing 0’s that are passed to the output for the rows that
require a false output. For all other rows, the output is true. These two approaches produce the equivalent
logic functionality. Again, at this point no effort has been taken to minimize the logic expression. This
unminimized form is called a canonical product. The canonical product is logically correct but uses the
most amount of circuitry possible for a given truth table. This canonical product can be the starting point
for minimization using the Boolean algebra theorems.
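Forming a maxterm is as mechanical as forming a minterm, with the complementing rule reversed. The sketch below generates the maxterm string for a given truth-table row; the rows (0, 3) match the two-variable maxterm list Π(0, 3) used in Example 8.

```python
def maxterm(row, variables):
    """Maxterm for one truth-table row: an OR of every literal, complemented
    so that the term evaluates to 0 only for that row's input code."""
    n = len(variables)
    bits = [(row >> (n - 1 - i)) & 1 for i in range(n)]
    # A variable appears uncomplemented where its bit is 0, so the OR
    # misses (is false for) only this row's input code.
    return "(" + " + ".join(v if b == 0 else v + "'" for v, b in zip(variables, bits)) + ")"

# Maxterms for a 2-input truth table with false outputs on rows 0 and 3
print(".".join(maxterm(r, ["A", "B"]) for r in (0, 3)))  # (A + B).(A' + B')
```

Note the complementing rule is the opposite of a minterm's: here a literal is complemented where its bit is 1, which is what forces the 0 output on exactly that row.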
Example 7: Creating a product of sums logic circuit using maxterms:
Given: The following truth table.
Solution: Let’s first start by writing the maxterms for the rows that correspond to a 0 on the output.
These can then be implemented using inverters and OR gates. The final step is to feed the outputs of
each maxterm circuit into a single AND gate.
Let’s now check that this circuit performs as intended by testing it under each input code for A and
B and observing the output F.
The circuit operates as intended.
Example 8: Creating a maxterm list from a truth table:
Given: The following truth table.
Solution:
An alternative form of a maxterm list is shown below that does not use subscripts.
F (A, B) = Π(0, 3)
A maxterm list contains the same information as the truth table, the canonical product, and the
canonical product of sums logic diagram. Example 9 shows how a maxterm list can be used to generate
these equivalent forms.
F = ΠA,B,C (1, 2, 4, 5, 6)
Find: The truth table, canonical product logic expression and the canonical product of sums
logic diagram.
Solution: First, let’s generate the truth table. From the maxterm list subscripts, we know that there
are three input variables named A, B and C that will be used in the truth table in that order. We can fill in
the input codes as a binary count and insert the row numbers. We can then list the output values that are
false. From the maxterm list we know that the false outputs are on rows 1, 2, 4, 5 and 6. Since we know
we will need the maxterm expressions for these rows in the canonical product, we can also list them in
the truth table.
The canonical product is simply the maxterm expressions corresponding to a false output AND'd together. Since we already wrote these maxterm expressions in the truth table (M1, M2, M4, M5 and M6), we can write the canonical product directly.
The canonical product of sums logic diagram is simply the gate level depiction of the canonical
product.
1.4.5 Minterm and Maxterm List Equivalence
Examples 6 and 9 illustrate how minterm and maxterm lists produce the exact same
logic functionality but in a complementary fashion. It is trivial to switch back and forth between minterm
lists and maxterm lists. This is accomplished by simply changing the list type (i.e., min to max, max to
min) and then switching the row numbers between those listed and those not listed. Example 10 shows
multiple techniques for representing equivalent logic functionality as a truth table.
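The list-type switch described above amounts to a set difference over the row numbers. The sketch below converts the minterm list of Example 6 into the maxterm list of Example 9.

```python
def minterms_to_maxterms(minterms, n_vars):
    """Switch list types: the maxterm rows are exactly the rows NOT in the minterm list."""
    return sorted(set(range(2 ** n_vars)) - set(minterms))

# F = Sigma_{A,B,C}(0, 3, 7)  <=>  F = Pi_{A,B,C}(1, 2, 4, 5, 6)
print(minterms_to_maxterms([0, 3, 7], 3))  # [1, 2, 4, 5, 6]
```

Running the same function on the result returns the original minterm rows, confirming the conversion is its own inverse.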
Find: All equivalent forms to describe the same functionality as the truth table.
Solution: Let’s start by writing the minterm list and the maxterm list. These two lists are equivalent
to each other. Remember that the minterm list provides the row numbers corresponding to an output of
true while the maxterm list provides the row numbers corresponding to an output of false.
Let’s write the minterm and maxterm expressions in the truth table. These will be used when creating
the canonical sum and product expressions.
Now let’s write the canonical sum and canonical product logic expressions using these minterms and
maxterms. Remember that a canonical sum is simply all of the minterms corresponding to an output of
true OR’d together, and a canonical product is simply all of the maxterms corresponding to an output of
false AND’d together.
F = A'.B' + A.B = (A + B').(A' + B)
Finally, let’s draw the canonical sum of products logic diagram and the canonical product of sums
logic diagram.
In-class Question 3: All logic functions can be implemented equivalently using either a canonical sum of products (SOP) or canonical product of sums (POS) topology. Which of these statements is true with respect to selecting the topology that requires the fewest gates?
1. Since a minterm list and a maxterm list can both be written to describe the same logic functionality,
the number of gates in an SOP and POS will always be the same.
2. If a minterm list has over half of its row numbers listed, an SOP topology will require fewer gates
than a POS.
3. A POS topology always requires more gates because it needs additional logic to convert the inputs
from positive to negative logic.
4. If a minterm list has over half of its row numbers listed, a POS topology will require fewer gates
than SOP.
Example 11: Minimizing a logic expression algebraically:
Given: The following truth table.
Solution:
The primary drawback of this approach is that it requires recognition of where the theorems can be
applied. This can often lead to missed minimizations and human error. Computer automation is often
the best mechanism to perform this minimization for large logic expressions.
Out of Curriculum Information (Will not be covered in exam)
One of the best and also free logic minimizers is Espresso, which was originally developed at IBM
by Robert K. Brayton et al. in 1982. You can download the latest version (2017) of the Espresso
software from GitHub.
Instructions (Assuming that you have Linux OS):
• $ cd espresso-logic-master/espresso-src
• $ cd espresso-logic-master/man
• $ man -l espresso.5
• Create text file design.in with the following content (This is the function in Example 11):
.i 3
.o 1
000 1
001 0
010 1
011 1
100 0
101 0
110 1
111 1
• In this file, .i 3 means there are 3 inputs and .o 1 means there is 1 output; in Espresso's output,
.p 2 means there are 2 product terms. A "0" represents an inverted input, a "1" represents a
non-inverted input, and a "-" represents an input that is not used in a given product term. For
example, 0-0 is A'C' and -1- is B; therefore, the minimized expression for the given function is A'C' + B.
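As a sanity check, Espresso's minimized result can be verified against the truth table in design.in. The short Python sketch below assumes the bit order ABC with A as the MSB, matching the row order of the file:

```python
# Output column of design.in for input rows 000 through 111, in order.
truth = [1, 0, 1, 1, 0, 0, 1, 1]

def minimized(a, b, c):
    # Espresso's result: 0-0 is A'C' and -1- is B, i.e. F = A'C' + B
    return (not a and not c) or b

# The minimized expression must match every row of the original truth table.
for row in range(8):
    a, b, c = (row >> 2) & 1, (row >> 1) & 1, row & 1
    assert int(bool(minimized(a, b, c))) == truth[row]
print("A'C' + B matches all 8 rows of design.in")
```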
1.5.2 Minimization Using Karnaugh Maps
A Karnaugh map (K-map) is a graphical way to minimize logic expressions. The technique is named
after Maurice Karnaugh, an American physicist who introduced the map in its current form in 1953 while
working at Bell Labs. The K-map puts a truth table into a form that allows logic minimization
through a graphical (visual) process.
This technique provides a graphical process that accomplishes the same result
as factoring variables via the distributive property and removing variables via the
complements and identity theorems. K-maps present a truth table in a form that
allows variables to be removed from the final logic expression in a graphical manner.
Figure 1.29: Formation of a 2-input K-map. Numbers in cells are truth table row numbers.
Figure 1.30: A 2-input function with its truth table and K-map.
When constructing a 3-input K-map, it is important to remember that each input code can only differ
from its neighbors by one bit. For example, the codes 01 and 10 differ by two bits (i.e., both the MSB
and the LSB are different), so they cannot be neighbors; however, the codes 01-11 and
11-10 can be neighbors. Consider the construction of a 3-input K-map shown in Fig. 1.31. The rows and
columns that correspond to the input literals can now span multiple rows and columns. The side edges of
the 3-input K-map are still considered neighbors because the input codes for these columns only differ
by one bit.
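The column and row ordering that makes K-map neighbors differ by exactly one bit is the reflected Gray code. A short Python sketch of how such a sequence can be generated and checked (the function name gray_code is illustrative):

```python
def gray_code(n):
    """Return the n-bit reflected Gray code sequence as binary strings."""
    if n == 1:
        return ["0", "1"]
    prev = gray_code(n - 1)
    # Prefix the previous sequence with 0, then its mirror image with 1.
    return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]

codes = gray_code(2)
print(codes)  # ['00', '01', '11', '10'] -- the K-map column order

# Every pair of neighbors, including the wrap-around at the edges,
# differs in exactly one bit position.
for i, code in enumerate(codes):
    nxt = codes[(i + 1) % len(codes)]
    assert sum(x != y for x, y in zip(code, nxt)) == 1
```

The wrap-around check is why the side edges of a K-map count as neighbors: the Gray code sequence is cyclic.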
Figure 1.31: Formation of a 3-input K-map. Numbers in cells are truth table row numbers.
This is an important attribute once we get to the minimization of variables because it allows us to
examine an input literal’s impact not only within the obvious adjacent cells but also when the variables
wrap around the edges of the K-map. Fig. 1.32 shows the 3-input function’s truth table beside the
K-map.
Figure 1.32: A 3-input function with its truth table and K-map.
When constructing a 4-input K-map, the same rule applies: input codes can only differ from
their neighbors by one bit. Consider the construction of a 4-input K-map in Fig. 1.33. In a 4-input
K-map, neighboring cells can wrap around the top-to-bottom edges in addition to the side-to-side
edges. Notice that all 16 cells are positioned within the map so that their neighbors on the top, bottom,
and sides only differ by one bit in their input codes.
Fig. 1.34 shows the 4-input function’s truth table beside the K-map.
Figure 1.34: A 4-input function with its truth table and K-map.
Figure 1.35: Observing how K-maps visually highlight logic minimizations.
These observations can be put into a formal process to produce a minimized SOP logic expression
using a K-map. The steps are as follows:
1. Circle groups of 1's in the K-map. Each circle can only contain a number of cells that is a
power of 2 (i.e., 1, 2, 4, 8, ...) and should be made as large as possible. Each circled group
of 1's is called a prime implicant.
2. Create a product term for each prime implicant following these rules:
• If the circle covers a region where the input variable is a 1, then include it in the product term
uncomplemented.
• If the circle covers a region where the input variable is a 0, then include it in the product term
complemented.
• If the circle covers a region where the input variable is both a 0 and 1, then the variable is
excluded from the product term.
3. OR all of the product terms together to form the minimized SOP expression.
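The rule for forming a product term can be sketched in code: a variable survives into the term only if it is constant across the whole circle. A minimal Python illustration (the variable names and the example group are hypothetical):

```python
def product_term(group, names=("A", "B", "C", "D")):
    """Derive the SOP product term for one circled group of 1-cells.

    `group` is a set of input-bit tuples covered by one circle. A variable
    appears uncomplemented if it is 1 everywhere in the circle, complemented
    if it is 0 everywhere, and is excluded if it takes both values.
    """
    width = len(next(iter(group)))
    term = []
    for i, name in enumerate(names[:width]):
        values = {cell[i] for cell in group}
        if values == {1}:
            term.append(name)
        elif values == {0}:
            term.append(name + "'")
        # if values == {0, 1}, the variable is excluded
    return ".".join(term) if term else "1"

# A circle covering cells ABC = 010 and 011: A is 0 everywhere, B is 1
# everywhere, and C takes both values, so the product term is A'.B
print(product_term({(0, 1, 0), (0, 1, 1)}))  # A'.B
```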
Let's apply this approach to our 2-input K-map example. The example in Fig. 1.36 shows the process
of finding a minimized sum of products logic expression for a 2-input logic circuit using a K-map. This
process yields the same SOP expression as the algebraic minimization and observations shown in Fig.
1.35, but with a formalized process.
Figure 1.36: Using a K-map to find a minimized sum of products expression (2-Input).
Let's now apply this process to our 3-input K-map example. The example in Fig. 1.37 shows the process of
finding a minimized sum of products logic expression for a 3-input logic circuit using a K-map. This
example shows circles that overlap. This is legal as long as one circle does not fully encompass another.
Overlapping circles are common since the K-map process dictates that circles should be drawn that group
the largest number of ones possible, as long as the group sizes are powers of 2. Forming groups of ones using
ones that have already been circled is perfectly legal to accomplish larger groupings. The larger the
grouping of ones, the more chance there is for a variable to be excluded from the product term. This
results in better minimization of the logic.
Figure 1.37: Using a K-map to find a minimized sum of products expression (3-input).
Let's now apply this process to our 4-input K-map example. The example in Fig. 1.38 shows the process
of finding a minimized sum of products logic expression for a 4-input logic circuit using a K-map.
Figure 1.38: Using a K-map to find a minimized sum of products expression (4-input).
1.5.2.4 Logic Minimization Using K-Maps (Product of Sums)
K-maps can also be used to create minimized product of sums logic expressions. This is the same concept
as how a minterm list and maxterm list each produce the same logic function, but in complementary
fashions. When creating a product of sums expression from a K-map, groups of 0’s are circled.
For each circle, a sum term is derived with a negation of variables similar to when forming a maxterm
(i.e., if the input variable is a 0, then it is included uncomplemented in the sum term and vice versa). The
final step in forming the minimized POS expression is to AND all of the sum terms together. The formal
process is as follows:
1. Circle groups of 0's in the K-map. Each circle can only contain a number of cells that is a
power of 2 and should be made as large as possible.
2. Create a sum term for each prime implicant following these rules:
• If the circle covers a region where the input variable is a 0, then include it in the sum term
uncomplemented.
• If the circle covers a region where the input variable is a 1, then include it in the sum term
complemented.
• If the circle covers a region where the input variable is both a 0 and 1, then the variable is
excluded from the sum term.
3. AND all of the sum terms together to form the minimized POS expression.
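The sum-term rule mirrors the product-term rule: a variable that is 0 throughout a circle of 0's enters the sum term uncomplemented, a variable that is 1 throughout enters complemented, and a variable that varies is excluded. A minimal Python illustration (the example group is hypothetical):

```python
def sum_term(group, names=("A", "B", "C", "D")):
    """Derive the POS sum term for one circled group of 0-cells.

    `group` is a set of input-bit tuples covered by one circle. Following the
    maxterm convention, a variable appears uncomplemented if it is 0 everywhere
    in the circle, complemented if it is 1 everywhere, and is excluded if it
    takes both values.
    """
    width = len(next(iter(group)))
    term = []
    for i, name in enumerate(names[:width]):
        values = {cell[i] for cell in group}
        if values == {0}:
            term.append(name)
        elif values == {1}:
            term.append(name + "'")
    return "(" + " + ".join(term) + ")"

# A circle of 0's covering cells AB = 01 and 11: B is 1 everywhere while A
# varies, so the sum term is (B')
print(sum_term({(0, 1), (1, 1)}))  # (B')
```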
Let's apply this approach to our 2-input K-map example. The example in Fig. 1.39 shows the process of
finding a minimized product of sums logic expression for a 2-input logic circuit using a K-map. Notice
that this process yields the same logic expression as the SOP approach shown in Fig. 1.36.
This illustrates that both the POS and SOP expressions produce the correct logic for the circuit.
Figure 1.39: Using a K-map to find a minimized product of sums expression (2-input).
Let's now apply this process to our 3-input K-map example. The example in Fig. 1.40 shows the process of
finding a minimized product of sums logic expression for a 3-input logic circuit using a K-map. Notice
that the logic expression in POS form is not identical to the SOP expression found in Fig. 1.37;
however, a few steps of algebraic manipulation show that the POS expression can be put into a
form that is identical to the prior SOP expression. This illustrates that the POS and SOP forms produce
equivalent functionality for the circuit.
Figure 1.40: Using a K-map to find a minimized product of sums expression (3-input).
Let's now apply this process to our 4-input K-map example. The example in Fig. 1.41 shows the process of
finding a minimized product of sums logic expression for a 4-input logic circuit using a K-map.
Figure 1.41: Using a K-map to find a minimized product of sums expression (4-input).
1.5.2.5 Minimal Sum
One situation that arises when minimizing logic using a K-map is that some of the prime implicants may
be redundant. Consider the example in Fig. 1.42.
We need to define a formal process for identifying redundant prime implicants that can be removed
without impacting the result of the logic expression. Let’s start with examining the sum of products
form.
First, we define the term essential prime implicant as a prime implicant that cannot be removed
from the logic expression without impacting its result. We then define the term minimal sum as a logic
expression that represents the most minimal set of logic operations to accomplish a sum of products form.
There may be multiple minimal sums for a given truth table, but each would have the same number of
logic operations.
In order to determine whether a prime implicant is essential, we first put every possible prime
implicant into the K-map. This gives a logic expression known as the complete sum. From this point we
identify any cells that are covered by only one prime implicant. These cells are called distinguished
one-cells. Any prime implicant that covers a distinguished one-cell is defined as an essential prime
implicant. All prime implicants that are not essential are removed from the K-map. A minimal sum is then
simply the sum of all remaining product terms associated with the essential prime implicants. The example
in Fig. 1.43 shows how to use this process.
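The distinguished one-cell test lends itself to a short sketch: a row covered by exactly one prime implicant marks that implicant as essential. The Python illustration below uses hypothetical prime implicants, not the ones from Fig. 1.43:

```python
def essential_primes(prime_implicants):
    """Identify essential prime implicants from a complete sum.

    `prime_implicants` maps an implicant name to the set of truth-table rows
    it covers. A row covered by exactly one implicant is a distinguished
    one-cell, and any implicant covering such a cell is essential.
    """
    essential = set()
    all_rows = set().union(*prime_implicants.values())
    for row in all_rows:
        covering = [p for p, rows in prime_implicants.items() if row in rows]
        if len(covering) == 1:          # distinguished one-cell
            essential.add(covering[0])
    return essential

# Hypothetical 3-input example: rows 2 and 5 are distinguished one-cells,
# so A'.B and A.C are essential while B.C is nonessential.
primes = {"A'.B": {2, 3}, "B.C": {3, 7}, "A.C": {5, 7}}
print(sorted(essential_primes(primes)))  # B.C does not appear
```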
This process is identical for the product of sums form to produce the minimal product.
1.5.3 Don't Cares

Figure 1.44: Using don't cares to produce a minimal SOP logic expression.
1.5.4 Using XOR Gates
While Boolean algebra does not include the exclusive-OR and exclusive-NOR operations, XOR and
XNOR gates do indeed exist in modern electronics. They can be a useful tool to implement logic cir-
cuitry with fewer operations, sometimes even compared to a minimal sum or product synthesized using
the techniques just described. An XOR/XNOR operation can be identified by putting the values from a
truth table into a K-map. The XOR/XNOR operations result in a characteristic checkerboard pattern
in the K-map. Consider the patterns for XOR and XNOR gates shown in Figs. 1.45, 1.46, and 1.47.
Anytime one of these patterns is observed, it indicates an XOR/XNOR gate.
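The checkerboard pattern corresponds to the parity of the inputs: an n-input XOR is 1 when an odd number of inputs are 1, and an XNOR is its complement. A short Python check (the function name is illustrative):

```python
from itertools import product

def xor_pattern(f, n):
    """Classify an n-input function as XOR (odd parity), XNOR (even parity),
    or neither -- the checkerboard test seen in a K-map."""
    is_xor = all(f(*bits) == sum(bits) % 2 for bits in product([0, 1], repeat=n))
    is_xnor = all(f(*bits) == 1 - sum(bits) % 2 for bits in product([0, 1], repeat=n))
    return "XOR" if is_xor else "XNOR" if is_xnor else None

print(xor_pattern(lambda a, b, c: a ^ b ^ c, 3))    # XOR
print(xor_pattern(lambda a, b: 1 - (a ^ b), 2))     # XNOR
print(xor_pattern(lambda a, b: a & b, 2))           # None (no checkerboard)
```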
Figure 1.45: XOR and XNOR checkerboard patterns observed in K-maps (2-input).
Figure 1.46: XOR and XNOR checkerboard patterns observed in K-maps (3-input).
Figure 1.47: XOR and XNOR checkerboard patterns observed in K-maps (4-input).
In-class Question 4: Logic minimization is accomplished by removing variables from the original
canonical logic expression that don’t impact the result. How does a Karnaugh map graphically show
what variables can be removed?
1. K-maps contain the same information as a truth table but the data is formatted as a grid. This
allows variables to be removed by inspection.
2. K-maps rearrange a truth table so that adjacent cells have one and only one input variable changing
at a time. If adjacent cells have the same output value when an input variable is both a 0 and a 1,
that variable has no impact on the interim result and can be eliminated.
3. K-maps list both the rows with outputs of 1’s and 0’s simultaneously. This allows minimization to
occur for a SOP and POS topology that each has the same, but minimal, number of gates.
4. K-maps display the truth table information in a grid format, which is a more compact way of
presenting the behavior of a circuit.
In-class Question 5: A "Don’t Care" can be used to minimize a logic expression by assigning the
output of a row to either a 1 or a 0 in order to form larger groupings within a K-map. How does the
output of the circuit behave when it processes the input code for a row containing a don’t care?
1. The output will be whatever value was needed to form the largest grouping in the K-map.
3. The output can toggle between a 0 and a 1 when this input code is present.
1.6 Timing Hazards and Glitches
Figure 1.49: Examining the source of a timing hazard (or glitch) in a combinational logic circuit.
Figure 1.50: Timing hazard definitions.
Timing hazards can be addressed in a variety of ways. One way is to try to match the propagation
delays through each path of the logic circuit. This can be difficult, particularly in modern logic families
such as CMOS. In the example in Fig. 1.49, the root cause of the different propagation delays was due
to an inverter on one of the variables. It seems obvious that this could be addressed by putting buffers
with delays equal to the inverter's on the other inputs. All input codes would then arrive at the first
stage of AND gates at the same time, regardless of whether they were inverted, eliminating the hazards;
however, CMOS implements a buffer as two inverters in series, so it is difficult to insert a buffer
whose delay equals that of a single inverter. Addressing timing hazards in this
way is possible, but it involves a time-consuming and tedious process of adjusting the transistors used
to create the buffer and inverter to have equal delays. Another technique to address timing hazards is to
place additional circuitry in the system that will ensure the correct output while the input codes switch.
Consider how including a nonessential prime implicant can eliminate a timing hazard in the example
shown in Fig. 1.51. In this approach, the minimal sum from Fig. 1.49 is replaced with the
complete sum. The use of the complete sum instead of the minimal sum can be shown to eliminate both
static and dynamic timing hazards. The drawback of this approach is the addition of extra circuitry in
the combinational logic circuit (i.e., nonessential prime implicants).
Figure 1.51: Eliminating a timing hazard by including nonessential product terms.
In-class Question 6: How long do you need to wait for all hazards to settle out?
1. The time equal to the delay through the nonessential prime implicants.
2. The time equal to the delay through the essential prime implicants.
3. The time equal to the shortest delay path in the circuit.
4. The time equal to the longest delay path in the circuit.
1.7 Assignments
1. Use proof by exhaustion to prove that an OR gate with its inputs inverted is equivalent to an AND
gate with its output inverted.
2. Use proof by exhaustion to prove that an AND gate with its inputs inverted is equivalent to an OR
gate with its output inverted.
3. For the logic diagram given in Fig. 1.52, give the logic expression for the output F.
4. For the logic diagram given in Fig. 1.52, give the truth table for the output F.
5. For the logic diagram given in Fig. 1.52, give the delay.
6. For the logic diagram given in Fig. 1.53, give the logic expression for the output F.
7. For the logic diagram given in Fig. 1.53, give the truth table for the output F.
8. For the logic diagram given in Fig. 1.53, give the delay.
9. For the 2-input truth table in Fig. 1.54, give the canonical sum of products (SOP) logic expression.
Figure 1.54: Table for Question 9.
10. For the 3-input truth table in Fig. 1.55, give the canonical sum of products (SOP) logic expression.
11. For the 3-input truth table in Fig. 1.55, give the minterm list.
12. For the 3-input truth table in Fig. 1.55, give the canonical product of sums (POS) logic expression.
13. For the following 3-input minterm list, give the canonical sum of products (SOP) and canonical
product of sums (POS) logic expression.
F = ΣA,B,C (2, 4, 6)
14. For the 4-input truth table in Fig. 1.56, give the canonical sum of products (SOP) logic expression.
Figure 1.56: Truth Table for Questions 14-18.
15. For the 4-input truth table in Fig. 1.56, give the canonical sum of products (SOP) logic diagram.
16. For the 4-input truth table in Fig. 1.56, give the minterm list.
17. For the 4-input truth table in Fig. 1.56, give the canonical product of sums (POS) logic expression.
18. For the 4-input truth table in Fig. 1.56, give the maxterm list.
19. For the 2-input truth table in Fig. 1.57, use a K-map to derive a minimized sum of products (SOP)
logic expression.
20. For the 2-input truth table in Fig. 1.57, use a K-map to derive a minimized product of sums (POS)
logic expression.
21. For the 3-input truth table in Fig. 1.58, use a K-map to derive a minimized product of sums (POS)
logic expression.
Figure 1.58: Truth Table for Questions 21-22.
22. For the 3-input truth table in Fig. 1.58, use a K-map to derive a minimized sum of products (SOP)
logic expression.
23. For the 4-input truth table in Fig. 1.59, use a K-map to derive a minimized sum of products (SOP)
logic expression.
24. For the 4-input truth table in Fig. 1.59, use a K-map to derive a minimized product of sums (POS)
logic expression.
25. For the 4-input truth table and K-map in Fig. 1.60, give the minimal sum of products (SOP) logic
expression by exploiting "don’t cares".
26. For the 4-input truth table and K-map in Fig. 1.60, give the minimal product of sums (POS) logic
expression by exploiting "don’t cares".
27. For the 3-input truth table in Fig. 1.61, give the product term that helps eliminate static-1 timing
hazards in this circuit.
28. For the 3-input truth table in Fig. 1.61, give the sum term that helps eliminate static-0 timing
hazards in this circuit.