We can always use the rules of Boolean algebra from Section
2.1 to simplify an expression, but this method has a few problems. For one
thing, there is no algorithm you can use to determine that you've obtained
a minimum solution. When do you stop looking for a simplification theorem
to apply? Another problem is that you often have to make the expression
more complicated before you can simplify it. In our examples, we sometimes
substituted an expression of the form X + X' for 1 to add more terms. Then we rearranged the terms
to obtain advantageous groupings that helped to simplify the expression
in a later step. It is against human nature to climb out of a "local
minimum" in the hope of finding a better global solution. But this
is exactly what we often have to do. And finally, it is just too cumbersome (and error prone) to manipulate Boolean expressions by hand.
Given that computer-based tools have been developed for Boolean simplification, why bother to learn any hand method, especially when these break down for problems with many variables? Certainly no hand method will be effective for equations involving more than six variables. But you still need knowledge of the basic approach. Observing the symmetries in a circuit's function helps to understand its behavior and to visualize what is going on inside the circuit. As CAD tools become ever more sophisticated, you need a deeper knowledge of algorithms they apply to use the tools effectively. And don't forget that CAD tools are written by mere mortals and do not always do the correct things! You must still be able to check the output of the tool.
The Essence of Boolean Simplification
Let's look at what is really going on in simplification with the simple truth table of Figure 2.24(a). The function is F = AB' + AB. We can simplify this equation by applying one of the Boolean simplification theorems, called the uniting theorem: F = AB' + AB = A(B' + B) = A. Notice that the two truth table rows that make up the on-set of F have A asserted, while one row has B = 0 and the other has B = 1. For a subset of the on-set (in this case, the whole on-set), A's value stays the same while B's value changes. This allows us to factor out B using the uniting theorem.

Now examine Figure 2.24(b). The function is G = A'B' + AB'. Applying the uniting theorem again, we obtain G = (A' + A)B' = B'. Once again, the on-set contains two rows in which B's value does not change (it is equal to 0) and A's does change. Thus we can factor out A, leaving B'.

The essence of simplification is repeatedly to find two-element subsets of the on-set in which only one variable changes its value while the other variables do not. You can eliminate the single varying variable from the term by applying the uniting theorem.

Wouldn't it be nice if there were a way to arrange the truth table rows so that the entries to which this technique could be applied were obvious? We shall present just such a representation, the Boolean cube, next.
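To make the pairing idea concrete, here is a small Python sketch, added for illustration (it is not from the original text). It applies the uniting theorem mechanically: two on-set rows that differ in exactly one variable are merged, and the variable that changes is dropped.

    # Each term is a tuple of variable values, e.g. (A, B); None marks an
    # eliminated variable. The uniting theorem X Y + X Y' = X lets us merge
    # two terms that differ in exactly one position.
    def unite(t1, t2):
        """Merge two terms if they differ in exactly one variable, else return None."""
        diffs = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
        if len(diffs) != 1:
            return None
        merged = list(t1)
        merged[diffs[0]] = None
        return tuple(merged)

    # On-set of F from Figure 2.24(a): the rows (A=1, B=0) and (A=1, B=1).
    print(unite((1, 0), (1, 1)))   # -> (1, None), i.e. the single literal A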
Boolean Cubes
A cube is usually defined as a solid object bounded by six equal squares. The concept can be generalized to dimensions other than three: a two-dimensional cube is a square, a one-dimensional cube is a line, and a zero-dimensional cube is a point. We can represent the truth table of an n-input Boolean function as a "cube" in an n-dimensional Boolean space. There is one axis for each variable, over which it can take on exactly two values: 0 or 1. Figure 2.25 shows the form of Boolean 1-, 2-, 3-, and 4-cubes. Each node in the figure is labeled with its coordinates in the n-dimensional space. The axes are listed in alphabetical order, from highest to lowest order. The structure generalizes beyond four dimensions, but such cubes are rather hard to visualize.

Examples Mapping Truth Tables onto Cubes
Now let's examine how to map a truth table onto a cube, using the simple examples of Figure 2.24. The elements of the on-set are represented by black nodes and those of the off-set by white nodes (don't cares are represented by X nodes, although we do not have any in this example). Figure 2.26 shows the mapping. Observe that the elements of the functions' on-sets are next to each other in the Boolean cube. This tells you that the uniting theorem can be used to eliminate a variable. In the figure, we have circled elements of the on-set that are directly adjacent. We call such a circled group of nodes an adjacency plane. Each adjacency plane corresponds to a product term. In Figure 2.26(a), the circled nodes yield the term A; in Figure 2.26(b), the term is B'.

You can think about these adjacencies as distances between nodes in the Boolean cube. Two nodes are distance 1 apart if they are connected by an edge in the cube. They are distance 2 apart if they are separated by a path of two connected edges; in this case, the two nodes lie on the same two-dimensional plane. Two nodes are distance 3 apart if they are separated by a path of three connected edges and no shorter path exists between them; then the nodes lie in the same three-dimensional cube.
In the on-set/adjacency plane of Figure 2.26(a), A's value stays 1 while B's varies between 0 and 1. This is an immediate clue that the uniting theorem can reduce the function to the single literal A. Similarly, in Figure 2.26(b) the adjacency plane is circled, and the nodes involved have B retaining its value while A varies. The uniting theorem reduces the expression to the term B'.

As an example of a three-variable function and its mapping onto a 3-cube, let's return to the full adder's carry-out function examined in Section 2.2.1. The truth table and its mapping onto a 3-cube are shown in Figure 2.27. The on-set is arranged on three one-dimensional adjacency planes: the edges containing the nodes 011-111, 111-101, and 110-111. In the first segment, A varies between 0 and 1 while B and Cin remain asserted and unvarying. This reduces to the term B Cin. For the second segment, B varies, yielding the term A Cin. In the final segment, Cin varies, and the resulting term is AB.

This method lets us obtain the final expression much more quickly than using Boolean algebra. Note that each adjacency plane contributes one product term to the final expression. Since each plane is an edge in the three-dimensional cube, the two 3-variable min-terms in the plane are reduced to a single 2-variable product term. The final expression becomes Cout = B Cin + A Cin + AB.
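The carry-out expression is easy to check by brute force. The following sketch is an addition of mine, not part of the original text; it enumerates the eight full-adder input rows and confirms that the minimized expression matches the arithmetic definition of the carry.

    # Verify Cout = B Cin + A Cin + A B against the arithmetic carry-out.
    for A in (0, 1):
        for B in (0, 1):
            for Cin in (0, 1):
                carry = (A + B + Cin) >= 2                       # arithmetic definition
                expr = (B and Cin) or (A and Cin) or (A and B)   # minimized expression
                assert bool(expr) == carry
    print("Cout = B Cin + A Cin + A B holds for all 8 rows")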
Adjacencies of Higher Dimensions What about adjacency planes of higher dimensionality than single edges, such as a two-dimensional plane within a 3-cube? Consider the function depicted in the 3-cube of Figure 2.28.
One entire face of the cube is included in the on-set. Intuitively, we should expect this surface to reduce to the single literal A, because A remains asserted while all of the other variables vary between 0 and 1 within the surface.

Let's verify that this is the case. The four line segments on the surface are denoted by the nodes 110-111, 101-111, 100-101, and 100-110. Applying the uniting theorem to each segment independently yields the terms AB, AC, AB', and AC', respectively. We can continue to apply the uniting theorem: AB + AB' = A and AC + AC' = A, so the entire face reduces to the single literal A.
In the 3-cube, if the on-set is a two-dimensional plane, it contributes a single 1-variable product term to the expression for the function.
For the 3-cube, the relationship between the dimensionality of the adjacency plane and the term it represents is the following:
- A zero-dimensional adjacency plane, a single node, yields a three-literal product term. For example, 101 = AB'C. This is the same as a min-term.
- A one-dimensional plane, an edge, yields a two-literal product term. For example, 100-101 = AB'.
- A two-dimensional plane yields a one-literal product term. For example, 100-101-111-110 = A.
- A three-dimensional plane, the whole cube, yields a term with no literals in it; that is, it reduces to the constant logic 1.
The fewer planes needed to include all of the 1's in the truth table, the fewer the terms in the function's final expression. Planes of higher dimensionality generate terms with fewer literals in them. Thus, computer-aided design algorithms for minimization attempt to find the smallest number of the highest-dimensionality adjacency planes that contain all the nodes of the function's on-set (perhaps including elements of the don't-care set as well). This process is called finding a minimum cover of the function.
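To make the dimension-to-literal relationship concrete, here is a small added sketch (not from the original text). A subcube of the n-cube can be written as a string over 0, 1, and '-', where '-' marks a variable that varies inside the adjacency plane; the corresponding product term then has n minus the number of dashes literals.

    # Convert a subcube such as '10-' into its product term.
    def product_term(cube, names="ABC"):
        literals = []
        for value, name in zip(cube, names):
            if value == "1":
                literals.append(name)            # asserted literal
            elif value == "0":
                literals.append(name + "'")      # complemented literal
        return " ".join(literals) if literals else "1"

    for cube in ("101", "10-", "1--", "---"):
        print(cube, "-> dimension", cube.count("-"), "-> term", product_term(cube))
    # 101 -> A B' C, 10- -> A B', 1-- -> A, --- -> constant 1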
K-Map Method
The cube notation makes obvious the adjacencies among truth table rows. Adjacencies provide visual clues as to where to apply the uniting theorem to elements of the on-set that are distance 1 (a line), 2 (a square), 3 (a cube), and so on apart within the n-dimensional cube that represents the function. The problem for humans is the difficulty of visualizing adjacencies in more than three dimensions. To circumvent this problem, at least for expressions of up to six variables, we introduce an alternative reformulation of the truth table called the Karnaugh map, or K-map.

General Concept A K-map for a Boolean function specifies the value of the function for every combination of the function's variables. Just like a truth table, an n-variable K-map has 2^n entries. Rather than the linear enumeration of the truth table entries, a K-map has two and sometimes three dimensions, indexed by the values of the input variables.
Figure 2.29 shows K-map templates for two-, three-, and four-variable Boolean functions. The entries are labeled with the decimal equivalent of the binary number formed by joining the column index with the row index. For example, entry 3 in the three-variable K-map is labeled by the column AB = 01 and the row C = 1 (ABC = 011 in binary, which is 3). The labels are included only for your convenience in filling in the K-map from the truth table: they correspond to the row number of the associated truth table entry.

The only surprising thing is the ordering of the indices: 00, 01, 11, 10. This is called a Gray code. It has the property that when advancing from one index to the next adjacent index, only a single bit changes value. This property does not hold for the standard binary sequence 00, 01, 10, 11.
The structure of the K-map is chosen so that any two adjacent (horizontal or vertical, but not diagonal) elements are distance 1 apart in the equivalent cube representation; that is, they share a common edge. This is shown in Figure 2.30 for a three-variable K-map and the corresponding three-dimensional Boolean cube. Note that K-map square 0 is adjacent to squares 1, 2, and 4. The K-map actually folds back on itself: the elements in the rightmost column are adjacent to the elements in the leftmost column, and the elements in the top row are adjacent to the elements in the bottom row.

Example Two-Variable Maps
Figure 2.31 shows the two-variable maps for the example functions F and G of Figure 2.24. The K-map is filled in directly from the truth table. Each truth table row corresponds to a K-map entry: the values of the variables are the indices, and the truth table's output value is placed into the K-map square with the corresponding index.

In Figure 2.31(a), the terms of the function are AB' and AB, as denoted by the 1's in the A = 1, B = 0 and A = 1, B = 1 map entries. We can apply the uniting theorem to reduce this to the single literal A. The K-map shows immediately that the two entries are adjacent: the A value remains unchanged while the B value varies from 0 to 1. Looking at this group should tell you that the B term can be eliminated, leaving us with A.

The same analysis holds for Figure 2.31(b). The function is G = A'B' + AB', and its on-set is row adjacent in the K-map. This demonstrates the advantage of the K-map over the truth table for recognizing adjacencies. A varies from 0 to 1 while B holds at 0 for this K-map row. We can reduce the function to the single literal B'.

Example Three-Variable Maps
Figure 2.32 shows the three-variable K-map for the full adder carry-out function of Figure 2.27. You can see that three different two-element adjacencies cover the on-set (recall that adjacency is not defined for diagonal entries). The first is the column indexed by A = 1, B = 1. Since Cin varies from 0 to 1 within this column, it can be eliminated, yielding the term AB. The second is the adjacency indexed by A = 0, B = 1, Cin = 1 and A = 1, B = 1, Cin = 1. A varies while B and Cin remain unchanged, yielding the term B Cin. Observe that the bar labeled B along the bottom of the K-map tells you that B remains unchanged within the middle two columns (where it is 1) and within the two outer columns (where it is 0). The final adjacency is indexed by A = 1, B = 1, Cin = 1 and A = 1, B = 0, Cin = 1. B varies while A and Cin remain unchanged, resulting in the term A Cin. Once again, the labeled bar at the top of the K-map reminds us that A remains unchanged within the last two columns and within the first two columns. The final expression is AB + B Cin + A Cin. There is one term in the reduced expression for each circled adjacency group in the K-map.

Let's revisit the function of Figure 2.28. Its K-map is given in Figure 2.33. The four elements of the on-set are adjacent, and we can circle them all. Within this grouping, both B and C vary while A remains asserted, so the function reduces to the single literal A.

Another case of adjacency is illustrated by Figure 2.34, which shows the K-map for the function F(A,B,C) = m0 + m4 + m5 + m7. Recall that the leftmost and rightmost columns of the K-map are adjacent. Thus we can combine m0 (A'B'C') and m4 (AB'C') to form the term B'C'. We can also combine m5 and m7 to form the term AC. Thus, F = B'C' + AC.

You might be tempted to circle the terms m4 and m5, as they are also adjacent. But you obtain the most reduced solution only by finding the smallest number of the largest possible adjacency groups that completely cover the on-set. Recall that the number of groups equals the number of product terms, and that larger groupings have a smaller number of literals. The term formed from m4 and m5 is redundant because both entries are already covered by other terms. We will become more formal about the process of obtaining the minimum solution a bit later in this section.

K-maps also provide a good mechanism for finding the reduced form of the complement very quickly. Figure 2.35 contains the K-map for the complement of the function of Figure 2.34; all we have done is to replace the 0's with 1's and vice versa. The complement can be read out immediately as F' = A'C + BC'. Contrast this with deriving the complement by Boolean algebra: F' = (B'C' + AC)' = (B'C')'(AC)' = (B + C)(A' + C') = A'B + BC' + A'C + CC' = A'B + A'C + BC'. The term A'B is redundant, since every 1 it covers is already covered by A'C or BC', so the reduced complement is F' = A'C + BC'. The K-map yields the result much more quickly!
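Both readings of Figure 2.34 can be double-checked exhaustively. The sketch below is my own addition; it confirms that B'C' + AC reproduces the on-set {m0, m4, m5, m7} and that A'C + BC' is its complement.

    # Figure 2.34: F(A,B,C) = Σ m(0,4,5,7).
    on_set = {0, 4, 5, 7}
    for m in range(8):
        A, B, C = (m >> 2) & 1, (m >> 1) & 1, m & 1
        f = ((not B) and (not C)) or (A and C)          # F  = B'C' + AC
        f_comp = ((not A) and C) or (B and (not C))     # F' = A'C + BC'
        assert bool(f) == (m in on_set)
        assert bool(f_comp) == (m not in on_set)
    print("F and F' verified on all 8 rows")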
Example Four-Variable Maps
Now let's consider the four-variable function F(A,B,C,D) = Σ m(0,2,3,5,6,7,8,10,11,14,15). The K-map is shown in Figure 2.36. Remember that the strategy is to cover the on-set with as few groups as possible, so we must try to find large groups of adjacency. Also, don't forget that the number of elements in an adjacency group is always a power of 2.

The elements m2, m3, m6, m7, m10, m11, m14, m15 form an adjacency group of eight. This collapses to the single literal C. (Recall that a three-dimensional plane within a four-dimensional cube yields a term with 4 - 3 = 1 literal.) The elements m5 and m7 result in the term A'BD (a one-dimensional plane in a four-dimensional space results in a term with 4 - 1 = 3 literals). The final grouping involves the corner terms m0, m2, m8, and m10. To see this adjacency, you must recognize that the map folds back on itself. Examining Figure 2.37 should make this clearer. In the figure, look for the min-term indices 0000, 0010, 1010, and 1000. The corner elements reduce to the term B'D' (a two-dimensional plane within a four-dimensional space results in a term with 4 - 2 = 2 literals). The minimized form of the function is F = C + A'BD + B'D'.
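As a sanity check, the three circled groups can be verified to reproduce exactly the specified on-set. This is an added sketch, not part of the original text.

    # Figure 2.36: F(A,B,C,D) = Σ m(0,2,3,5,6,7,8,10,11,14,15); claim F = C + A'BD + B'D'.
    on_set = {0, 2, 3, 5, 6, 7, 8, 10, 11, 14, 15}
    for m in range(16):
        A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        f = C or ((not A) and B and D) or ((not B) and (not D))
        assert bool(f) == (m in on_set)
    print("F = C + A'BD + B'D' matches the truth table")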
Finding the Minimum Product of Sums Form
The K-map can also be used to find a function's minimum product of sums expression. In this case we search for elements of the off-set, simply circling the maximal adjacent groups of 0's. We interpret the indices in a fashion complementary to the procedure for finding the minimum sum of products expression: if the variable that is left unchanged in a grouping of 0's has index 0, then that variable contributes an asserted literal to the sum term; if the index is 1, it contributes a complemented literal.

This method works because we begin by solving for the function's complement in sum of products form, by circling the 0's. Then we apply DeMorgan's theorem to get a product of sums expression by interpreting the indices as complements. Let's look at an example.
Let's reconsider the K-map in Figure 2.36. The complement F', in minimum sum of products form, is found by circling the K-map's 0's rather than its 1's: F' = BC'D' + B'C'D + AC'D. By applying DeMorgan's theorem, we get F in product of sums form: F = (B' + C + D)(B + C + D')(A' + C + D').

Figure 2.38 shows the same K-map as Figure 2.36, but this time with the 0's circled. You can see that three groups of two 0's each can be found. Since these are one-dimensional adjacency groups in a four-dimensional space, there are three literals in each term. The term formed from M4 and M12 is (B' + C + D): B, C, and D remain unchanged, and B's index is 1 while C's and D's are 0. This is just shorthand for applying DeMorgan's theorem to the term BC'D' of the complement. The terms formed from M1, M9 and from M9, M13 are (B + C + D') and (A' + C + D'), respectively. Each term is ANDed to form the final expression F = (B' + C + D)(B + C + D')(A' + C + D').

Don't Cares in K-maps
The last wrinkle we consider is the use of don't cares within the K-map. Figure 2.39 shows a K-map for the function F(A,B,C,D) = Σ m(1,3,5,7,9) + Σ d(6,12,13).
The group of four elements reduces to A'D. If we assume that the X's are all 0, we can cover the remaining member of the on-set, m9, with the two-element term B'C'D. However, if we assume that the element d13 is 1, we can form a larger adjacency group that yields the term C'D. This has fewer literals. So the values of don't cares should be selected to maximize the size of the adjacency groups. The expression for F becomes F = A'D + C'D.

In product of sums form, the shorthand expression is written as F(A,B,C,D) = Π M(0,2,4,8,10,11,14,15) · Π D(6,12,13).
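The role of the don't cares can also be checked mechanically. The added sketch below confirms that A'D + C'D is 1 on every required row, 0 on every row of the off-set, and is otherwise constrained only on the three don't-care rows.

    # F(A,B,C,D) = Σ m(1,3,5,7,9) + Σ d(6,12,13); chosen cover F = A'D + C'D.
    on_set, dc_set = {1, 3, 5, 7, 9}, {6, 12, 13}
    for m in range(16):
        A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        f = ((not A) and D) or ((not C) and D)
        if m in on_set:
            assert f                # every required 1 is covered
        elif m not in dc_set:
            assert not f            # no 0 of the function is covered
    print("A'D + C'D is consistent with the on-set, off-set, and don't cares")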
Figure 2.40 shows the groupings we use to find the minimum product of sums form. We form a group of eight 0's (remember that the top and bottom rows are adjacent) and one group of four 0's, making judicious use of the don't cares (D6 and D12 are treated as 0). The expression for F becomes F = D (A' + C').

Design Example: Two-Bit Comparator
Now we are ready to examine a fairly substantial design example using the methods we have seen so far. The goal is to design a circuit that takes as input two 2-bit numbers for comparison, N1 and N2, and generates three outputs: N1 = N2, N1 < N2, and N1 > N2. We denote the numbers N1 and N2 by the single-bit inputs A, B and C, D, respectively. The first step in tackling any problem like this is to understand fully the behavior of the circuit being designed. You can do this best by drawing a block diagram showing inputs and outputs and creating a truth table for each output as a function of the inputs.
These are shown in Figure 2.41.
It is fairly straightforward to fill in the table. For example, the first row compares the N1 input 00 to the N2 input 00. The F1 function (=) is true, while F2 (<) and F3 (>) are false. In the second row, 00 < 01, so F1 and F3 are false while F2 is true. The rest of the table is filled in a similar way.

The next step is to prepare K-maps for each of the outputs. This is shown in Figure 2.42.
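If you want to cross-check the truth table, it can be generated mechanically. The small sketch below is an added illustration; it assumes A and C are the high-order bits of N1 and N2, as in the text, and prints the three outputs for all sixteen input combinations.

    # Two-bit comparator: N1 = (A,B), N2 = (C,D); outputs F1 (=), F2 (<), F3 (>).
    print(" A B C D | F1 F2 F3")
    for m in range(16):
        A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        n1, n2 = 2 * A + B, 2 * C + D
        print(f" {A} {B} {C} {D} |  {int(n1 == n2)}  {int(n1 < n2)}  {int(n1 > n2)}")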
Let's start with the K-map for F1. There are no adjacencies in this K-map! Each of the four elements of the on-set contributes a four-variable term to the expression for F1: F1 = A'B'C'D' + A'BC'D + ABCD + AB'CD'. This is the minimized sum of products form for the function. However, by using Boolean algebra, we can simplify this expression some more: F1 = A'C'(B'D' + BD) + AC(BD + B'D') = (A'C' + AC)(B'D' + BD). We can get a much simpler form, (A XNOR C)(B XNOR D), but it is not in sum of products form. 1's on K-map diagonals provide a good hint that the function can be expressed more economically in terms of XOR or XNOR operations.

The K-map for F2 has three adjacency groups: two with two elements and another with four elements. The former yield product terms with three literals, A'B'D and B'CD; the latter yields a term with two literals, A'C. The expression for F2 becomes F2 = A'C + A'B'D + B'CD.
The K-map for F3 is a little more complicated. It consists of two groups of two elements each (three literals) and one group of four elements (two literals). The minimum sum of products expression for F3 becomes F3 = AC' + BC'D' + ABD'.

Two-Bit Binary Adder
The next function we examine is a 2-bit binary adder. It takes as input two 2-bit binary numbers, N1 and N2, and produces a 3-bit binary number, N3, as the result. The block diagram and truth table for these functions are shown in Figure 2.43. Once again, N1 is represented by the inputs A and B, N2 by C and D, and N3 by the Boolean functions X, Y, and Z.
The K-maps for the outputs are shown in Figure 2.44.
The maps for the X and Z outputs are more straightforward than the map for Y, and we will start with these. The function for X reduces to two 2-element groups (three literals each) and one 4-element group (two literals): X = AC + ABD + BCD. Z exhibits two 4-element groups (two literals each) and reduces to the expression Z = BD' + B'D.

By careful examination of the K-map, we can often spot opportunities for reduction using the XOR and XNOR operators. We will show this by reducing the literal count for the function Y through good use of XOR/XNOR.
The two straightforward terms of Y are A'B'C and AB'C'. The remaining four single-element groups yield the terms A'BC'D, ABCD, A'BCD', and ABC'D'. We can further reduce A'BCD' and ABC'D' (to A'CD' and AC'D'), but for the moment we do not do this. Factoring yields the expressions

A'B'C + AB'C' = B'(A'C + AC')
A'BC'D + ABCD = BD(A'C' + AC)
A'BCD' + ABC'D' = BD'(A'C + AC')

We can factor the latter two expressions further: BD(A'C' + AC) + BD'(A'C + AC') = B[D(A XNOR C) + D'(A XOR C)] = B(A XOR C XOR D). Y becomes

Y = B'(A XOR C) + B(A XOR C XOR D)

This expression has just seven literals. Compare it to the reduced form, assuming only AND, OR, and NOT gates are allowed:

Y = A'B'C + AB'C' + A'CD' + AC'D' + A'BC'D + ABCD

This expression requires two 4-input AND gates, four 3-input AND gates, and a 6-input OR gate, for a total of 7 gates and 20 literals. The revised expression requires a 2-input OR gate, two 2-input XOR gates, and two 2-input AND gates, a total of 5 simpler gates and 7 literals, to implement the function. The two alternative implementations are shown in Figure 2.45.
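The XOR/XNOR rewriting is easy to validate against the arithmetic definition of the adder. The sketch below is my own check, using the same encoding as the text (A and C are the high-order bits); it verifies X, the XOR form of Y, and Z for all sixteen input combinations.

    # Two-bit adder: N3 = N1 + N2, with N3 = (X, Y, Z) from most to least significant.
    for m in range(16):
        A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        total = (2 * A + B) + (2 * C + D)
        X, Y, Z = (total >> 2) & 1, (total >> 1) & 1, total & 1
        x = (A and C) or (A and B and D) or (B and C and D)   # X = AC + ABD + BCD
        y = ((not B) and (A ^ C)) or (B and (A ^ C ^ D))      # Y = B'(A XOR C) + B(A XOR C XOR D)
        z = B ^ D                                             # Z = BD' + B'D
        assert (bool(x), bool(y), bool(z)) == (bool(X), bool(Y), bool(Z))
    print("X, Y, and Z verified for all sixteen inputs")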
BCD Increment by 1 Function
We introduced the BCD increment by 1 function in Section 2.2.4 as an example of a function with don't cares. The truth table of Figure 2.23 yields the four 4-variable K-maps of Figure 2.46. We attempt to form the largest adjacency groups we can, taking advantage of don't cares to expand the groups wherever possible.

The function W can be implemented by two terms: W = BCD + AD'. These are formed from adjacency groups of two elements and four elements, respectively. Notice how we have taken advantage of adjacencies that wrap from the topmost row of the K-map to the bottommost row.

The function X is implemented by three terms: X = BC' + BD' + B'CD. Once again, we have taken advantage of adjacencies that wrap from top to bottom or left to right in the K-map. The function Y is implemented by two terms: Y = A'C'D + CD'. These are derived from groups of two and four entries, respectively. The final function Z is implemented by a group of eight nodes, which reduces to the single literal D'.
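Because only the ten BCD codes matter, the increment equations can be checked by simulating inputs 0 through 9. The sketch below is an added check, with A and W as the most significant input and output bits.

    # BCD increment by 1: input digit ABCD in 0..9, output WXYZ = (input + 1) mod 10.
    for n in range(10):
        A, B, C, D = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
        out = (n + 1) % 10
        W, X, Y, Z = (out >> 3) & 1, (out >> 2) & 1, (out >> 1) & 1, out & 1
        w = (B and C and D) or (A and not D)                         # W = BCD + AD'
        x = (B and not C) or (B and not D) or ((not B) and C and D)  # X = BC' + BD' + B'CD
        y = ((not A) and (not C) and D) or (C and not D)             # Y = A'C'D + CD'
        z = not D                                                    # Z = D'
        assert (bool(w), bool(x), bool(y), bool(z)) == (bool(W), bool(X), bool(Y), bool(Z))
    print("W, X, Y, and Z verified for BCD digits 0 through 9")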
Once again, notice that adjacency groups in a four-variable K-map always contain 1 (4 literals), 2 (3 literals), 4 (2 literals), 8 (1 literal), or 16 (0 literals, that is, a constant 0 or 1) elements; the group size is always a power of 2. Also notice how the adjacencies are formed: above, below, to the left, and to the right of an element of the on-set, including positions that wrap around the edges of the K-map.

Process of Boolean Minimization
We are now ready to be more precise about the process for obtaining a minimized expression. An implicant of a function F is a single element of the on-set or any group of elements that can be combined together in a K-map; we have been calling these adjacency groups or planes up to this point. A prime implicant is an implicant that cannot be combined with another one to eliminate a literal. In other words, you grow implicants to cover as much of the on-set as possible (each prime implicant is an implicant with as few literals as possible). Each prime implicant corresponds to a product term in the minimum sum of products expression for the function. The trick is to find the fewest prime implicants that cover the elements of the on-set. If a particular element of the on-set is covered by only one prime implicant, that implicant is called an essential prime implicant. All essential primes must be part of the minimized expression.

Example Illustrating the Definitions
Let's look at some examples to make these concepts more concrete. The four-variable K-map of Figure 2.47 contains six prime implicants, but only two of them are essential. Adding one more prime implicant to the two essential primes covers the entire on-set, so the minimized expression for the function is the sum of those three terms.

As another example, consider the function whose K-map is given in Figure 2.48. It contains five prime implicants, and all but the first of them are essential. The minimized form is the sum of the four essential prime implicants.

As a final example, consider the K-map of Figure 2.49. It contains four prime implicants. One of them is inessential, since the 1's it covers are already covered by the remaining implicants, all of which are essential. The minimized function is the sum of the three essential prime implicants.
Deriving a Minimized Expression from a K-map A procedure for finding a minimum sum of products expression from the K-map is the following:
- Step 1: Choose an element from the on-set. Find all of the "maximal" groups of 1's and X's adjacent to that element. Check for adjacency in the horizontal and vertical directions. Remember that the K-map wraps from its top row to its bottom row and from its leftmost column to its rightmost column. The prime implicants (adjacency groups) thus formed always contain a number of elements that is a power of 2.
- Repeat step 1 for each element of the on-set to find all of the prime implicants.
- Step 2: Visit an element of the on-set. If it is covered by a single prime implicant, that implicant is essential and will contribute a term to the final sum of products expression. The 1's covered by the essential implicant do not need to be visited again.
- Repeat step 2 until all essential prime implicants have been found.
- Step 3: If there remain 1's uncovered by the essential prime implicants, select a minimum number of prime implicants that cover them. Try several alternative covers to find the one with the fewest possible implicants. (A rough code sketch of the complete procedure is given below.)
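The same procedure can be expressed in code. The sketch below is a rough, simplified Quine-McCluskey-style illustration of my own; it is not the book's figure-based method, and it is far simpler than what production minimizers such as Espresso actually do. It grows every on-set and don't-care minterm into maximal groups to enumerate the prime implicants, keeps the essential ones, and covers any remaining 1's greedily.

    from itertools import combinations

    def minimize(n, on_set, dc_set=frozenset()):
        """Return a (not necessarily optimal) prime-implicant cover of the on-set."""
        def bits(m):
            return format(m, "0%db" % n)

        def covers(cube, m):
            return all(c == "-" or c == b for c, b in zip(cube, bits(m)))

        # Step 1: grow adjacency groups by repeatedly applying the uniting theorem.
        current = {tuple(bits(m)) for m in set(on_set) | set(dc_set)}
        primes = set()
        while current:
            merged_away, nxt = set(), set()
            for a, b in combinations(current, 2):
                diff = [i for i in range(n) if a[i] != b[i]]
                if len(diff) == 1:                 # differ in exactly one variable
                    c = list(a)
                    c[diff[0]] = "-"               # eliminate that variable
                    nxt.add(tuple(c))
                    merged_away.update((a, b))
            primes |= current - merged_away        # groups that cannot grow are prime
            current = nxt

        # Step 2: a prime is essential if it is the only prime covering some 1.
        cover = set()
        for m in on_set:
            hits = [p for p in primes if covers(p, m)]
            if len(hits) == 1:
                cover.add(hits[0])

        # Step 3: greedily cover any 1's the essential primes missed.
        left = {m for m in on_set if not any(covers(p, m) for p in cover)}
        while left:
            best = max(primes, key=lambda p: sum(covers(p, m) for m in left))
            cover.add(best)
            left = {m for m in left if not covers(best, m)}
        return cover

    # The example of Figure 2.50: F = Σ m(4,5,6,8,9,10,13) + Σ d(0,7,15).
    for cube in sorted(minimize(4, {4, 5, 6, 8, 9, 10, 13}, {0, 7, 15})):
        print("".join(cube))       # 01--, 1-01, 10-0, i.e. A'B + AC'D + AB'D'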
Example Application of the Step-by-Step Algorithm
Figure 2.50 shows the algorithm applied to a complete example. The function represented in the K-map is F(A,B,C,D) = Σ m(4,5,6,8,9,10,13) + Σ d(0,7,15). Figure 2.50(a) gives the starting configuration. We scan down the K-map's columns, top to bottom and left to right, skipping 0's and X's, searching for a 1. The first 1 we encounter is minterm m4. Expand in all directions, combining adjacent 1's and X's into the largest implicant groups you can find. Two such groupings are possible, represented by the terms A'C'D' and A'B. These are circled in Figure 2.50(b).
Continuing down the column, we next come to m5. At this point we should add only new implicants that are not already contained within an implicant found so far. BD is the only implicant we can add under this rule, as shown in Figure 2.50(c).

The next element of the on-set is m6, but no new implicant can be added because the set of implicants already contains A'B. So we continue searching, and the next element of the on-set we reach is m13. We can now add the implicant AC'D. Note that the implicant formed by combining m13 with m15 is already covered by the prime implicant BD, so we do not add it to the K-map. The state of the process to this point is shown in Figure 2.50(d).

The next 1 is m8, which contributes three additional prime implicants: B'C'D', AB'C', and AB'D'. This is shown in Figure 2.50(e). The process continues with m9 and m10, but these add no new prime implicants.

All the elements of the on-set are now covered, and we have obtained all of the prime implicants. The next step is to identify the essential prime implicants. The highlighted prime implicants of Figure 2.50(f), A'B and AB'D', are the essential primes, because they exclusively cover m6 and m10, respectively.

The last step is to cover the remaining 1's not already covered by the essential primes. This is accomplished by the single prime implicant AC'D. The process of enumerating prime implicants found seven of them, yet only three were sufficient to cover the entire on-set. The final minimized function is F = A'B + AC'D + AB'D'.
K-Maps Revisited: Five- and Six-Variable Functions
The K-map method is not limited to four variables or fewer, although using it to visualize functions of more than four variables becomes more challenging. It is important to remember that within an n-variable map we must check for adjacencies in n directions. Fortunately, we can handle the adjacencies for up to six variables simply by stacking two or four 4-variable K-maps.

Five-Variable K-Maps
A five-variable map is shown in Figure 2.51(a). Let's consider the following Boolean function:

F(A,B,C,D,E) = Σ m(2,5,7,8,10,13,15,17,19,21,23,24,29,31)

The filled-in K-map is shown in Figure 2.51(b). We have omitted the 0 entries to reduce the clutter in the figure. When searching for adjacencies, besides looking in the four horizontal and vertical squares as we did in the four-variable K-map, we must also look either up or down between the two stacked planes. The example's on-set is covered by the four prime implicants CE (a group of 8), AB'E (a group of 4), A'C'DE' (a group of 2), and BC'D'E' (a group of 2).
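As with the smaller maps, the five-variable cover can be checked exhaustively. The sketch below is an added verification that the four prime implicants listed above reproduce exactly the stated on-set.

    # F(A,B,C,D,E) = Σ m(2,5,7,8,10,13,15,17,19,21,23,24,29,31)
    on_set = {2, 5, 7, 8, 10, 13, 15, 17, 19, 21, 23, 24, 29, 31}
    for m in range(32):
        A, B, C, D, E = ((m >> i) & 1 for i in (4, 3, 2, 1, 0))
        f = ((C and E)
             or (A and (not B) and E)
             or ((not A) and (not C) and D and (not E))
             or (B and (not C) and (not D) and (not E)))
        assert bool(f) == (m in on_set)
    print("CE + AB'E + A'C'DE' + BC'D'E' matches the on-set")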
Six-Variable K-Map
The six-variable K-map is to the five-variable map as the four-variable map is to the three-variable map: the number of stacked four-variable planes is increased from two to four. This is shown in Figure 2.52(a). An example six-variable K-map is shown in Figure 2.52(b). The function is

F(A,B,C,D,E,F) = Σ m(2,8,10,18,24,26,34,37,42,45,50,53,58,61)

In addition to horizontal and vertical adjacencies, the planes immediately above and below the element being examined must be checked. The top plane also wraps around onto the bottom plane. In the figure, the on-set is covered by three prime implicants: D'EF' (a group of 8), A'CD'F' (a group of 4), and ADE'F (a group of 4).