Sunday 29 September 2013

Techniques used in Hardware Manufacturing

Technology Metrics

When it comes to the performance aspects of the design, there are differences in the underlying technologies that may make one technology more attractive than another. Bipolar circuits come in a wide range of TTL (transistor-transistor logic) families, with different trade-offs in circuit speed and power, and the very high speed ECL (emitter-coupled logic) family. The most popular MOS family is CMOS (complementary MOS), consisting of both n-channel and p-channel devices (see Appendix B).
The main technology metrics, summarized in Figure 2.58, include gate delay, degree of integration, power dissipation, noise margin, component cost, fan-out, and driving capability. In general, faster gates consume more power, generate more heat, cannot be packaged as densely, and are more sensitive to noise problems.

Gate delay: If an input change causes an output change, the gate delay is the time delay between the changes. It is usually measured from the input reaching 90% of its final value to the output reaching 90% of its final value. In general, bipolar technologies are faster than MOS, with ECL achieving the fastest switching times, although the gap between MOS and bipolar is narrowing.
Degree of integration: This represents the area required to implement a given function in the underlying technology. MOS circuits pack much more densely than bipolar. For small-scale integrated (SSI) circuits, a package containing up to 10 logic gates, and for medium-scale integrated (MSI) circuits, a package containing up to 100 gates, this metric is probably not very important. However, in large-scale gate arrays and very large scale integrated circuits (VLSI), containing thousands of gates, MOS has a distinct integration advantage over bipolar.
Power dissipation: Gates consume power as they perform their logic functions, generating heat that must be dissipated. Bipolar circuits typically switch faster than MOS, thus generating more heat and consuming more power. ECL circuits consume the most power. MOS circuits, especially CMOS, can be designed to consume very little power, as evidenced by the lifetime of the battery powering a digital watch's CMOS circuitry. However, in MOS circuits, power is in part a function of the frequency with which the circuit changes outputs. Surprisingly, CMOS circuits may exceed the power dissipation of bipolar circuits at the same very high switching speeds.
Noise margin: This is the maximum voltage that can be added to or subtracted from the logic voltages and still have the circuit interpret the voltage as the correct logic value. Modern TTL and CMOS technologies have good noise margins. ECL gains some of its speed from reduced voltages for representing 0 and 1, and this reduced voltage swing requires tighter noise margins.
Component cost: In general, TTL components are very inexpensive, MOS components are medium in cost, and ECL components are the most expensive.
Fan-out: We defined fan-out in Chapter 1 as the number of gate inputs to which a given gate output could be connected because of electrical limitations. Fan-out is a metric related to the ease with which gates can be composed into more complex functions. MOS circuits can be cascaded with a large number of other MOS circuits without suffering degradation in the transmitted signals. However, signal delays do increase with fan-out. The speed of bipolar circuits does not vary with fan-out, but signal quality is dramatically reduced as the fan-out increases.
Driving capability: Discrete gates, as presented in this chapter, are usually placed with other gates in ready-to-use packages. Driving capability measures the speed of communications between packaged components. In general, MOS circuits have less circuit drive than bipolar circuits, making their package-to-package delay greater. Because of differences in the internal electronics, you must be very careful when mixing bipolar and MOS within the same system. Although MOS circuits can drive many other MOS circuits, they can typically drive only a single bipolar circuit.

TTL Packaged Logic

TTL is a family of packaged logic components that enjoys widespread use in industry. In fact, it is so well known by designers that even libraries for VLSI components, implemented in the radically different MOS technology, provide exactly the same kinds of logic functions from NAND gates to binary adders and beyond. Knowledge of the logic functions available in TTL carries over readily to just about every technology that can be used for digital design. And most students learning hardware design for the first time will work with TTL technology in their introductory hardware laboratories.

In this chapter we've seen the various kinds of simple logic gates. A TTL integrated circuit package typically contains several of these. The Texas Instruments 74-series components provide the standard numbering scheme used by industry. For example, the name for a package containing four 2-input NAND gates is "7400," while a "7404" contains six (hex) inverters.

Small- and Medium-Scale Integration Components containing a handful of primitive gates are called small-scale integration or SSI components. These components typically contain fewer than 10 logic gates in a single package. In Chapter 4 we shall meet medium-scale integration or MSI components. These implement more complex functions than simple two-input gates and typically contain between 10 and 100 logic gates.

In TTL technology, logic gates are available in rectangular dual in-line packages, also known as "dips." Pins that connect internal logic to the outside are placed along the two long edges of the package. A 14-pin package is shown in Figure 2.59, along with a diagram of its internal logic and pin connectivity.
A typical 14-pin package measures approximately 0.75 inch in one dimension and 0.3 inch in the other.

TTL packages come in a variety of pin configurations, from 14-pin packages (7 pins per side) all the way up to 64-pin packages (32 pins per side). In a TTL package, the top edge is usually distinguished by an indentation. The pin immediately to the left of this is labeled #1. You will often find a small bump next to this pin.

The pins are numbered counterclockwise starting with pin #1 at the upper left-hand corner. The pin at the lower left-hand corner (pin #7 in this case) is usually connected to ground (GND), while the pin at the upper right-hand corner (pin #14) is connected to the power supply (VCC). If in doubt, you should always consult the data book for the logic components you are using to verify the numbering scheme. Always make sure you understand which pins are to be connected to the power supply.

The figure also shows the pin connectivity for a 14-pin package containing four 2-input NAND gates. Notice how the pins are connected within the package to the gates' inputs and outputs. Usually, related input and output pins are adjacent on the package, but this is not always so. You should always consult the logic family's data book for component pin-out maps.

Subfamilies of TTL TTL is a logic family. This means that the components have been designed so they can be interconnected without too much concern about proper electrical operation. For example, all TTL components operate with a 5 V power supply. There are actually several subfamilies of TTL, all implementing the same logic functions but representing different trade-offs between speed of operation and the amount of power they consume. In general, the faster the component, the more power it consumes.

Standard TTL components are listed as 74XX, where XX is the component number. High-speed TTL components are denoted by 74HXX and low-power TTL components by 74LXX. H components are about one third faster than standard but use twice as much power. L components use one tenth of the power but experience four times the delay. These families were popular in the 1970s but are now considered obsolete.

A major innovation was the introduction of Schottky TTL in the mid-1970s. The internal design of the gates was changed to incorporate a faster kind of transistor structure. The 74SXX family was faster than H TTL but used about the same power as H. This led to the introduction of LS TTL, low-power Schottky TTL, which is one of the most popular families in use today. LS TTL is as fast as standard TTL but uses only 20% of its power.

Innovation did not stop there. Two additional TTL families, AS and ALS are now also available. AS TTL has twice the speed of S TTL at comparable power consumption, and ALS uses less power than LS while offering higher speed. Although the complete catalog of standard components is available in LS TTL, only a relatively small subset is available in these newer technologies.

Speed-Power Product From the preceding discussion, you may find it a little difficult to know which TTL family is best to use. It is always desirable to have a high-speed system, but usually the components are more expensive and the system consumes more power. Higher power consumption translates into a system that runs hotter and needs more expensive cooling and power supplies. An important figure of merit for the purpose of comparing the efficiency of logic families is the speed-power product. To obtain this metric, we multiply the delay through a gate by the power it consumes. Smaller numbers are better: reduced delay and reduced power are ideal.

A typical standard TTL gate has a delay of 9 nanoseconds (ns) and consumes 10 milliwatts (mW). Its speed-power product is 90. The same gate in LS TTL experiences the same delay but consumes only 2 mW. The speed-power product is a better 18. ALS TTL has a delay of 5 ns and consumes only 1.3 mW. Its speed-power product is an even better 6.5. This represents an extremely efficient technology. LS and ALS components are used when the goal is good speed with low power consumption.

Now let's look at higher-speed components. S TTL has a delay of 3 ns, a power consumption of 20 mW, and a speed-power product of 60. Not much faster than ALS, but it uses a lot more power! AS TTL has a delay of 1.6 ns, a power consumption of 20 mW, and a speed-power product of 32. This is the technology of choice for high-speed designs where power consumption need not be low.
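These comparisons are easy to reproduce. The short Python sketch below tabulates the speed-power products using the delay and power figures quoted above (note that nanoseconds times milliwatts gives picojoules):

    # Speed-power products for the TTL subfamilies discussed above.
    # delay (ns) x power (mW) = energy (pJ); smaller is better.
    families = {
        "74 (standard)": (9.0, 10.0),
        "74LS":          (9.0, 2.0),
        "74ALS":         (5.0, 1.3),
        "74S":           (3.0, 20.0),
        "74AS":          (1.6, 20.0),
    }

    for name, (delay_ns, power_mw) in families.items():
        print(f"{name:14s} delay={delay_ns:4.1f} ns  power={power_mw:5.1f} mW  "
              f"speed-power product={delay_ns * power_mw:5.1f} pJ")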

So far, we have considered literal count as the primary way to determine the simplicity of a design. Integrated circuit package count is another critical design metric. In TTL technology, a single package typically contains six inverters, four 2-input gates, three 3-input gates, or two complex gates.

Look back at the three implementations of the function Z in Figure 2.9. Implementing Z1 in TTL requires three packages: one package of inverters, one package of three 3-input AND gates, and one package of three 3-input OR gates. Z2 also uses three packages: one package of inverters, one package of four 2-input AND gates, one package of four 2-input OR gates. But Z3 needs only two packages. Once again, Z3 is the most area-efficient design.

Schematic Documentation Standards

Although it is frequently ignored, proper documentation is one of the most important skills of a designer. You must be able to describe your design to other designers.

This means that documentation is primarily about standard ways of doing things. It includes the following:
  • Representing the design on paper. This involves ways of drawing components and describing their compositions. A project team must develop and adhere to a standard convention for naming components and the signal wires that interconnect them.
  • Representing interfaces. An interface describes everything you need to know about a subsystem to use it without understanding all its internal workings. This is like the description of a subroutine in a computer program: the documentation describes what the subroutine does as well as the names, types, and functions of its parameters. Hardware interfaces are similar to software interfaces, but somewhat more complicated. A hardware interface describes the behavior of the design, as well as the names of signals and how to connect to them.
Standard Schematic Symbols Throughout this chapter, we have used standard gate symbols to represent common SSI components, such as AND, OR, NAND, NOR, XOR, and inverter gates. Gates always have two representations: normal logic and dual logic. The dual representation is used to conserve inversions when using active low signals.
Some examples are shown in Figure 2.60.

These are recognized by digital designers everywhere, so you should use them in your work as well. Do not use any other symbols for the same functions.

Conventions for MSI components are less rigid, but the following are typical. Functions are represented by blocks with input/output signals rather than discrete gates. Figure 2.61 contains a schematic symbol for the 74112 dual J-K flip-flop, a 16-pin TTL component that will be introduced in Chapter 6.

Inputs are drawn on the left, outputs on the right. The general flow of data is from left to right and top to bottom. All signals are labeled with meaningful names. Bubbles on pins or names that end in a slash ("/") indicate signals that are active low. The numbers identify package pin numbers. The connections to ground (pin #8) and the power supply (pin #16) are usually not shown in schematics. Every logic symbol must, without exception, have its part number written inside it.

At times, you will use different instances of the same set of gates in several places in your schematic. The best way to handle this is to draw the gates once, then box them in with a dashed line and label them with a detail letter, as in Figure 2.62.

When you use these gates in a particular place in your schematic, draw a single symbol for the function to be performed, like the single 4-bit wide inverter in the figure, labeling it with the detail letter. The idea is that you expand the detail with its definition.

Names All signals that are not entirely local to an individual schematic drawing must be given a name. If a signal connects to many places in one drawing, it is more convenient to name the signal once and label local wires with this name where it is used, rather than draw wires to connect the uses together.

Names are an important form of documentation, so it is a good idea to name any wire whose usage is not trivial. You should use names that are understandable and describe the function performed by the signal. For example, if a signal causes the B Register to be cleared, then name the signal CLEAR BREG, not 52 or CB. More than one word in a name is fine. Many people capitalize signal names so that they are easy to distinguish from text in documentation. Each signal name must be unique within the project.

Polarization and Bubbles To see an application of practical bubble matching, consider Figure 2.63.

The designer is trying to AND together two related data bits, identified as XA<0> and XA<1>. Unfortunately, the NAND gate output has the opposite sense from what the designer wants: if both bits are 1's, then the NAND output is a 0. As signals pass through levels of logic, it is quite typical for the polarity to switch back and forth.

To make it easier to deal with inverting logic, we think about signals in two separate ways, both of which are reflected in the convention for signal names. The first element of a signal name indicates its function: LOAD PC, or XA<0> AND XA<1>. The second gives the signal's polarity as either high (.H) or low (.L). An .H or .L polarity indicator is added to every signal name to indicate whether its function occurs when the signal is 1 or 0. For example, in Figure 2.63, the signal XA<0>_AND_XA<1>.L is at a low voltage ("true") when both XA<0> and XA<1> are at high voltages (also "true"). Thus, the signal is given an .L polarity. If an AND gate had been used instead of NAND, the polarity would be .H. If a signal is not marked with a polarity indicator, it defaults to positive logic. To be absolutely clear, it is a good idea to mark all signals explicitly.

A signal with positive (.H) polarity is asserted at a high voltage level, and a signal with negative (.L) polarity is asserted at a low voltage level. A bubble on a logic symbol indicates that an input or output is inverted. An input with a bubble means that the input signal is to be asserted low. A bubbled output is asserted when its voltage is low. A bubbled input should almost always match a bubbled output or another signal that is specified as being asserted active low. Inputs and outputs without bubbles match .H signals or other bubbleless inputs and outputs.


Because it makes drawings so much easier to read, you should match bubbles wherever possible. Where they match, you can simply ignore polarity. For example, although Figure 2.64 and Figure 2.65 are equivalent, in Figure 2.65 it is much easier to see that five active high input signals are being ANDed together.

In a few cases, bubbles cannot be made to match. Usually these cases involve some form of inhibition. That is, when the signal is asserted, something is NOT happening.

Figure 2.66 shows an example where a clear pulse (ClearReg) is being controlled by another signal (Enable). The actual clear signal is active low and should be asserted only when both Enable and ClearReg are asserted. When the output of the NAND gate is high, the Clear signal is inhibited. Note the use of the positive logic form of the output signal polarity.

Since the gate's output is active low but the signal label is active high, we have a mismatch. It would be clearer if the gate output were labeled with the action that takes effect when the signal is asserted active low. Careful renaming of a signal can make the mismatch go away. In this case, you simply replace InhibitClear.H with EnableClear.L.

Crossovers and Connections All connections between wires must be marked with "blobs," as shown in Figure 2.67.

Be careful to make it crystal clear when signals cross without connection and when connections are made.

Hierarchical Documentation and Cross-References Large designs cannot fit on a single sheet of paper. They must be suitably distributed over many pages, and signals will cross page boundaries. Even if everything did fit on one sheet, it probably wouldn't be practical to draw wires for every connection: the sheet would turn into a rat's nest of lines. Thus, you need to make proper use of hierarchy and abstraction in your presentation of the design.

The first pages should contain a coarse block diagram showing the main group of components. Only signals that leave or enter one of these blocks should be shown. Subsequent pages will expand these blocks hierarchically into more and more details. Thus the same signal may appear in several disconnected places. It is important to keep track of where signals are used, making it easier to scan the drawings and to verify signal fan-outs.

You are expected to observe several conventions in order to keep track of signal usage. All inputs to a sheet should enter at the top or left edge of the sheet; all outputs from the sheet should terminate at the bottom or right edge. If a signal is both an input and an output, you may take your pick.

Each page of your project should be named in some conspicuous place and should also be numbered. Signals that leave a page should carry a unique label and an indication of other pages on which they can be found entering. Correspondingly, all entering signals should carry the proper name label and an indication of the page from which they come. Ideally, if each page corresponds to some block in a previous page, then that previous page is a block diagram that shows all the interconnections between the blocks. The inputs and outputs to any block should correspond to all the entering and exiting signals on the detailed page and should carry the same signal names.

Two-Level Simplification

We can always use the rules of Boolean algebra from Section 2.1 to simplify an expression, but this method has a few problems. For one thing, there is no algorithm you can use to determine that you've obtained a minimum solution. When do you stop looking for a simplification theorem to apply? Another problem is that you often have to make the expression more complicated before you can simplify it. In our examples, we sometimes substituted an expression of the form X + X' for 1 to add more terms. Then we rearranged the terms to obtain advantageous groupings that helped to simplify the expression in a later step. It is against human nature to climb out of a "local minimum" in the hope of finding a better global solution. But this is exactly what we often have to do. And finally, it is just too cumbersome (and error prone) to manipulate Boolean expressions by hand.

Given that computer-based tools have been developed for Boolean simplification, why bother to learn any hand method, especially when these break down for problems with many variables? Certainly no hand method will be effective for equations involving more than six variables. But you still need knowledge of the basic approach. Observing the symmetries in a circuit's function helps to understand its behavior and to visualize what is going on inside the circuit. As CAD tools become ever more sophisticated, you need a deeper knowledge of the algorithms they apply to use the tools effectively. And don't forget that CAD tools are written by mere mortals and do not always do the correct things! You must still be able to check the output of the tool.

The Essence of Boolean Simplification Let's look at what is really going on in simplification with the simple truth table of Figure 2.24(a).

The function is F = AB' + AB. We can simplify this equation by applying one of the Boolean simplification theorems, called the uniting theorem: F = AB' + AB = A(B' + B) = A.

Notice that the two truth table rows that make up the on-set of F have A asserted, while one row has B = 0 and the other has B = 1. For a subset of the on-set (in this case, the whole on-set), A's value stays the same while B's value changes. This allows us to factor out B using the uniting theorem.

Now examine Figure 2.24(b). The function is G = A'B' + AB'. Applying the uniting theorem again, we obtain G = B'. Once again, the on-set contains two rows in which B's value does not change (it is equal to 0) and A's does change. Thus we can factor out A, leaving G = (A' + A)B' = B'.

The essence of simplification is repeatedly to find two-element subsets of the on-set in which only one variable changes its value while the other variables do not. You can eliminate the single varying variable from the term by applying the uniting theorem.
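The uniting theorem is mechanical enough to express in a few lines of code. Here is a minimal Python sketch (the bit-string encoding of minterms is our own choice, not the book's notation) that merges two minterms differing in a single variable:

    def unite(m1, m2):
        """Merge two terms that differ in exactly one position."""
        diffs = [i for i, (a, b) in enumerate(zip(m1, m2)) if a != b]
        if len(diffs) != 1:
            return None          # not distance 1: the uniting theorem does not apply
        i = diffs[0]
        return m1[:i] + "-" + m1[i + 1:]

    # F = AB' + AB (Figure 2.24(a)): rows 10 and 11 unite, eliminating B.
    print(unite("10", "11"))     # prints 1-  (the single literal A)
    # G = A'B' + AB' (Figure 2.24(b)): rows 00 and 10 unite, eliminating A.
    print(unite("00", "10"))     # prints -0  (the single literal B')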

Wouldn't it be nice if there were a way to arrange the truth table rows so that the entries to which this technique could be applied are obvious? We shall present just such a representation, the Boolean cube, next.

Boolean Cubes

A cube is usually defined as a solid object bounded by six equal squares. This concept can be generalized to other than three dimensions. A two-dimensional cube is a square, a one-dimensional cube is a line, and a zero-dimensional cube is a point.
We can represent the truth table of an n-input Boolean function as a "cube" in an n-dimensional Boolean space. There is one axis for each variable, over which it can take on exactly two values: 0 or 1.

Figure 2.25 shows the form of Boolean 1-, 2-, 3-, and 4-cubes. Each node in the figure is labeled with its coordinates in the n-dimensional space. The axes are listed in alphabetical order, from highest to lowest order. The structure generalizes beyond four dimensions, but it is rather hard to visualize these.

Examples Mapping Truth Tables onto Cubes Now let's examine how to map a truth table onto a cube, using the simple examples of Figure 2.24. The elements of the on-set are represented by black nodes and those of the off-set by white nodes (don't cares are represented by X nodes, although we do not have any in this example).

Figure 2.26 shows the mapping. Observe that the elements of the functions' on-sets are next to each other in the truth table's Boolean cube. This tells you that the uniting theorem can be used to eliminate a variable. In the figure, we have circled elements of the on-set that are directly adjacent. We call such a circled group of nodes an adjacency plane. Each adjacency plane corresponds to a product term. In Figure 2.26(a), the circled nodes yield the term A; in Figure 2.26(b), the term is B'.

You can think about these adjacencies as distances between nodes in the Boolean cube. Two nodes are distance 1 apart if they are connected by an edge in the cube. They are distance 2 apart if they are separated by a path of two connected edges. If this is the case, the two nodes are on the same plane. Two nodes are distance 3 apart if they are separated by a path of three connected edges and no shorter path exists between the two nodes. Then the nodes are in the same three-dimensional cube.

In the on-set/adjacency plane of Figure 2.26(a), A's value stays 1 while B's varies between 0 and 1. This is an immediate clue that the uniting theorem can reduce the function to the single literal A. Similarly, in Figure 2.26(b) the adjacency plane is circled, and the nodes involved have B retaining its value while A varies. The uniting theorem reduces the expression to the term B'.

As an example of a three-variable function and its mapping onto a 3-cube, let's return to the full adder's carry-out function examined in Section 2.2.1. The truth table and its mapping onto a 3-cube are shown in Figure 2.27.

The on-set is arranged on 3 one-dimensional adjacency planes: the edges containing the nodes 011-111, 111-101, and 110-111. In the first segment, A varies between 0 and 1 while B and Cin remain asserted and unvarying. This reduces to the term B Cin. For the second segment, B varies, yielding the term A Cin. In the final segment, Cin varies, and the resulting term is AB. The final expression becomes

Cout = B Cin + A Cin + AB
This method lets us obtain the final expression much more quickly than using Boolean algebra. Note that each adjacency plane contributes one product term to the final expression. Since each plane is an edge in a three-dimensional cube, the two 3-variable minterms in the plane are reduced to a single 2-variable product term.
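If you want to double-check a result like this, exhaustive simulation is cheap for small functions. A short Python sketch (our own verification, with 1/0 standing for true/false):

    from itertools import product

    for a, b, cin in product((0, 1), repeat=3):
        carry = 1 if a + b + cin >= 2 else 0      # true carry-out of a full adder
        expr = (b & cin) | (a & cin) | (a & b)    # Cout = B Cin + A Cin + AB
        assert expr == carry, (a, b, cin)
    print("Cout = B Cin + A Cin + AB verified on all 8 rows")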

Adjacencies of Higher Dimensions What about adjacency planes of higher dimensionality than single edges, such as a two-dimensional plane within a 3-cube? Consider the function depicted in the 3-cube of Figure 2.28.
One entire face of the cube is included in the on-set. Intuitively, we should expect this surface to reduce to the single literal A because all other variables vary between 0 and 1 within the surface.

Let's verify that this is the case. The four line segments on the surface are denoted by the nodes 110-111, 101-111, 100-101, and 100-110. Applying the uniting theorem to each segment independently yields the terms AB, AC, AB', and AC', respectively. We can continue to apply the uniting theorem: AB + AB' = A and AC + AC' = A.

In the 3-cube, if the on-set is a two-dimensional plane, it contributes a single 1-variable product term to the expression for the function.

For the 3-cube, the relationship between the dimensionality of the adjacency plane and the term it represents is the following:
  • A zero-dimensional adjacency plane, a single node, yields a three-literal product term. For example, 101 = AB'C. This is the same as a minterm.
  • A one-dimensional plane, an edge, yields a two-literal product term. For example, 100-101 = AB'.
  • A two-dimensional plane yields a one-literal product term. For example, 100-101-111-110 = A.
  • A three-dimensional plane, the whole cube, yields a term with no literals in it; that is, it reduces to the constant logic 1.
This generalizes to cubes of higher dimensions. An m-dimensional adjacency plane within an n-dimensional cube will reduce to a term with n - m literals.
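This reduction can also be carried out mechanically. The following Python sketch (a simplified illustration of the idea, reusing the unite operation from earlier; real minimizers are more careful) grows adjacency planes by repeated uniting, writing each eliminated variable as '-'. A term with m dashes is an m-dimensional plane and carries n - m literals:

    def grow(terms):
        """One round of uniting over a set of terms; '-' marks an eliminated variable."""
        merged = set()
        for t1 in terms:
            for t2 in terms:
                diffs = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
                if len(diffs) == 1 and "-" not in (t1[diffs[0]], t2[diffs[0]]):
                    merged.add(t1[:diffs[0]] + "-" + t1[diffs[0] + 1:])
        return merged

    # The on-set face of the 3-cube in Figure 2.28: {100, 101, 110, 111}.
    plane = {"100", "101", "110", "111"}
    while True:
        bigger = grow(plane)
        if not bigger:
            break
        plane = bigger
    print(plane)   # {'1--'}: a two-dimensional plane, i.e., the single literal A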

The fewer planes needed to include all of the 1's in the truth table, the fewer the terms in the function's final expression. Planes of higher dimensionality generate terms with fewer literals in them. Thus, computer-aided design algorithms for minimization attempt to find the smallest number of the highest-dimensionality adjacency planes that contain all the nodes of the function's on-set (perhaps including elements of the don't-care set as well). This process is called finding a minimum cover of the function.

K-Map Method

The cube notation makes obvious the adjacencies among truth table rows. Adjacencies provide visual clues as to where to apply the uniting theorem to elements of the on-set that are distance 1 (a line), 2 (a square), 3 (a cube), etc. apart within the n-dimensional cube that represents the function. The problem for humans is the difficulty of visualizing adjacencies in more than three dimensions. To circumvent this problem, at least for expressions up to six variables, we introduce an alternative reformulation of the truth table called the Karnaugh map or K-map.

General Concept A K-map for a Boolean function specifies values of the function for every combination of the function's variables. Just like a truth table, an n-variable K-map has 2^n entries. Rather than the linear enumeration of the truth table entries, a K-map has two and sometimes three dimensions, indexed by the values of the input variables.
Figure 2.29 shows K-map templates for two-, three-, and four-variable Boolean functions. The entries are labeled with the decimal equivalent of the binary number formed by joining the column with the row indices. For example, entry 3 in the three-variable K-map is labeled by the column AB = 01 and the row C = 1 (ABC = 011₂ = 3). The labels are included only for your convenience in filling in the K-map from the truth table: they correspond to the row number of the associated truth table entry.

The only surprising thing is the ordering of the indices: 00, 01, 11, 10. This is called a Gray code. It has the property that when advancing from one index to the next adjacent index, only a single bit changes value. This property is not true for the standard binary sequence 00, 01, 10, 11.
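The standard trick for generating a Gray code, in case you want to build K-map indices programmatically, is to exclusive-OR each index with itself shifted right by one bit. A small Python sketch:

    def gray(i, bits=2):
        """Gray-code the integer i: adjacent indices differ in exactly one bit."""
        return format(i ^ (i >> 1), f"0{bits}b")

    codes = [gray(i) for i in range(4)]
    print(codes)                                    # ['00', '01', '11', '10']
    for a, b in zip(codes, codes[1:] + codes[:1]):  # the sequence wraps around
        changed = bin(int(a, 2) ^ int(b, 2)).count("1")
        assert changed == 1                         # single-bit change, even 10 -> 00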

The structure of the K-map is chosen so that any two adjacent (horizontal or vertical, but not diagonal) elements are distance 1 apart in the equivalent cube representation (they share a common edge).

This is shown in Figure 2.30 for a three-variable K-map and three-dimensional Boolean cube. Note that K-map square 0 is adjacent to squares 1, 2, and 4. The K-map actually folds back on itself. The elements in the rightmost column are adjacent to the elements in the leftmost column; the elements in the top row are adjacent to the elements in the bottom row.

Example Two-Variable Maps Figure 2.31 shows the two-variable maps for the example functions F and G of Figure 2.24. The K-map is filled in directly from the truth table. Each truth table row corresponds to a K-map entry. The values of the variables are the indices, and the truth table's output value is placed into the K-map's square with the corresponding index.

In Figure 2.31(a), the function is F = AB' + AB, as denoted by the 1's in the A = 1, B = 0 and A = 1, B = 1 map entries. We can apply the uniting theorem to reduce this to the single literal A. The K-map shows immediately that the two entries are adjacent. The A variable value remains unchanged while the B value varies from 0 to 1. Looking at this group should tell you that the B term can be eliminated, leaving us with A.

The same analysis holds for Figure 2.31(b). The function is G = A'B' + AB', and its on-set is row adjacent in the K-map. This demonstrates the advantage of the K-map over the truth table for recognizing adjacencies. A varies from 0 to 1 while B holds at 0 for this K-map row. We can reduce the function to the single literal B'.

Example Three-Variable Maps Figure 2.32 shows the three-variable K-map for the full adder carry-out function of Figure 2.27. You can see that three different two-element adjacencies cover the on-set (recall that adjacency is not defined for diagonal entries). The first is the column indexed by A = 1, B = 1. Since Cin varies from 0 to 1, it can be eliminated, yielding the term AB. The second is the adjacency indexed by A = 0, B = 1, Cin = 1 and A = 1, B = 1, Cin = 1. A varies while B and Cin remain unchanged, yielding the term B Cin.

Observe that the bar labeled B along the bottom of the K-map tells you that B remains unchanged within the middle two columns and in the two outer columns. The final adjacency is indexed by A = 1, B = 1, Cin = 1 and A = 1, B = 0, Cin = 1. B varies and A and Cin remain unchanged, resulting in the term A Cin. Once again, the labeled bar at the top of the K-map reminds us that A remains unchanged within the last two columns and in the first two columns. The final expression is AB + B Cin + A Cin. There is one term in the reduced expression for each circled adjacency group in the K-map.

Let's revisit the function of Figure 2.28. Its K-map is given in Figure 2.33. The four elements of the on-set are adjacent, and we can circle them all. Within this grouping, both B and C vary while A remains asserted, reducing to the single literal A.

Another case of adjacency is illustrated by Figure 2.34, which shows the K-map for the function F(A,B,C) = m0 + m4 + m5 + m7. Recall that the leftmost and rightmost columns of the K-map are adjacent. Thus we can combine m0 (A'B'C') and m4 (AB'C') to form the term B'C'. We can also combine m5 and m7 to form the term AC. Thus, F = B'C' + AC.

You might be tempted to circle the terms m4 and m5, as they are also adjacent. But you obtain the most reduced solution only by finding the smallest number of the largest possible adjacency groups that completely cover the on-set. Recall that the number of groups equals the number of product terms, and larger groupings have a smaller number of literals. The term formed from m4 and m5 is redundant because both entries are already covered by other terms. We will become more formal about the process of obtaining the minimum solution a bit later on in this section.

K-maps provide a good mechanism for finding the reduced form of the complement very quickly.
Figure 2.35 contains the K-map for the complement of Figure 2.34. All we have done is to replace the 0's with 1's and vice versa. The complement can be read out immediately as F' = A'C + BC'. Contrast this method with the method using Boolean algebra:

    F' = (B'C' + AC)'
       = (B'C')' (AC)'          [DeMorgan's theorem]
       = (B + C)(A' + C')       [DeMorgan's theorem]
       = A'B + BC' + A'C + CC'
       = A'B + BC' + A'C
       = BC' + A'C              [A'B is covered by BC' and A'C]
The K-map yields the result much more quickly!

Example Four-Variable Maps Now let's consider a four-variable function F(A,B,C,D) = Σm(0,2,3,5,6,7,8,10,11,14,15). The K-map is shown in Figure 2.36. Remember that the strategy is to cover the on-set with as few groups as possible, so we must try to find large groups of adjacency. Also, don't forget that the number of elements in an adjacency group is always a power of 2.

The elements m2, m3, m6, m7, m10, m11, m14, m15 form an adjacency group of eight. This collapses to the single literal C. (Recall that a three-dimensional plane within a four-dimensional cube yields a term with 4 - 3 = 1 literal.) The elements m5 and m7 result in the term A'BD (a one-dimensional plane in a four-dimensional space results in a term with 4 - 1 = 3 literals). The final grouping involves the corner terms: m0, m2, m8, m10. To see this adjacency, you must recognize that the map folds back on itself.

Examining Figure 2.37 should make this clearer. In the figure, look for the minterm indices 0000, 0010, 1010, and 1000. The corner elements reduce to the term B'D' (a two-dimensional plane within a four-dimensional space results in a term with 4 - 2 = 2 literals). The minimized form of the function is F = C + A'BD + B'D'.
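As always, the minimized expression can be checked by exhaustive enumeration. A short Python sketch (our own check, with the minterm index computed as 8A + 4B + 2C + D):

    from itertools import product

    on_set = {0, 2, 3, 5, 6, 7, 8, 10, 11, 14, 15}
    for a, b, c, d in product((0, 1), repeat=4):
        m = 8 * a + 4 * b + 2 * c + d                    # minterm index
        f = c | ((1 - a) & b & d) | ((1 - b) & (1 - d))  # C + A'BD + B'D'
        assert f == (1 if m in on_set else 0), m
    print("F = C + A'BD + B'D' matches the on-set exactly")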

Finding the Minimum Product of Sums Form The K-map can also be used to find a function's minimum product of sums expression. In this case we search for elements of the off-set, simply circling the maximal adjacent groups of 0's. We interpret the indices in a fashion complementary to the procedure for finding the minimum sum of products expression. If the variable that is left unchanged in a grouping of 0's has index 0, then that variable contributes an asserted literal to the term. If the index is 1, it contributes a complemented literal.

This method works because we begin by solving for the function's complement in sum of products form, by circling the 0's. Then we apply DeMorgan's theorem to get a product of sums expression by interpreting the indices as complements. Let's look at an example.

Let's reconsider the K-map in Figure 2.36. F', in minimum sum of products form, is found by circling the K-map's 0's rather than its 1's: F' = BC'D' + B'C'D + AC'D. By applying DeMorgan's theorem, we get F in product of sums form: F = (B' + C + D)(B + C + D')(A' + C + D').

Figure 2.38 shows the same K-map as Figure 2.36, but this time with the 0's circled. You can see that three groups of two 0's each can be found. Since these are one-dimensional adjacency groups in a four-dimensional space, there are three literals in each term. The term formed from M4 and M12 is (B' + C + D): B, C, and D remain unchanged, and B's index is 1 while C's and D's are 0. This is just shorthand for applying DeMorgan's theorem to the term BC'D' of the complement. The terms formed from M1, M9 and M9, M13 are (B + C + D') and (A' + C + D'), respectively. Each term is ANDed to form the final expression: F = (B' + C + D)(B + C + D')(A' + C + D').

Don't Cares in K-maps The last wrinkle we consider is the use of don't cares within the K-map. Figure 2.39 shows a K-map for the function F(A,B,C,D) = Σm(1,3,5,7,9) + Σd(6,12,13). The group of four elements reduces to A'D. If we assume that the X's are all 0, we can cover the remaining member of the on-set with the term B'C'D. However, if we assume that the element d13 is 1, we can form a larger adjacency group that yields the term C'D. This has fewer literals. So the values of don't cares should be selected to maximize the size of the adjacency groups. The expression for F becomes A'D + C'D.

In product of sums form, the shorthand expression is written as F(A,B,C,D) = ΠM(0,2,4,8,10,11,14,15) · ΠD(6,12,13).
Figure 2.40 shows the groupings we use to find the minimum product of sums form. We form a group of eight 0's (remember that the top and bottom rows are adjacent) and one of four 0's by judicious use of don't cares (D6, D12 = 0). The expression for F becomes F = D(A' + C').
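When don't cares are involved, a mechanical check must skip the unconstrained rows. A Python sketch of this idea (again our own, using the same index encoding as before):

    from itertools import product

    on_set     = {1, 3, 5, 7, 9}
    dont_cares = {6, 12, 13}

    for a, b, c, d in product((0, 1), repeat=4):
        m = 8 * a + 4 * b + 2 * c + d
        if m in dont_cares:
            continue                        # these rows are unconstrained
        f = d & ((1 - a) | (1 - c))         # F = D(A' + C')
        assert f == (1 if m in on_set else 0), m
    print("F = D(A' + C') matches the specification on every care-set row")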

Design

Two-Bit Comparator Now we are ready to examine a fairly substantial design example using the methods we have seen so far. The goal is to design a circuit that takes as input two 2-bit numbers for comparison, N1 and N2, and generates three outputs: N1 = N2, N1 < N2, and N1 > N2. We denote the numbers N1 and N2 by the single-bit inputs A, B and C, D, respectively.

The first step in tackling any problem like this is to understand fully the behavior of the circuit being designed. You can do this best by drawing a block diagram showing inputs and outputs and creating a truth table for each output as a function of the inputs.

These are shown in Figure 2.41.
It is fairly straightforward to fill in the table. For example, the first row compares the N1 input 00 to the N2 input 00. The F1 function (=) is true, while F2 (<) and F3 (>) are false. In the second row, 00 < 01, so F1 and F3 are false while F2 is true. The rest of the table is filled in a similar way.

The next step is to prepare K-maps for each of the outputs. This is shown in Figure 2.42.

Let's start with the K-map for F1. There are no adjacencies in this K-map! Each of the four elements of the on-set contributes a four-variable term to the expression for F1: F1 = A'B'C'D' + A'BC'D + ABCD + AB'CD'.

This is the minimized sum of products form for the function. However, by using Boolean algebra, we can simplify this expression some more: F1 = A'C'(B'D' + BD) + AC(BD + B'D') = (A'C' + AC)(B'D' + BD).

We can get a much simpler form, (A XNOR C) (B XNOR D), but it is not in sum of products form. 1's on K-map diagonals provide a good hint that the function can be expressed more economically in terms of XOR or XNOR operations.

The K-map for F2 has three adjacency groups, two with two elements and another with four elements. The former yield product terms with three literals, A'B'D and B'CD, the latter a term with two literals, A'C. The expression for F2 becomes F2 = A'C + A'B'D + B'CD.

The K-map for F3 is a little more complicated. It consists of two groups of two elements each (three literals) and one group of four elements (two literals). The minimum sum of products expression for F3 becomes F3 = AC' + BC'D' + ABD'.
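All three comparator outputs can be validated directly against the arithmetic specification. The Python sketch below (our own check) regenerates the truth table from the comparison itself:

    from itertools import product

    for a, b, c, d in product((0, 1), repeat=4):
        n1, n2 = 2 * a + b, 2 * c + d
        f1 = (1 - (a ^ c)) & (1 - (b ^ d))                               # (A XNOR C)(B XNOR D)
        f2 = ((1 - a) & c) | ((1 - a) & (1 - b) & d) | ((1 - b) & c & d) # A'C + A'B'D + B'CD
        f3 = (a & (1 - c)) | (b & (1 - c) & (1 - d)) | (a & b & (1 - d)) # AC' + BC'D' + ABD'
        assert f1 == (1 if n1 == n2 else 0)
        assert f2 == (1 if n1 < n2 else 0)
        assert f3 == (1 if n1 > n2 else 0)
    print("F1, F2, F3 implement =, <, > on all sixteen rows")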

Two-Bit Binary Adder The next function we examine is a 2-bit binary adder. It takes as input two 2-bit binary numbers, N1 and N2, and produces a 3-bit binary number, N3, as the result. The block diagram and truth table for these functions are shown in Figure 2.43.

Once again, N1 is represented by the inputs A and B, N2 by C and D, and N3 by the Boolean functions X, Y, and Z.

The K-maps for the outputs are shown in Figure 2.44.

The maps for the X and Z outputs are more straightforward than for Y, and we will start with these. The function for X reduces to two 2-element groups (three literals each) and one 4-element group (two literals): X = AC + ABD + BCD.

Z exhibits two 4-element groups (two literals each) and reduces to the expression Z = BD' + B'D.

By careful examination of the K-map, we can often spot opportunities for reduction using XOR and XNOR operators. We will show this by reducing the literal count for the function Y by good use of XOR/XNOR.

The two straightforward terms of Y are A'B'C and AB'C'. The remaining four single-element groups yield the terms A'BC'D, A'BCD', ABC'D', and ABCD. We can further reduce A'BCD' and ABC'D' (to A'CD' and AC'D') but for the moment we do not do this.

Factoring yields the expressions Y = B'(A'C + AC') + BC'(A'D + AD') + BC(A'D' + AD).

We can factor the latter two expressions: BC'(A XOR D) + BC(A XNOR D) = B(C XOR (A XOR D)) = B(A XOR C XOR D).

Y becomes: Y = B'(A XOR C) + B(A XOR C XOR D).

This expression has just seven literals. Compare it to the reduced form, assuming only AND, OR, and NOT gates are allowed: Y = A'B'C + AB'C' + A'CD' + AC'D' + A'BC'D + ABCD.

This expression requires two 4-input AND gates, four 3-input AND gates, and a 6-input OR gate, for a total of 7 gates and 20 literals. The revised expression requires a 2-input OR gate, two 2-input XOR gates, and two 2-input AND gates, a total of 5 simpler gates and 7 literals to implement the function. The two alternative implementations are shown in Figure 2.45.
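It is worth convincing yourself that the XOR form and the AND/OR form really are the same function. A brief Python sketch (our own check; the "true" value of Y is the middle bit of the 3-bit sum):

    from itertools import product

    for a, b, c, d in product((0, 1), repeat=4):
        xor_form = ((1 - b) & (a ^ c)) | (b & (a ^ c ^ d))
        sop_form = (((1 - a) & (1 - b) & c) | (a & (1 - b) & (1 - c)) |
                    ((1 - a) & c & (1 - d)) | (a & (1 - c) & (1 - d)) |
                    ((1 - a) & b & (1 - c) & d) | (a & b & c & d))
        true_y = ((2 * a + b + 2 * c + d) >> 1) & 1   # middle bit of N1 + N2
        assert xor_form == sop_form == true_y
    print("Both forms of Y agree with the adder's middle sum bit")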

BCD Increment by 1 Function We introduced the BCD increment by 1 function in Section 2.2.4 as an example of a function with don't cares. The truth table of Figure 2.23 yields the four 4-variable K-maps of Figure 2.46.

We attempt to form the largest adjacency groups we can, taking advantage of don't cares to expand the group wherever possible. The function W can be implemented by two terms: W = BCD + AD'. These are formed from adjacency groups of two elements and four elements, respectively. Notice how we have taken advantage of adjacencies that wrap from the top of the K-map to the bottommost row.

The function X is implemented by three terms: X = BC' + BD' + B'CD. Once again, we have attempted to take advantage of adjacencies that wrap from top to bottom or left to right in the K-map.

The function Y is implemented by two terms: Y = A'C'D + CD'. These are derived from groups of two and four entries, respectively. The final function Z is implemented by a group of eight nodes, which reduces to the single literal D'.
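A compact way to check all four outputs at once is to reassemble them into a digit and compare against (n + 1) mod 10, skipping the illegal input codes 10 through 15. A Python sketch of this check (our own):

    for n in range(10):                       # only digits 0-9 are legal inputs
        a, b, c, d = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
        w = (b & c & d) | (a & (1 - d))                          # W = BCD + AD'
        x = (b & (1 - c)) | (b & (1 - d)) | ((1 - b) & c & d)    # X = BC' + BD' + B'CD
        y = ((1 - a) & (1 - c) & d) | (c & (1 - d))              # Y = A'C'D + CD'
        z = 1 - d                                                # Z = D'
        assert 8 * w + 4 * x + 2 * y + z == (n + 1) % 10, n
    print("W, X, Y, Z implement BCD increment for digits 0 through 9")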

Once again, notice that adjacency groups are always formed by groups of 1 (4 literals), 2 (3 literals), 4 (2 literals), 8 (1 literal), or 16 (0 literals, a constant 0 or 1) elements, always a power of 2. Also notice how the adjacencies are formed: above, below, to the left, to the right of an element of the on-set, including those that wrap around the edges of the K-map.


Process of Boolean Minimization

We are now ready to be more precise about the process for obtaining a minimized expression. An implicant of a function F is a single element of the on-set or any group of elements that can be combined together in a K-map. We have been calling these adjacency groups or planes up to this point. A prime implicant is an implicant that cannot be combined with another one to eliminate a literal. In other words, you grow implicants to cover as much of the on-set as possible (each prime implicant is an implicant with as few literals as possible). Each prime implicant corresponds to a product term in the minimum sum of products expression for the function. The trick is to find the fewest prime implicants that cover the elements of the on-set. If a particular element of the on-set is covered by a single prime implicant, it is called an essential prime implicant. All essential primes must be part of the minimized expression.

Example Illustrating the Definitions Let's look at some examples to make these concepts more concrete.
The four-variable K-map of Figure 2.47 contains six prime implicants: , , , , , and . Only and are essential. Adding the additional implicant covers the entire on-set. Thus the minimized expression for the function becomes:

As another example, consider the function whose K-map is given in Figure 2.48.

It contains five prime implicants: , , , , and . All but the first implicant are essential. The minimized form is

As a final example, consider the K-map of Figure 2.49.

It contains four prime implicants: , , , and . The implicant is inessential, since the 1's it covers are already covered by the remaining implicants, all of which are essential. The minimized function is

Deriving a Minimized Expression from a K-map A procedure for finding a minimum sum of products expression from the K-map is the following:
Step 1: Choose an element from the on-set. Find all of the "maximal" groups of 1's and X's adjacent to that element. Check for adjacency in the horizontal and vertical directions. Remember that the K-map wraps from top row to bottom and leftmost column to rightmost. The prime implicants (adjacency groups) thus formed always contain a number of elements that is a power of 2.
Repeat step 1 for each element of the on-set to find all prime implicants.
Step 2: Visit an element of the on-set. If it is covered by a single prime implicant, it is essential and will contribute a term to the final sum of products expression. The 1's covered by the implicant do not need to be visited again.
Repeat step 2 until all essential prime implicants have been found.
Step 3: If there remain 1's uncovered by essential prime implicants, select a minimum number of prime implicants that cover them. Try several alternative covers to find the one with the fewest possible implicants.
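The procedure above is essentially the first half of the Quine-McCluskey algorithm, and it can be prototyped in a few dozen lines. The sketch below is our own simplified rendering (step 3 uses a greedy cover, which is good enough for an illustration but is not guaranteed minimal in general):

    from itertools import product

    def covers(term, minterm_bits):
        """True if a '-'-pattern term covers the given minterm bit string."""
        return all(t in ("-", m) for t, m in zip(term, minterm_bits))

    def prime_implicants(care_bits):
        """Step 1: unite terms repeatedly; terms that never merge are prime."""
        level, primes = set(care_bits), set()
        while level:
            merged, used = set(), set()
            for t1, t2 in product(level, repeat=2):
                diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
                if len(diff) == 1 and "-" not in (t1[diff[0]], t2[diff[0]]):
                    merged.add(t1[:diff[0]] + "-" + t1[diff[0] + 1:])
                    used |= {t1, t2}
            primes |= level - used
            level = merged
        return primes

    def minimize(n, on_set, dc_set=frozenset()):
        bits = lambda m: format(m, f"0{n}b")
        primes = prime_implicants([bits(m) for m in on_set | dc_set])
        # Step 2: a prime that exclusively covers some on-set minterm is essential.
        cover = set()
        for m in on_set:
            hits = [p for p in primes if covers(p, bits(m))]
            if len(hits) == 1:
                cover.add(hits[0])
        remaining = {m for m in on_set
                     if not any(covers(p, bits(m)) for p in cover)}
        # Step 3: greedily pick primes to cover the leftovers (illustrative only).
        while remaining:
            best = max(primes - cover,
                       key=lambda p: sum(covers(p, bits(m)) for m in remaining))
            cover.add(best)
            remaining -= {m for m in remaining if covers(best, bits(m))}
        return cover

    # The example of Figure 2.50: F(A,B,C,D) = Σm(4,5,6,8,9,10,13) + Σd(0,7,15).
    print(minimize(4, {4, 5, 6, 8, 9, 10, 13}, {0, 7, 15}))
    # {'01--', '10-0', '1-01'}: F = A'B + AB'D' + AC'D (set order may vary)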

Example Application of the Step-by-Step Algorithm


Figure 2.50 shows the algorithm applied to a complete example. The function represented in the K-map is F(A,B,C,D) = Σm(4,5,6,8,9,10,13) + Σd(0,7,15).
Figure 2.50(a) gives the starting configuration. We scan down the K-map's columns, top to bottom and left to right, skipping 0's and X's, searching for a 1. The first 1 we encounter is term m4. Expand in all directions, combining adjacent 1's and X's into the largest implicant groups you can find. Two such groupings are possible, represented by the terms A'B and A'C'D'. These are circled in Figure 2.50(b).

Continuing down the column, we next come to m5. At this point we should add only new implicants that are not already contained within an implicant found so far. BD is the only implicant we can add under this rule, as shown in Figure 2.50(c).

The next element of the on-set is m6, but no new implicant can be added because the set of implicants already contains A'B. So we continue searching for the next element of the on-set, which is m13. We can now add the implicant AC'D. Note that the implicant ABD is already covered by the prime implicant BD, so we do not add it to the K-map. The state of the process to this point is shown in Figure 2.50(d).

The next 1 is m8, which contributes three additional prime implicants, B'C'D', AB'C', and AB'D'. This is shown in Figure 2.50(e). The process continues with m9 and m10, but these add no new prime implicants.

All the elements of the on-set are now covered, and we have obtained all prime implicants. The next step is to identify the essential prime implicants. The highlighted prime implicants of Figure 2.50(f), A'B and AB'D', are the essential primes because they exclusively cover m6 and m10, respectively.

The last step is to cover the remaining 1's not already covered by the essential primes. This is accomplished by the single prime implicant AC'D. The process of enumerating prime implicants found seven of them, yet three were sufficient to cover the entire on-set. The final minimized function is F = A'B + AB'D' + AC'D.

K-Maps Revisited: Five- and Six-Variable Functions

The K-map method is not limited to four variables or less, although using it to visualize functions with more than four variables becomes more challenging. It is important to remember that within an n-variable map we must check for adjacencies in n directions. Fortunately, we can handle the adjacencies for up to six variables, simply by stacking two or four 4-variable K-maps.

Five-Variable K-Maps A five-variable map is shown in Figure 2.51(a).
Let's consider the following Boolean function:

F(A,B,C,D,E) = Σm(2,5,7,8,10,13,15,17,19,21,23,24,29,31)
The filled-in K-map is shown in Figure 2.51(b). We have omitted the 0 entries to reduce the clutter in the figure. When searching for adjacencies, besides looking in the four horizontal and vertical squares as we did in the four-variable K-map, we must also look either up or down. The example's on-set is covered by the four prime implicants CE (group of 8), AB'E (group of 4), A'C'DE' (group of 2), and BC'D'E' (group of 2).

Six-Variable K-Map The six-variable K-map is to the five-variable map as the four-variable is to the three-variable: the number of four-variable planes is increased from two to four. This is shown in Figure 2.52(a).
An example six-variable K-map is shown in Figure 2.52(b). The function is

F(A,B,C,D,E,F) = Σm(2,8,10,18,24,26,34,37,42,45,50,53,58,61)
In addition to horizontal and vertical adjacencies, the planes immediately above and below the element being examined must be checked. The top plane also wraps around onto the bottom plane. In the figure, the on-set is covered by three prime implicants: D'EF' (a group of 8), A'CD'F' (a group of 4), and ADE'F (a group of 4).
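As a final check, the six-variable result can be verified exhaustively. Note that the function name F collides with the sixth input variable, so the Python sketch below (our own) calls the output g:

    on_set = {2, 8, 10, 18, 24, 26, 34, 37, 42, 45, 50, 53, 58, 61}

    for m in range(64):
        a, b, c, d, e, f = ((m >> s) & 1 for s in (5, 4, 3, 2, 1, 0))
        g = (((1 - d) & e & (1 - f)) |               # D'EF'   (the group of 8)
             ((1 - a) & c & (1 - d) & (1 - f)) |     # A'CD'F' (a group of 4)
             (a & d & (1 - e) & f))                  # ADE'F   (a group of 4)
        assert g == (1 if m in on_set else 0), m
    print("D'EF' + A'CD'F' + ADE'F matches the on-set exactly")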