Sunday, 29 September 2013

Logic Functions and Switches

Boolean Algebra

Boolean algebra is at the heart of understanding all digital systems. You are now ready to study its fundamental structure.

Axioms of Boolean Algebra A Boolean algebra consists of a set of elements B, together with two binary operations {+} and {•} and a unary operation {'}, such that the following axioms hold:
The set B contains at least two elements a, b such that a ≠ b.

Closure: For every a, b in B,

a + b is in B
a • b is in B

Commutative laws: For every a, b in B,

a + b = b + a
a • b = b • a

Associative laws: For every a, b, c in B,

(a + b) + c = a + (b + c) = a + b + c
(a • b) • c = a • (b • c) = a • b • c

Identities: There exists an identity element with respect to {+}, designated by 0, such that a + 0 = a for every a in B.

There exists an identity element with respect to {•}, designated by 1, such that a • 1 = a for every a in B.
Distributive laws: For every a, b, c in B,

a + (b • c) = (a + b) • (a + c)
a • (b + c) = (a • b) + (a • c)

Complement: For each a in B, there exists an element a' in B (the complement of a) such that

a + a' = 1
a • a' = 0
Note: We use A' and Ā interchangeably to represent the complement in this chapter.

It is easy to verify that the set B = {0, 1} and the logical operations OR, AND, and NOT satisfy all the axioms of a Boolean algebra. Simply substitute 0 and 1 for a and b, OR for +, AND for •, and NOT for ', and show that the expressions are true. For example, to verify the commutative law for +:
0 + 1 = 1 + 0 gives 1 = 1; similarly for •, 0 • 1 = 1 • 0 gives 0 = 0.
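
One quick way to convince yourself of this substitution argument is to let a program grind through every case. The following sketch (in Python; the helper names OR_, AND_, and NOT_ are my own, not notation from the text) checks closure, commutativity, the identities, the distributive laws, and the complement axiom over B = {0, 1}:

# Exhaustive check that B = {0, 1} with OR, AND, and NOT satisfies the axioms.
# Helper names (OR_, AND_, NOT_) are illustrative, not from the text.
B = (0, 1)
OR_  = lambda a, b: a | b
AND_ = lambda a, b: a & b
NOT_ = lambda a: 1 - a

for a in B:
    for b in B:
        assert OR_(a, b) in B and AND_(a, b) in B                    # closure
        assert OR_(a, b) == OR_(b, a) and AND_(a, b) == AND_(b, a)   # commutative
        for c in B:
            assert OR_(a, AND_(b, c)) == AND_(OR_(a, b), OR_(a, c))  # distributive
            assert AND_(a, OR_(b, c)) == OR_(AND_(a, b), AND_(a, c))
    assert OR_(a, 0) == a and AND_(a, 1) == a                        # identities
    assert OR_(a, NOT_(a)) == 1 and AND_(a, NOT_(a)) == 0            # complement
print("all axioms hold over {0, 1}")

Because B has only two elements, this exhaustive enumeration is a complete proof, not just a spot check.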
A Boolean function uniquely maps some number of inputs over the set {0, 1} into the set {0, 1}. A Boolean expression is an algebraic statement containing Boolean variables and operators. A theorem in Boolean algebra states that any Boolean function can be expressed in terms of AND, OR, and NOT operations. For example, in Section 1.3.3 we saw one way to map a truth table into a Boolean expression in the sum (OR) of products (AND) form.
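
To make the sum-of-products idea concrete, here is a small sketch that derives such an expression from a truth table: it forms one AND term per row whose output is 1 and ORs the terms together. The majority-of-three function used as input is only an illustration, not an example taken from the text.

# Sketch: derive a sum-of-products expression from a truth table.
# The 'majority' function below is only an illustrative example.
from itertools import product

def majority(a, b, c):
    return int(a + b + c >= 2)

names = ("A", "B", "C")
terms = []
for row in product((0, 1), repeat=3):
    if majority(*row):
        # one AND term per row with output 1; complement the variables that are 0
        term = " ".join(n if v else n + "'" for n, v in zip(names, row))
        terms.append("(" + term + ")")
print(" + ".join(terms))
# prints (A' B C) + (A B' C) + (A B C') + (A B C)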

In fact, there are other ways to represent the function using new logical operations, to be introduced next. These operations are interesting because they are easier to implement with real transistor switches.

Boolean Operations Revisited Let's review the elementary Boolean operations and how these are represented as gates, truth tables, and switches. Figure 2.1, Figure 2.2, and Figure 2.3 summarize the representations for the COMPLEMENT/NOT, AND, and OR operations, respectively.
Take the Boolean expression . A version of the expression with parentheses would be .

Each pair of parentheses represents the expression generated by a single gate. Thus, the circuit is built up through a set of intermediate results, from the inside out:


The gate-level implementation is shown in Figure 2.4(a), using two-input gates. The primitive gates need not be limited to two inputs, however. Figure 2.4(b) shows the same circuit implemented using a three-input AND gate. These implementations are equivalent because of the associative law of Boolean algebra.
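
The equivalence can be spot-checked directly. A minimal sketch, assuming nothing beyond the associative law itself:

# Sketch: a cascade of two 2-input AND gates computes the same function
# as a single 3-input AND gate, by the associative law.
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    cascaded = (a & b) & c      # two 2-input gates, as in Figure 2.4(a)
    single   = a & b & c        # one 3-input gate, as in Figure 2.4(b)
    assert cascaded == single
print("equivalent for all 8 input combinations")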

Each appearance of a variable or its complement in an expression is called a literal. In the preceding expression, we can see that there are four variables and four literals. The following expression has 10 literals but only three variables (A, B, and C). Each literal represents the connection of a variable or its complement to a unique gate input.
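
As a purely hypothetical example of literal counting (this is not the text's own 10-literal expression), the sketch below tallies the literals and variables in a sum-of-products expression written as a list of product terms:

# Sketch: count literals and variables in a sum-of-products expression.
# The expression here, (A B' C) + (A' B C') + (A B C') + (B'), is hypothetical.
terms = [["A", "B'", "C"], ["A'", "B", "C'"], ["A", "B", "C'"], ["B'"]]
literals = sum(len(t) for t in terms)                     # every occurrence counts
variables = {lit.rstrip("'") for t in terms for lit in t}
print(literals, sorted(variables))                        # 10 ['A', 'B', 'C']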

Additional Kinds of Logic Gates

There are other functions of two Boolean variables besides AND, OR, and NOT. In fact, there are 16 possible functions, one for each distinct way of assigning an output of 0 or 1 to each of the four possible truth table rows. A truth table representation of the 16 functions is shown in Figure 2.5.
The constant functions 0 and 1 and the functions X, X', Y, Y', X • Y, and X + Y represent only half of the possible functions. We now introduce the remaining Boolean operators.
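
The count of 16 is just 2 to the power 4: each of the four truth table rows can be assigned an output of 0 or 1 independently. A short enumeration sketch (the labels f0 through f15 are my own):

# Sketch: enumerate all 16 Boolean functions of two variables X and Y.
from itertools import product

rows = list(product((0, 1), repeat=2))            # (X, Y) = 00, 01, 10, 11
for n in range(16):                               # 2**4 possible output columns
    outputs = [(n >> i) & 1 for i in range(4)]    # one output bit per row
    table = ", ".join(f"{x}{y}->{o}" for (x, y), o in zip(rows, outputs))
    print(f"f{n:2d}: {table}")
# f0 is the constant 0 and f15 the constant 1; AND, OR, X, Y, their
# complements, NAND, NOR, XOR, and XNOR all appear among the sixteen.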

NAND and NOR Two of the most frequently encountered Boolean operators are NAND (not AND) and NOR (not OR). Their gate, truth table, and logical switch representations are shown in Figure 2.6 and Figure 2.7. The NAND operation behaves as if an AND is composed with a NOT: it yields a logic 0 in the truth table rows where AND is a 1, and it yields a 1 in the rows where AND is 0. The gate representation is an AND gate with a small circle or "bubble" at its output, denoting negation.

If you take a close look at the logic switch representation in Figure 2.6 and compare it to Figure 2.2, you will see that it looks like an AND function with the true and false inputs reversed. NAND is true, and a switching path exists from true to the output, when either X is 0 (the top normally closed switch is closed) or Y is 0 (the bottom normally closed switch is closed). Alternatively, it is false when X and Y are both true, closing the path from false to the output.

NOR behaves in a similar fashion, but now with respect to OR. Once again the truth table outputs are complemented, and we draw the NOR gate as an OR gate with a bubble at the output. The switch representation in Figure 2.7 shows a topology like the OR gate of Figure 2.3, with the true and false inputs reversed. Both X and Y must be zero to close the normally closed switches and establish the path from true to the output.

NAND and NOR gates far outnumber AND and OR gates in a typical digital design, even though these functions are less intuitive. For electrical reasons, revealed in more detail in Appendix B, normally open switches are better at passing low voltages than high voltages. For example, 0 V at one side of the switch will yield close to 0 V at the other, but 5 V will yield approximately 3.5 V at the other side. Normally closed switches exhibit the opposite behavior. Thus, the switch configurations of Figures 2.6 and 2.7 are better behaved electrically than those of Figures 2.2 and 2.3.

Since any Boolean expression can be represented in terms of AND, OR, and NOT gates, it is hardly surprising that the same statement can be made about NAND, NOR, and NOT gates. In fact, NOT gates are superfluous: if you carefully examine the truth tables of Figures 2.6 and 2.7, you'll see that NAND and NOR act like NOT when both inputs are identically 0 or 1. We shall see an efficient method for mapping Boolean expressions into NAND and NOR logic in Section 2.2.2.
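
You can verify the completeness claim exhaustively: tying both NAND inputs to the same signal gives NOT, a NAND followed by that NOT gives AND, and OR then follows from DeMorgan's law. A minimal sketch (the helper names are mine):

# Sketch: NOT, AND, and OR built from NAND alone, checked over all inputs.
def nand(x, y):
    return 1 - (x & y)

def not_(x):              # tie both NAND inputs together
    return nand(x, x)

def and_(x, y):           # NAND followed by NOT
    return not_(nand(x, y))

def or_(x, y):            # DeMorgan: X + Y = (X' Y')' = NAND(X', Y')
    return nand(not_(x), not_(y))

for x in (0, 1):
    assert not_(x) == 1 - x
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
print("NAND alone realizes NOT, AND, and OR")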

XOR/XNOR This leaves six functions still unnamed in Figure 2.5. Two of these, frequently of use, are exclusive OR (XOR, also known as the inequality gate or difference function) and exclusive NOR (XNOR, also known as the equality gate or coincidence function). Their truth tables and gate representations are given in Figure 2.8.

XOR is true when its inputs differ in value. XNOR is true when its inputs coincide in value. The Boolean operator for XOR is ⊕; XNOR is usually represented as the complement of XOR. As with any Boolean function, these can be implemented in terms of AND, OR, and NOT operations:

XOR: X ⊕ Y = X • Y' + X' • Y    XNOR: (X ⊕ Y)' = X • Y + X' • Y'
If you examine the truth table of Figure 2.8(a), you can see that XOR is precisely the function needed to implement the half adder sum of Chapter 1!
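
A sketch that checks these identities and the half-adder connection over all input combinations (the function names are my own):

# Sketch: XOR and XNOR written with AND, OR, and NOT, plus the half adder.
from itertools import product

def xor_sop(x, y):        # X (+) Y = X Y' + X' Y
    return (x & (1 - y)) | ((1 - x) & y)

def xnor_sop(x, y):       # complement of XOR: X Y + X' Y'
    return (x & y) | ((1 - x) & (1 - y))

for x, y in product((0, 1), repeat=2):
    assert xor_sop(x, y) == (x ^ y)
    assert xnor_sop(x, y) == 1 - (x ^ y)
    half_sum, half_carry = x ^ y, x & y   # half adder: sum is XOR, carry is AND
    assert x + y == 2 * half_carry + half_sum
print("XOR/XNOR identities and the half-adder relation hold")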

Implication The remaining four functions are based on a Boolean operator called implication. X implies Y (written X ⇒ Y) is true under two conditions: either X is false or both X and Y are true. The four remaining functions become X ⇒ Y, Y ⇒ X, NOT (X ⇒ Y), and NOT (Y ⇒ X). These are not commonly found as primitives readily available for realizing digital systems, so they won't be of much use to you. The timing waveforms for the common functions, AND, OR, NAND, NOR, XOR, and XNOR, are shown in Figure 2.9.
The figure assumes a single-time-unit gate delay in computing new outputs from inputs.
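
The idea behind such waveforms is easy to mimic in software: each gate output at time t + 1 is computed from the gate's input values at time t. A minimal sketch, with invented input patterns:

# Sketch: unit-gate-delay simulation of AND, OR, and XOR on two input waveforms.
# The input patterns below are invented; each output lags its inputs by one unit.
X = [0, 0, 1, 1, 0, 1, 0, 1]
Y = [0, 1, 0, 1, 1, 1, 0, 0]

def delayed(gate, a, b):
    # output at time t + 1 reflects the inputs at time t; assume 0 at time 0
    return [0] + [gate(a[t], b[t]) for t in range(len(a) - 1)]

AND = delayed(lambda a, b: a & b, X, Y)
OR  = delayed(lambda a, b: a | b, X, Y)
XOR = delayed(lambda a, b: a ^ b, X, Y)
for name, wave in [("X", X), ("Y", Y), ("AND", AND), ("OR", OR), ("XOR", XOR)]:
    print(f"{name:>3}: " + "".join(str(v) for v in wave))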

Justification for Logic Minimization

Logic minimization uses a variety of techniques to obtain the simplest gate-level implementation of a Boolean function. But simplicity depends on the metric we use. We examine these metrics in this subsection.

Time and Space Trade-Offs One way to measure the complexity of a Boolean function is to count the number of literals it contains. Literals measure the amount of wiring needed to implement a function. For electrical and packaging reasons, gates in a given technology will have a limited number of inputs. While two-, three-, and four-input gates are common, gates with more than eight or nine inputs are rare. Thus, one of the primary reasons for performing logic minimization is to reduce the number of literals in the expression of the function, thus reducing the number of gate inputs.

An alternative metric is the number of gates, which measures circuit area. There is a strong correlation between the number of gates in a design and the number of components needed for its implementation. The simplest design to manufacture is often the one with the fewest gates, not the fewest literals.

A third metric is the number of cascaded levels of gates. Reducing the number of logic levels reduces overall delay, as there are fewer gate delays on the path from inputs to outputs. However, putting a circuit in a form suitable for minimum delay rarely yields an implementation with the fewest gates or the simplest gates. It is not possible to minimize all three metrics at the same time.
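
All three metrics are easy to compute once a circuit is written down as an expression tree. The sketch below uses an invented nested-tuple format (not a representation from the text) to count literals, gates, and logic levels:

# Sketch: literal count, gate count, and logic levels for a small expression tree.
# A node is either a literal string or a tuple (gate_name, child, child, ...).
def literal_count(node):
    if isinstance(node, str):
        return 1
    return sum(literal_count(child) for child in node[1:])

def gates_and_levels(node):
    if isinstance(node, str):
        return 0, 0                                  # literals cost no gates
    child_costs = [gates_and_levels(child) for child in node[1:]]
    gates = 1 + sum(g for g, _ in child_costs)
    levels = 1 + max(l for _, l in child_costs)
    return gates, levels

expr = ("OR", ("AND", "A", "B"), ("AND", "C", "D"))  # (A B) + (C D)
print(literal_count(expr), gates_and_levels(expr))   # 4 (3, 2)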

The traditional minimization techniques you will study in this chapter emphasize reducing delay at the expense of adding more gates. Newer methods, covered in the next chapter, allow a trade-off between increased circuit delay and reduced gate count.

Example To illustrate the trade-offs just discussed, consider the following three-variable Boolean function:

The truth table for this function is shown in Figure 2.10(a).
You would probably implement the function directly from the preceding equation, using three NOT gates, four 3-input AND gates, and a single 4-input OR gate. This is called a two-level implementation, with variables and their complements at the zeroth level, AND gates at the first level, and an OR gate at the second level.

You could implement the same truth table with fewer gates. An alternative two-level implementation is given in Figure 2.10(b) as function Z1:

It uses the same number of inverters and OR gates but only three AND gates. The original function has 12 literals. This alternative has only seven, thus reducing the wiring complexity.

The implementation of function Z2 is called multilevel:

The longest path from an input to an output passes through four gates. This contrasts with three gate delays in the two-level functions. In terms of gate counts, the circuit uses two rather than three inverters and only two-input AND and OR gates. Here you can see a trade-off between gate count and performance: Z2 uses simpler gates, but it is not as fast as Z1.

Z3 shows a third realization that uses an XOR gate:

XOR is sometimes called a complex gate, because you normally implement it by combining several NAND or NOR gates. Although this implementation has the lowest gate count, it is also likely to have the worst signal delay. An XOR gate tends to be slow compared with the implementations for Z based on simple AND and OR gates.
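
Comparisons like this one can be checked mechanically. The four functions in the sketch below are hypothetical stand-ins chosen only to match the shapes described above (a 12-literal two-level sum of products, a smaller two-level form, a factored multilevel form, and a form using one XOR gate); they are not necessarily the text's own Z, Z1, Z2, and Z3.

# Sketch: alternative realizations of one (hypothetical) Boolean function
# agree on every input combination, even though their costs differ.
from itertools import product

def two_level(a, b, c):        # 12 literals: four 3-literal product terms
    return ((1-a) & b & c) | (a & (1-b) & c) | (a & b & (1-c)) | (a & b & c)

def fewer_literals(a, b, c):   # a 7-literal two-level alternative
    return ((1-a) & b & c) | (a & c) | (a & b)

def multilevel(a, b, c):       # factored form with more logic levels
    return (b & c) | (a & (((1-b) & c) | (b & (1-c))))

def with_xor(a, b, c):         # realization that uses one XOR gate
    return (b & c) | (a & (b ^ c))

for bits in product((0, 1), repeat=3):
    results = {f(*bits) for f in (two_level, fewer_literals, multilevel, with_xor)}
    assert len(results) == 1   # all four forms agree on this input
print("all four realizations compute the same truth table")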

Figure 2.11 shows the timing waveforms for the three circuit alternatives, assuming (somewhat unrealistically) a single time unit delay per gate. All have equivalent behavior, although they exhibit slightly different propagation delays. All three circuits show a glitch on the transition ABC = 1 0 1 to ABC = 1 1 0.
