
The Power of Bits
- The fundamental unit of memory inside a computer is called a bit—a term introduced in a paper by Claude Shannon as a contraction of the words binary digit.
- An individual bit exists in one of two states, usually denoted as 0 and 1.
- More sophisticated data can be represented by combining larger numbers of bits:
- Two bits can represent four (2 × 2) values.
- Three bits can represent eight (2 × 2 × 2) values.
- Four bits can represent 16 ($2^4$) values, and so on.
- The laptop I am using has 16 GB of main memory, which is $2^{34}$ bytes or $2^{37}$ bits. This means it can exist in $2^{2^{37}}$ unique memory states. If you were to write out that number in decimal, it would have over 41 billion digits.
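As a quick sanity check (an addition to these notes, not part of the original slides), the number of decimal digits in $2^{2^{37}}$ follows directly from the base-10 logarithm:

$$\left\lfloor 2^{37} \log_{10} 2 \right\rfloor + 1 = \left\lfloor 137{,}438{,}953{,}472 \times 0.30102\ldots \right\rfloor + 1 \approx 4.14 \times 10^{10}$$

which is a little over 41 billion digits.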
Leibniz and Binary Notation
- Binary notation is an old idea. It was described back in 1703 by the German mathematician Gottfried Wilhelm von Leibniz.
- Writing in the proceedings of the French Royal Academy of Science, Leibniz describes his use of binary notation in a simple, easy-to-follow style.
- Leibniz’s paper further suggests that the Chinese were clearly familiar with binary arithmetic 2000 years earlier, as evidenced by the patterns of lines found in the I Ching.

Using Bits to Represent Integers
Binary notation is similar to decimal notation but uses a different base. Decimal numbers use 10 as their base, which means that each digit counts for ten times as much as the digit to its right. Binary notation uses base 2, which means that each position counts for twice as much, as follows:
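The slide's diagram is not reproduced here, but written out for the eight-bit pattern 00101010 (which the next section refers back to), the positional calculation is:

$$0 \cdot 128 + 0 \cdot 64 + 1 \cdot 32 + 0 \cdot 16 + 1 \cdot 8 + 0 \cdot 4 + 1 \cdot 2 + 0 \cdot 1 = 42$$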

Numbers and Bases
- The calculation at the end of the preceding slide makes it clear that the binary representation 00101010 is equivalent to the number 42. When it is important to distinguish the base, the text uses a small subscript, like this:
$$00101010_2 = 42_{10}$$
- Although it is useful to be able to convert a number from one base to another, it is important to remember that the number remains the same. What changes is how you write it down.
- The number 42 is what you get if you count how many stars are in the pattern at the right. The number is the same whether you write it in English as forty-two, in decimal as 42, or in binary as 00101010.

- Numbers do not have bases; representations do.
Octal and Hexadecimal Notation
- Because binary notation tends to get rather long, computer scientists often prefer octal (base 8) or hexadecimal (base 16) notation instead. Octal notation uses eight digits: 0 to 7. Hexadecimal notation uses sixteen digits: 0 to 9, followed by the letters A through F to indicate the values 10 to 15.
- The following diagrams show how the number forty-two appears in both octal and hexadecimal notation:
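The diagrams themselves are not reproduced here, but the two conversions are quick to verify by hand:

$$42 = 5 \cdot 8 + 2 = 52_8 \qquad\qquad 42 = 2 \cdot 16 + 10 = \mathrm{2A}_{16}$$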

Practice: Number Bases
What is the decimal value for each of the following numbers?
$10001_2$
$177_8$
$AD_{16}$
- As part of a code to identify the file type, every Java class file begins with the following sixteen bits:
1100101011111110
- How would you express that number in hexadecimal notation?
- Algorithm:
- Break the number up into groups of four, starting from the right:
- 1100 1010 1111 1110
- Because binary is base 2 and hexadecimal is base 16 (that is, $16 = 2^4$), each group of four bits corresponds to exactly one hex digit. So, you can translate each group directly, using this table:
Binary | Hexadecimal |
---|---|
$0000$ | $0$ |
$0001$ | $1$ |
$0010$ | $2$ |
$0011$ | $3$ |
$0100$ | $4$ |
$0101$ | $5$ |
$0110$ | $6$ |
$0111$ | $7$ |
$1000$ | $8$ |
$1001$ | $9$ |
$1010$ | $A$ |
$1011$ | $B$ |
$1100$ | $C$ |
$1101$ | $D$ |
$1110$ | $E$ |
$1111$ | $F$ |
- So, you can group the digits like this:

$1100$ | $1010$ | $1111$ | $1110$ |
---|---|---|---|
$C$ | $A$ | $F$ | $E$ |

Or, $CAFE_{16}$.
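Here is a minimal Python sketch of the same grouping procedure (an addition to these notes, not part of the original slides); it pads the bit string on the left to a multiple of four and translates each group through a lookup table:

```python
def bits_to_hex(bits: str) -> str:
    """Convert a binary string to hexadecimal by grouping bits in fours."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4 bits
    table = "0123456789ABCDEF"
    # Translate each 4-bit group into one hex digit.
    return "".join(table[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

print(bits_to_hex("1100101011111110"))  # prints CAFE
```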
Bits and Representation
- Sequences of bits have no intrinsic meaning except for the representation that we assign to them, both by convention and by building particular operations into the hardware.
- As an example, a 32-bit word represents an integer only because we have designed hardware that can manipulate those words arithmetically, applying operations such as addition, subtraction, and comparison.
- By choosing an appropriate representation, you can use bits to represent any value you can imagine:
- Characters are represented using numeric character codes.
- Floating-point representation supports real numbers.
- Two-dimensional arrays of bits represent images.
- Sequences of images represent video.
- And so on . . .
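As a small illustration of this idea (a sketch added to these notes, not from the original slides), Python's struct module can reinterpret the same four bytes as an integer, a floating-point number, or text:

```python
import struct

raw = bytes([0x43, 0x41, 0x46, 0x45])   # the same 32 bits throughout
print(struct.unpack(">i", raw)[0])      # as a 32-bit integer: 1128351301
print(struct.unpack(">f", raw)[0])      # as a 32-bit float: roughly 193.27
print(raw.decode("ascii"))              # as ASCII character codes: CAFE
```

The bits never change; only the representation we ask for does.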
Calculating with Logic
- How does a computer actually add numbers together (or do other arithmetic)?
- We have discussed how computers deal with 0s and 1s at the lowest level. Computers use transistors to manipulate the digital 0s and 1s, and certain transistor configurations can be used to build logic gates to perform boolean functions: AND, OR, XOR, NOT, etc.
- We are going to “build an adder” using logic and a set of logic gates. The symbols for the logic gates we will use look like this:

Alternative notation
- There is some common notation that we use when describing logic gates mathematically:
In Words | In Symbols |
---|---|
NOT A | $\bar{A}$ |
A AND B | $AB$ |
A OR B | $A + B$ |
A XOR B | $A \oplus B$ |
Determining a Circuit from a Truth Table
- We often write A AND B as AB, and A OR B as A+B. We also write A XOR B as A⊕B.
- All combinational circuits can be made from a combination of AND, OR, and NOT gates, in the following way:
- For each row of inputs that produces a “1” output, AND together all of the inputs, applying NOT to any input that is 0 in that row, so that the AND term is 1 for exactly that row. Then OR all of these terms together. For example:
A | B | C | Result |
---|---|---|---|
$0$ | $0$ | $0$ | $1$ |
$0$ | $0$ | $1$ | $0$ |
$0$ | $1$ | $0$ | $1$ |
$0$ | $1$ | $1$ | $1$ |
$1$ | $0$ | $0$ | $0$ |
$1$ | $0$ | $1$ | $1$ |
$1$ | $1$ | $0$ | $0$ |
$1$ | $1$ | $1$ | $0$ |
- There are four 1 outputs on the right. The first is when all three inputs are 0, so its term is A̅ B̅ C̅. The next 1 result gives the term A̅ B C̅, and so on. Altogether, we get the result as follows:
A̅ B̅ C̅ + A̅ B C̅ + A̅ B C + A B̅ C
- The above makes it possible to produce a logic diagram for this circuit, but the circuit would actually be more complicated than it needs to be.
- Before we dive into that, let’s start by looking at the simple logic gates described above: NOT, AND, OR, and XOR. We will use the Digital Logic Activity to simulate these gates so we can play around with them.

Notice that A is connected to all of the gates, and B is connected to the AND, OR, and XOR gates. When A is flipped from 0 to 1, the output of the NOT gate changes from 1 to 0, and the outputs of the OR and XOR gates change as well:

Once you build the circuit, you can try out all of the combinations.
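If you would rather experiment in code than in the simulator, here is a minimal Python sketch (an addition to these notes) that defines the four gates as functions and prints their outputs for every combination of A and B:

```python
from itertools import product

# Basic gates as functions on 0/1 values.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

print("A B | NOT A | A AND B | A OR B | A XOR B")
for a, b in product([0, 1], repeat=2):
    print(f"{a} {b} |   {NOT(a)}   |    {AND(a, b)}    |   {OR(a, b)}    |    {XOR(a, b)}")
```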
- Let’s go back to the truth table outputs we had from before:
A̅ B̅ C̅ + A̅ B C̅ + A̅ B C + A B̅ C
- We could build this circuit with lots of NOTs, ANDs, and ORs, but it is complicated!

- However, once we have the equivalent AND and OR circuit, we can use some mathematical properties to reduce the number of components we will need for our circuit.
- Here is the original result:
A̅ B̅ C̅ + A̅ B C̅ + A̅ B C + A B̅ C
- We can use the distributive property to find the common C and C̅ terms, as follows:
C̅ (A̅ B̅ + A̅ B) + C (A̅ B + A B̅)
- There are a couple of nice substitutions, as well:
(A̅ B + A B̅) = A ⊕ B
(A̅ B̅ + AB) = $\overline{A\oplus B}$
- So, above, we can make the following substitution:
C̅ (A̅ B̅ + A̅ B) + C (A̅ B + A B̅) = C̅ (A̅ B̅ + A̅ B) + C (A ⊕ B)
- Now, we are left with the following, which we can reduce even further:
C̅ (A̅ B̅ + A̅ B) + C (A ⊕ B)
- For the first term, we can pull out the A̅ :
C̅A̅ (B̅ + B)
- But, (B̅ + B) is always true, so this further reduces to:
C̅A̅
- We now have:
C̅A̅ + C (A ⊕ B)
- We can simplify the first term by using a NOR gate (“not OR”): $\bar{C}\bar{A} = \overline{A+C}$
- So our final result is: $\overline{A+C} + C(A \oplus B)$
- Which looks like this in a logic diagram:
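As a quick check (again an addition to these notes), a few lines of Python confirm that the simplified expression $\overline{A+C} + C(A \oplus B)$ produces exactly the same outputs as the original sum-of-products expression:

```python
from itertools import product

for a, b, c in product([0, 1], repeat=3):
    original = ((not a and not b and not c) or (not a and b and not c)
                or (not a and b and c) or (a and not b and c))
    simplified = (not (a or c)) or (c and (a ^ b))
    assert bool(original) == bool(simplified)
    print(a, b, c, int(bool(simplified)))
```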

A more formulaic way to determine circuits: the Karnaugh Map
- The logic that we just went through is challenging, and takes some time to learn and master.
- There is another way to simplify circuits, called the Karnaugh Map, or K-map. A Karnaugh map works best for small numbers of inputs (from one to four), and it lets you simplify circuits quickly and efficiently once you understand the rules. Karnaugh map analysis may not produce the absolute minimum number of gates, but it will produce a minimal number of “Sum of Products” terms. Further analysis might be necessary to minimize the number of gates in a circuit.
- The basic idea is to place our truth table values into a different type of table, and then group together values in the new table that allow us to determine what our simplified circuit is.
- Let’s just start with our example from above:
A | B | C | Result |
---|---|---|---|
$0$ | $0$ | $0$ | $1$ |
$0$ | $0$ | $1$ | $0$ |
$0$ | $1$ | $0$ | $1$ |
$0$ | $1$ | $1$ | $1$ |
$1$ | $0$ | $0$ | $0$ |
$1$ | $0$ | $1$ | $1$ |
$1$ | $1$ | $0$ | $0$ |
$1$ | $1$ | $1$ | $0$ |
- This truth table has three inputs, A, B, and C. We can construct the following 3-variable K-map from the truth table:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | 1 | 1 |
1 | 0 | 1 | 0 | 0 |
- To read this, the rows represent the A input value (0 and 1), and the columns represent the B and C values: 00, 01, 11, and 10. You might wonder why we wrote the two-digit values in the order we did (not ascending, which would be 00, 01, 10, and 11). This is because, for this method to work, adjacent columns must differ by only one digit (an ordering called a Gray code).
- You’ll notice that we can read off the input values from the new table’s left column and top row, and the output is inside the table. For example, A=0, BC=11 corresponds to an output of 1, shown in bold below. This is, indeed, the output in the original table when A = 0, B = 1, and C = 1:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | **1** | 1 |
1 | 0 | 1 | 0 | 0 |
- Once you have the K-map, you start looking for groupings of 1s. Each group must have a size that is a power of 2 (1, 2, 4, or 8), and a group can “wrap” around the edges of the table. We want all groups to be as big as possible.
- For this example, we are going to need three boxes, shown below with the grouped cells marked in bold.
One box: a 2-element box:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | **1** | **1** |
1 | 0 | 1 | 0 | 0 |
A second box: also a 2-element box that wraps around (and overlaps the other box, which is fine). By “wrapping,” we mean that the left is considered to be touching the right, and the top is considered to be touching the bottom:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | **1** | 0 | 1 | **1** |
1 | 0 | 1 | 0 | 0 |
A third box: a 1-element box:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | 1 | 1 |
1 | 0 | **1** | 0 | 0 |
- Once we have our boxes, we can determine what inputs are necessary to produce the outputs. As we look at a box, we check whether each input changes or stays the same across the box. Any input that changes can be ignored – it does not affect that output.
- Let’s look at the first box:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | **1** | **1** |
1 | 0 | 1 | 0 | 0 |
- Because the box covers a single row, the “A” value does not change, so it is necessary. A’s value is 0 in that row, which corresponds to $\bar{A}$ for our logic (“not A”).
- For the columns, B is 1 in both cases, so it does not change and is necessary for our output. Therefore, we have $B$ because it is a non-changing 1.
- C, however, does change, and therefore it does not contribute to that particular output, so we can ignore it.
- We are left with $\bar{A}B$, which is an AND term.
We can do the same analysis for our second box:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | **1** | 0 | 1 | **1** |
1 | 0 | 1 | 0 | 0 |
- Again, A is 0, does not change, and is $\bar{A}$.
- This time, B changes from 0 to 1, does not contribute, and can be ignored.
- This time, C is fixed at 0, so it contributes as $\bar{C}$.
- So, we have the combined $\bar{A}\bar{C}$ for this box.
Finally, let’s look at our third box:
A \ BC | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 1 | 0 | 1 | 1 |
1 | 0 | **1** | 0 | 0 |
- Because this is a 1-element box, we do have to use all three inputs.
- A is 1, so we have $A$ contributed.
- B is 0, so we have $\bar{B}$ contributed.
- C is 1, so we have $C$ contributed.
- Overall, the logic is ${A\bar{B}C}$.
To finish off the analysis, we combine all three terms using OR:
$\bar{A}B + \bar{A}\bar{C} + {A\bar{B}C}$
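As an optional check (not part of the original notes), we can confirm that this K-map result reproduces the original truth table; it groups the 1s differently than the earlier algebraic simplification, but it covers exactly the same rows:

```python
from itertools import product

expected = {(0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 1, (0, 1, 1): 1,
            (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 0}

for a, b, c in product([0, 1], repeat=3):
    kmap = (not a and b) or (not a and not c) or (a and not b and c)
    assert int(bool(kmap)) == expected[(a, b, c)]
print("K-map expression matches the truth table")
```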
Here is the final circuit:

Creating an Adder from logic!
- When adding two one-bit numbers together, we have two outputs: a sum and a carry.
- Let’s fill in the table below to determine these values based on adding two bits:
A | B | Sum | Carry |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 1 | 1 | 0 |
1 | 0 | 1 | 0 |
1 | 1 | 0 | 1 |

- Can we determine what logic gates are necessary to produce this circuit, with inputs of A and B?
- The Sum column is the XOR function, and the Carry column is the AND function.
- Now, we can use the logic simulator to produce this circuit, called a “half adder.” You can test the circuit below by clicking on the values next to A and B to toggle them.
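If you do not have the simulator handy, here is a minimal Python sketch of the same half adder (an addition to these notes), built from the XOR and AND functions identified above:

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two one-bit inputs."""
    return a ^ b, a & b   # Sum = A XOR B, Carry = A AND B

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"A={a} B={b} -> Sum={s} Carry={carry}")
```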
The Full Adder
- A half adder takes two one-bit numbers and adds them. But if we want to do this for more than one bit, we have to include a carry-in as well, to propagate the result through the whole addition. Let’s make a new table and fill in the Sum and Carry Out columns:
A | B | Carry In | Sum | Carry Out |
---|---|---|---|---|
0 | 0 | 0 | 0 | 0 |
0 | 0 | 1 | 1 | 0 |
0 | 1 | 0 | 1 | 0 |
0 | 1 | 1 | 0 | 1 |
1 | 0 | 0 | 1 | 0 |
1 | 0 | 1 | 0 | 1 |
1 | 1 | 0 | 0 | 1 |
1 | 1 | 1 | 1 | 1 |
Determining the circuit for a full adder
- The Full Adder has many circuit solutions, but let’s use our Karnaugh Map skills to determine a possible circuit for a full adder, and then work on simplifying it.
- Here is the K-map for the Sum:

A \ B Cin | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 0 | 1 | 0 | 1 |
1 | 1 | 0 | 1 | 0 |

- Unfortunately, there aren’t any groupings of more than one 1! So, we just need to read off the inputs that produce the 1 outputs:
$$\bar{A}\bar{B}C + \bar{A}B\bar{C} + A\bar{B}\bar{C} + ABC$$
- This would not be particularly fun to create in a logic circuit. So, let’s see what we can do about that. We can use some classic algebraic manipulation. First, let’s make two groups out of this: a group whose terms contain $\bar{A}$, and a group whose terms contain $A$:
$$(\bar{A}\bar{B}C + \bar{A}B\bar{C}) + (A\bar{B}\bar{C} + ABC)$$
- Now, let’s factor $\bar{A}$ out of the first group, and $A$ out of the second group:
$$\bar{A} (\bar{B}C + B\bar{C}) + A (\bar{B}\bar{C} + BC)$$
- Now, there are two identities that we can immediately use to simplify these expressions:
(1) $\bar{B}C + B\bar{C} = B \oplus C = B \text{ XOR } C$
(2) $\bar{B}\bar{C} + BC = \overline{B \oplus C} = \text{NOT } (B \text{ XOR } C)$
- The first is the classic definition of XOR: the result is true when exactly one of $B$ and $C$ is true, but not both. The second is NOT XOR (sometimes called XNOR).
- This leaves us with the following:
$$\bar{A} (B \oplus C) + A (\overline{B \oplus C})$$
- But, we can use (1) again, treating $B \oplus C$ as a single input, to simplify this to:
$$A \oplus (B \oplus C)$$
- We could rearrange this as follows, using the associative property:
$$(A \oplus B) \oplus C$$
- So, we have greatly simplified the Sum!
- Now let’s look at Carry Out:
- Here is a K-map for Cout:

A \ B Cin | 00 | 01 | 11 | 10 |
---|---|---|---|---|
0 | 0 | 0 | 1 | 0 |
1 | 0 | 1 | 1 | 1 |

- We can group these into three 2-groups: the vertical 2-group in the 11 column, and two horizontal 2-groups in the bottom row (first the left and middle 1s, then the middle and right 1s). This leaves us with the following:
$$AB + AC + BC$$
- This is already quite nice, but based on what we already have from the Sum, we can do better.
- Let’s first factor a C out of the $AC + BC$ group:
$$AB + C(A + B)$$
- There is another Boolean identity that we can use:
$$A + B = (A \oplus B) + AB$$
- So,
$$C(A + B) = C((A \oplus B) + AB) = C(A \oplus B) + ABC$$
- We can then substitute this back into the original:
$$AB + C(A + B) = AB + C(A \oplus B) + ABC$$
- Now let’s rearrange and factor out an AB:
$$AB (1 + C) + C(A \oplus B)$$
- But, 1 OR’d with anything is always 1, so we can drop out the $(1 + C)$ altogether, and are left with:
$$AB + C(A \oplus B)$$
- Why is this better? Because we already have the $A \oplus B$ from our Sum term!
- Here are the final Sum and Carry Out expressions, which we can create with logic gates:
$$\text{Sum }=(A \oplus B) \oplus C$$ $$\text{Carry Out }=AB + C(A \oplus B)$$
- Let’s use the logic simulator to create the following circuit, which you can test by clicking on the inputs:
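For reference (an addition to these notes, not the simulator circuit itself), the same full-adder logic can be written directly from the two expressions above and checked against the truth table:

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Return (sum, carry_out): Sum = (A XOR B) XOR Cin, Cout = AB + Cin(A XOR B)."""
    s = (a ^ b) ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin   # (Cout, Sum) is the 2-bit total
            print(f"A={a} B={b} Cin={cin} -> Sum={s} Cout={cout}")
```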
Building a 4-bit adder
- Our full adder is great, but it only works for 1-bit numbers! What if we wanted to add bigger numbers? We can chain together our full adder to make an adder with as many bits as we want. Here is a (non-clickable) example of a 2-bit adder:

- We are adding two 2-bit numbers, with bits A1, A0 and B1, B0. Here is the output when we add $01 + 10$. We get the output $11$, which is correct, because $1_{10} + 2_{10} = 3_{10}$, which is $11$ in binary:

- Here is the output when we add $11 + 11$. We get the output $110$ (using the carry out), which is correct, because $3_{10} + 3_{10} = 6_{10}$, which is $110$ in binary:

- The logic simulator has a built-in component for a full adder, meaning we can simplify our diagram considerably. The full adder component packages up a full adder into a single component:

- Here is an 8-bit adder, made from 8 full adders. It can add two 8-bit numbers (each up to 255), with a one-bit carry out for results larger than 255:

- Here is the 8-bit adder adding $42_{10} + 103_{10} = 145_{10}$. In binary, that is $00101010 + 01100111 = 10010001$:
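Here is a small Python sketch (again an addition to these notes) that chains the same full-adder logic into an 8-bit ripple-carry adder and reproduces this example:

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists (least-significant bit first); returns sum bits plus final carry."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        result.append((a ^ b) ^ carry)              # full-adder sum
        carry = (a & b) | (carry & (a ^ b))         # full-adder carry out
    return result + [carry]

def to_bits(n, width=8):
    return [(n >> i) & 1 for i in range(width)]     # LSB first

def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))

print(from_bits(ripple_add(to_bits(42), to_bits(103))))   # prints 145
```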

Digital Logic Memory
- So far, we’ve seen how to use digital logic to build simple circuits whose outputs respond directly and immediately to their inputs. This type of behavior characterizes combinational logic, one of the great ideas in computer science. These circuits are completely stateless or transient: as we change the inputs, the outputs change accordingly, with no memory of past input values.
- But another great idea in computing is memory, which is a way to store information. We can use the idea of feedback to get a circuit to remember a state, and this opens up a huge new world of computational functionality.
The SR Latch
- Let’s start by looking at one of the simplest feedback memory circuits, the SR latch. We can build one using two NOR (NOT OR) gates, whose output is the inverse of an OR gate’s output. Here is the truth table for an OR gate and a NOR gate:
A | B | A OR B | A NOR B |
---|---|---|---|
0 | 0 | 0 | 1 |
0 | 1 | 1 | 0 |
1 | 0 | 1 | 0 |
1 | 1 | 1 | 0 |
- We can use two NOR gates and “feed back” the output of each gate to the input of the other gate to create the SR latch (click on R and S to test). In the diagram, $Q$ is the output and $Q’$ is the complement (or inverse) of the output. Latches provide both the output and the inverted output to cut down on the need for more gates:
The truth table for the SR latch:
S (Set) | R (Reset) | Q (Output) | Q̅ (Complement) | Description |
---|---|---|---|---|
0 | 0 | Q | Q̅ | No change (latch holds) |
0 | 1 | 0 | 1 | Reset |
1 | 0 | 1 | 0 | Set |
1 | 1 | 0 | 0 | Invalid (both outputs low) |
- ($\bar{Q}$ is the same as $Q’$)
- Notice that the normal state for the latch is $S=0$, $R=0$. This is the “saved” state.
- If $S$ is changed to 1 (when both are 0), this “sets” the latch to 1, and $Q$ becomes 1 ($Q’$ becomes 0)
- If $R$ is changed to 1 (when both are 0), this “resets” the latch to 0, and $Q$ becomes 0 ($Q’$ becomes 1)
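To watch the feedback behavior in code, here is a minimal Python sketch (an addition to these notes) that iterates the two cross-coupled NOR gates until their outputs settle, then runs a set/hold/reset/hold sequence:

```python
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int, qbar: int) -> tuple[int, int]:
    """Iterate the cross-coupled NOR gates until the outputs stop changing."""
    while True:
        new_q, new_qbar = nor(r, qbar), nor(s, q)
        if (new_q, new_qbar) == (q, qbar):
            return q, qbar
        q, qbar = new_q, new_qbar

q, qbar = 0, 1                                   # start with the latch reset (Q = 0)
for s, r in [(1, 0), (0, 0), (0, 1), (0, 0)]:    # set, hold, reset, hold
    q, qbar = sr_latch(s, r, q, qbar)
    print(f"S={s} R={r} -> Q={q} Q'={qbar}")
```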
Using an SR latch for something a bit more interesting
- Let’s use our SR latch to turn a light on and off based on two push buttons. Below, we’ve rearranged the circuit so that we have a “Light On” button and a “Light Off” button. Notice that when one of the buttons is pressed, its input changes to 1 and the circuit updates (if we are setting or resetting), and then it goes back to 0, but the light stays either on or off:
Flip-Flops
- SR latches have one issue that makes them unsuited for some memory tasks: they update continually, based on the level (1 or 0) of their inputs. We can make this more useful by adding the idea of an “edge-triggered” device – it only updates when a particular input changes from low to high (a “rising edge”) or from high to low (a “falling edge”). Below you can see a signal that changes from low to high to low to high, and the edges are labeled:

- The diagram above can be used as a “clock” for a circuit. Every time the value rises from 0 to 1 (for example), something can be triggered in the circuit. This allows for building more interesting logical blocks.
- The logic simulator has a “D flip-flop” that is built from lower-level components:

- The way the D flip-flop works is that the value at D is propagated through to Q when the clock (the > symbol) changes from low to high. We can use this to build a “shift register” that propagates values through the system using D and toggling the clock input:
- Let’s say we want to store the 4-bit value $1011$ in this circuit. We first set D to the right-most bit (1), and then we set the shift value to 1. This puts the 1 value from D into the left-most flip-flop. It also shifts the bits already in the register to the right by one. If we do the same for the next bit (1), then the next (0), and finally the left-most bit (1), the circuit will have stored the entire 4-bit value:
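Here is a small Python sketch of that shift-register behavior (an illustration added to these notes, not the simulator’s implementation): each clock tick copies D into the left-most flip-flop and shifts the existing bits one position to the right:

```python
register = [0, 0, 0, 0]          # the four D flip-flop outputs, left to right

def shift_in(d: int) -> None:
    """One clock tick: the new bit enters on the left, everything else moves right."""
    global register
    register = [d] + register[:-1]

# Shift in 1011, starting with its right-most bit.
for bit in [1, 1, 0, 1]:
    shift_in(bit)
    print(register)
# Final state: [1, 0, 1, 1]
```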

- Here is the shift register data to try the machine in the simulator:
To load the data, click the Download button below. Then, load the circuit in the simulator using the button shown here:
{ // JSON5 v: 6, opts: {name: '4-bit Shift Register'}, components: { ff0: {type: 'ff-d', pos: [315, 210], in: '0-3', out: [4, 5], state: 1}, in0: {type: 'in', pos: [210, 190], id: 7, name: 'Input', val: 1}, ff1: {type: 'ff-d', pos: [435, 210], in: '9-12', out: [13, 14]}, ff2: {type: 'ff-d', pos: [550, 210], in: '15-18', out: [19, 20], state: 1}, ff3: {type: 'ff-d', pos: [670, 210], in: '21-24', out: [25, 26], state: 1}, out0: {type: 'out', pos: [355, 115], orient: 'n', id: 6}, out1: {type: 'out', pos: [475, 115], orient: 'n', id: 27}, out2: {type: 'out', pos: [590, 115], orient: 'n', id: 28}, out3: {type: 'out', pos: [710, 115], orient: 'n', id: 29}, in1: {type: 'in', pos: [210, 230], id: 30, name: 'Shift'}, label0: {type: 'label', pos: [460, 70], text: '4-bit Shift Register'}, }, wires: [[7, 3], [30, 0], [19, 24], [13, 18], [4, 12], [30, 9, {via: [[325, 290]]}], [30, 15, {via: [[410, 295]]}], [30, 21, {via: [[500, 295]]}], [4, 6], [25, 29], [19, 28], [13, 27]] }