Computer Science GRE Study Guide, Digital logic design

Table of Contents

II.    Computer Organization and Architecture – 15%

A.    Digital logic design

1.      Implementation of combinational and sequential circuits

a.       Definition: Sequential circuits have memory, and the output depends on both the current input and the stored state. Combinational circuits have no memory; the output depends only on the current input.

b.      A flip-flop (or latch) is a simple memory element that we use to store a value. It typically has a data input, a clock input, and an output. When the clock is asserted, you can change the stored value (the latch is said to be open). Strictly speaking, a latch is open for the entire time the clock is asserted, while a flip-flop changes its stored value only at the clock edge.
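
The idea above can be sketched in a few lines of Python. This is a behavioral sketch of a level-sensitive D latch, not real hardware; the class and method names are invented for illustration.

```python
# A minimal sketch of a level-sensitive D latch: the stored bit can
# change only while the clock (enable) input is asserted.
class DLatch:
    def __init__(self):
        self.q = 0  # the stored value (the output)

    def update(self, d, clock):
        """While clock is 1 the latch is open and follows d;
        while clock is 0 it holds its stored value."""
        if clock:
            self.q = d
        return self.q

latch = DLatch()
latch.update(d=1, clock=1)   # latch open: stores 1
latch.update(d=0, clock=0)   # latch closed: input ignored
print(latch.q)               # still 1
```

An edge-triggered flip-flop would instead sample `d` only at the instant the clock transitions from 0 to 1.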

2.      Optimization and analysis

a.       Definition: Optimization means reducing the number of gates or the propagation delay of a circuit without changing its function, typically using Boolean algebra or Karnaugh maps. Analysis means working out what function an existing circuit computes.

B.     Processors and control units

1.      Instruction sets

a.       Definition: An instruction set is all of the machine instructions that a processor understands.

b.      An instruction set is all of the machine language instructions that the designers of a processor provide for computation. For a general-purpose computer, the instruction set should include instructions to perform arithmetic, logical operations, data transfers, conditional branches, and unconditional jumps. An individual instruction is literally made up of a certain number of bits in a specific order. The Intel x86 architecture has a huge instruction set, and its instructions can have variable lengths. Other instruction sets keep the instruction length the same for all instructions to keep the hardware simple and fast. When the processor receives an instruction, it must first decide what type of instruction it is. This can be based on the length of the instruction and/or certain flag bits in the instruction.
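
To make the "certain number of bits in a specific order" concrete, here is a sketch of decoding a hypothetical fixed-length 8-bit instruction format. The field layout and opcode names are invented for illustration and do not correspond to any real architecture.

```python
# Hypothetical 8-bit format: bits 7-6 = opcode, bits 5-3 = destination
# register, bits 2-0 = source register. This layout is made up.
OPCODES = {0b00: "ADD", 0b01: "SUB", 0b10: "LOAD", 0b11: "JUMP"}

def decode(instruction):
    opcode = (instruction >> 6) & 0b11   # top two bits select the operation
    dest   = (instruction >> 3) & 0b111  # next three bits: destination register
    src    = instruction & 0b111         # low three bits: source register
    return OPCODES[opcode], dest, src

print(decode(0b01010011))  # ('SUB', 2, 3)
```

Because every instruction is the same length here, the decoder always knows where each field starts; a variable-length format like x86 must examine the first bytes before it knows how long the rest of the instruction is.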

2.      Computer arithmetic and number representation

a.       Definition: Since computers use binary and represent numbers in a fixed number of bits, it is important to understand the limitations and processes of a system.

b.      When humans write numbers, the smallest digit is the rightmost digit. Since a bit pattern in a computer has no inherent left or right, we use the terms least significant bit and most significant bit instead.

c.       There are multiple methods to represent the sign of a given number. Sign and magnitude is the most obvious: we simply use one bit to represent positive or negative. This system has major problems, and modern computers do not use it. Now imagine a computer that did not use any sign bit, and you subtract a large number from a small one. You will borrow from a string of zeros, leaving a string of ones. This makes it natural to represent positive numbers with leading zeros and negative numbers with leading ones. We call this system two’s complement. Complementing a binary number means changing each 0 to a 1 and each 1 to a 0.
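
The leading-ones pattern is easy to see in code. This sketch encodes and decodes 8-bit two’s-complement values; the function names are mine, but the arithmetic is the standard definition.

```python
# Two's-complement encoding in a fixed width (8 bits by default).
def to_twos_complement(value, bits=8):
    """Encode a signed integer as an unsigned bit pattern."""
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=8):
    """Decode: if the most significant bit is 1, the value is negative."""
    if pattern & (1 << (bits - 1)):
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-5), '08b'))  # 11111011: leading ones
print(format(to_twos_complement(5), '08b'))   # 00000101: leading zeros
```

Notice that negative numbers really do come out as a string of leading ones, exactly as the borrowing argument predicts.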

d.      One’s complement is not used very often. To negate a number, start with the positive version and then complement every bit. This system causes some unusual problems, like operating on negative zero.
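
The negative-zero oddity is easy to demonstrate. This is a minimal sketch, assuming 8-bit patterns; the function name is mine.

```python
# One's-complement negation just flips every bit. In 8 bits this gives
# two distinct representations of zero, one of the system's oddities.
def ones_complement_negate(pattern, bits=8):
    return pattern ^ ((1 << bits) - 1)  # flip every bit

plus_zero  = 0b00000000
minus_zero = ones_complement_negate(plus_zero)
print(format(minus_zero, '08b'))  # 11111111: "negative zero"
```

Hardware using one’s complement has to treat 00000000 and 11111111 as equal, which complicates comparisons.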

e.       Computer addition works a lot like addition with pen and paper. The amount that is carried to the next operation is called the carry out. The amount carried from the previous operation is called the carry in. We use an end-around carry when performing some addition functions on one’s complement numbers, but not two’s complement numbers.
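
The end-around carry can be sketched as follows, assuming 8-bit one’s-complement operands; the function name is mine.

```python
# One's-complement addition with end-around carry: if the addition
# carries out of the top bit, that carry is added back into the least
# significant bit. Two's-complement addition simply discards the carry out.
def ones_complement_add(a, b, bits=8):
    mask = (1 << bits) - 1
    total = a + b
    if total > mask:                   # a carry out of the top bit occurred
        total = (total & mask) + 1     # end-around carry
    return total & mask

# In 8-bit one's complement, -1 is 11111110:
print(ones_complement_add(0b00000011, 0b11111110))  # 2, i.e. 3 + (-1)
```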

f.       If you add two positive numbers together and get a negative result (or add two negative numbers and get a positive result), then overflow has occurred.
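
This rule translates directly into an overflow check. A sketch, assuming 8-bit two’s-complement operands; the function name is mine.

```python
# Signed overflow in fixed-width addition: the operands have the same
# sign bit, but the result's sign bit differs.
def add_with_overflow(a, b, bits=8):
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    result = (a + b) & mask
    overflow = (a & sign) == (b & sign) and (result & sign) != (a & sign)
    return result, overflow

# 100 + 100 = 200, which cannot fit in the signed 8-bit range -128..127:
print(add_with_overflow(100, 100))  # (200, True): the pattern reads as -56
```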

3.      Register and ALU organization

a.       Definition: Registers are extremely fast memory inside the processor, used to hold operands and results during the calculations performed by the instructions.

b.      Definition: The arithmetic logic unit (ALU) is the part of the processor that actually performs addition, multiplication, and Boolean operations.

c.       Caveat: I am weak in the area of ALU and processor architecture. I find it boring and plan to skip any questions about this on the test. I am guessing that as many as three questions will relate to an understanding of exactly how instructions are processed. I recommend you look over your computer architecture texts.

4.      Data paths and control sequencing

a.       Definition: A processor has two general parts: the data path and the control. The data path moves the information around the processor and performs calculations; the control directs the actions of the data path.

b.      We implement job-scheduling queues at many levels in the machine. The simplest and most obvious is FIFO (first in/first out), which processes the oldest job first. LIFO is the opposite; it processes the youngest job first. Priority queuing processes the job with the highest priority first. You can also process the shortest job first; shortest job first produces the smallest average turnaround time. Lastly, you can force the jobs to share the system resources by using round-robin, which uses an interval timer to suspend the running program and allow others to run.
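
The shortest-job-first claim is easy to check numerically. A sketch comparing FIFO order with SJF order on the same jobs; the job lengths are arbitrary, and all jobs are assumed to arrive at time 0.

```python
# Turnaround time = completion time - arrival time (arrival = 0 here).
def average_turnaround(job_lengths):
    elapsed, total = 0, 0
    for length in job_lengths:
        elapsed += length   # this job finishes at time `elapsed`
        total += elapsed
    return total / len(job_lengths)

jobs = [6, 3, 1]
print(average_turnaround(jobs))          # FIFO order: (6 + 9 + 10) / 3
print(average_turnaround(sorted(jobs)))  # SJF order:  (1 + 4 + 10) / 3
```

Running the short jobs first gets them out of the queue quickly, so fewer jobs wait behind the long one, which is why SJF minimizes the average.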
