8 Great Ideas in Computer Architecture

Application of the following great ideas has accounted for much of the tremendous growth in computing capabilities over the past 50 years.

Design for Moore's Law

Gordon Moore, one of the founders of Intel, predicted in 1965 that integrated circuit resources would double every 18–24 months. This prediction has held approximately true for the past 50 years and is now known as Moore's Law.

When computer architects design or upgrade a processor, they must anticipate where the competition will be in 3–5 years, when the new processor reaches the market. Targeting the design to be just a little better than today's competition is not good enough.
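As a rough sketch of the arithmetic behind this (the 24-month doubling period and 4-year design horizon below are assumptions chosen only for illustration), the expected growth in available resources over a design cycle can be estimated as 2 raised to the power (years / doubling period):

    #include <math.h>
    #include <stdio.h>

    /* Rough projection of integrated circuit resources under Moore's Law.
       The doubling period and design horizon are illustrative assumptions. */
    int main(void) {
        double doubling_period_years = 2.0;  /* assume doubling every 24 months */
        double design_horizon_years  = 4.0;  /* assume 4 years to reach market  */
        double growth = pow(2.0, design_horizon_years / doubling_period_years);
        printf("expected resource growth over %.0f years: about %.1fx\n",
               design_horizon_years, growth);  /* prints about 4.0x */
        return 0;
    }

Under these assumptions, a design that merely matches today's competition would face chips with roughly four times the resources by the time it ships.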

Use Abstraction to Simplify Design

Abstraction uses multiple levels, with each level hiding the details of the levels below it. For example, an application programmer working in a high-level language does not need to know the processor's instruction set; an assembly language programmer does not need to know the digital logic that implements each instruction; and a logic designer does not need to know the device physics of the underlying transistors.
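A minimal sketch of these levels (the RISC-V instruction in the comment is only an illustration of what a compiler might emit; the actual output depends on the compiler and target):

    #include <stdio.h>

    /* The programmer works at the level of the statement c = a + b below.
       A compiler targeting RISC-V might translate that statement into an
       instruction such as
           add t2, t0, t1
       (assuming a and b already sit in registers t0 and t1), and the
       hardware in turn hides the digital logic that performs the addition. */
    int main(void) {
        int a = 3, b = 4;
        int c = a + b;
        printf("%d\n", c);
        return 0;
    }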

Make the Common Case Fast

The most significant improvements in computer performance come from improvements to the common case: the parts of the design where the most time is currently being spent.

This idea is sometimes called Amdahl's Law, though it is preferable to reserve that term for the mathematical law used to analyze such improvements. That mathematical law is closely related to the law of diminishing returns.
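As a sketch of that mathematical law: if a fraction f of the execution time is affected by an improvement that makes that part s times faster, the overall speedup is 1 / ((1 - f) + f / s). The values of f and s below are made-up numbers for illustration:

    #include <stdio.h>

    /* Amdahl's law: overall speedup when a fraction f of the original
       execution time is sped up by a factor of s. */
    static double amdahl_speedup(double f, double s) {
        return 1.0 / ((1.0 - f) + f / s);
    }

    int main(void) {
        /* Illustrative values: the common case is 80% of execution time
           and is made 10 times faster. */
        double f = 0.80, s = 10.0;
        printf("overall speedup: %.2fx\n", amdahl_speedup(f, s));  /* about 3.57x */

        /* Diminishing returns: even an infinite speedup of that 80%
           cannot push the overall speedup past 1 / (1 - f) = 5x. */
        printf("upper bound:     %.2fx\n", 1.0 / (1.0 - f));
        return 0;
    }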

Performance via Parallelism

Doing different parts of a task in parallel accomplishes the task in less time than doing them sequentially. A processor carries out several activities in executing an instruction; it runs faster if it can do some of those activities in parallel.
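A minimal software sketch of the idea, using two POSIX threads to sum the two halves of an array at the same time (the array size and contents are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static long long data[N];

    struct range { long start, end; long long sum; };

    /* Each thread sums its own half of the array independently. */
    static void *partial_sum(void *arg) {
        struct range *r = arg;
        r->sum = 0;
        for (long i = r->start; i < r->end; i++)
            r->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++)
            data[i] = i;

        struct range halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
        pthread_t tid[2];

        /* Work on the two halves at the same time instead of one after the other. */
        for (int t = 0; t < 2; t++)
            pthread_create(&tid[t], NULL, partial_sum, &halves[t]);
        for (int t = 0; t < 2; t++)
            pthread_join(tid[t], NULL);

        printf("total = %lld\n", halves[0].sum + halves[1].sum);
        return 0;
    }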

Performance via Pipelining

This idea is an extension of parallelism. It is essentially handling the activities involved in instruction execution as an assembly line: as soon as the first activity of an instruction is done, it moves on to the second activity while the first activity of a new instruction begins. This results in executing more instructions per unit of time than waiting for all activities of one instruction to complete before starting the next.
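A sketch of the assembly-line arithmetic, assuming an idealized 5-stage pipeline with one time unit per stage and no stalls (both assumptions are for illustration only):

    #include <stdio.h>

    /* Compare total time for n instructions with and without pipelining.
       Without pipelining each instruction waits for the previous one to
       finish all stages; with pipelining only the first instruction pays
       the full latency and each later one finishes a unit after it. */
    int main(void) {
        const long stages = 5;
        const long n = 1000;                    /* number of instructions */
        long unpipelined = stages * n;          /* 5n time units          */
        long pipelined   = stages + (n - 1);    /* 5 + (n - 1) time units */
        printf("unpipelined: %ld units\n", unpipelined);  /* 5000 */
        printf("pipelined:   %ld units\n", pipelined);    /* 1004 */
        printf("speedup:     %.2fx\n", (double)unpipelined / pipelined);
        return 0;
    }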

Performance via Prediction

A conditional branch is a type of instruction that determines the next instruction to be executed based on a condition test. Conditional branches are essential for implementing high-level language if statements and loops.

Unfortunately, conditional branches interfere with the smooth operation of a pipeline — the processor does not know where to fetch the next instruction until after the condition has been tested.

Many modern processors reduce the impact of branches with speculative execution: the processor makes an informed guess about the outcome of the condition test and starts executing instructions along the predicted path. Performance is improved if the guesses are reasonably accurate and the penalty for wrong guesses is not too severe.
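One common way of making the informed guess is a table of 2-bit saturating counters, one per branch. The sketch below shows a single such counter; it illustrates the general scheme, not the design of any particular processor:

    #include <stdbool.h>
    #include <stdio.h>

    /* A 2-bit saturating counter branch predictor. States 0-1 predict
       "not taken", states 2-3 predict "taken"; the counter moves one step
       toward each actual outcome, so a single surprise does not flip a
       strongly held prediction. */
    typedef struct { unsigned counter; } predictor;

    static bool predict(const predictor *p) {
        return p->counter >= 2;            /* predict taken in states 2 and 3 */
    }

    static void update(predictor *p, bool taken) {
        if (taken  && p->counter < 3) p->counter++;
        if (!taken && p->counter > 0) p->counter--;
    }

    int main(void) {
        predictor p = { 1 };               /* start in the weakly "not taken" state */
        /* A loop branch that is taken several times, then falls through once. */
        bool outcomes[] = { true, true, true, true, true, false };
        int correct = 0, total = sizeof outcomes / sizeof outcomes[0];

        for (int i = 0; i < total; i++) {
            if (predict(&p) == outcomes[i]) correct++;
            update(&p, outcomes[i]);
        }
        printf("correct predictions: %d of %d\n", correct, total);
        return 0;
    }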

Hierarchy of Memories

The principle of locality states that memory locations that have been accessed recently are likely to be accessed again in the near future. That is, accessing recently accessed data is the common case for memory accesses. To make this common case fast, processors use a cache: a small, high-speed memory designed to hold recently accessed data.

Modern processors use as many as three levels of cache. This is motivated by the large difference in speed between processors and main memory.
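A toy sketch of how a cache exploits this locality, using a direct-mapped cache over word addresses (the sizes and access pattern are made up; real caches store multi-byte blocks and have considerably more machinery):

    #include <stdbool.h>
    #include <stdio.h>

    /* A toy direct-mapped cache: each address maps to exactly one of NLINES
       slots, and a slot remembers which address it currently holds. Accesses
       to recently used addresses hit in the fast cache instead of going to
       slow main memory. */
    #define NLINES 8

    typedef struct { bool valid; unsigned long tag; } line;

    static line cache[NLINES];
    static int hits, misses;

    static void cache_access(unsigned long addr) {
        unsigned long index = addr % NLINES;   /* which slot this address uses  */
        unsigned long tag   = addr / NLINES;   /* identifies the address held   */
        if (cache[index].valid && cache[index].tag == tag) {
            hits++;                            /* served from the fast cache    */
        } else {
            misses++;                          /* would go to slow main memory  */
            cache[index].valid = true;
            cache[index].tag   = tag;
        }
    }

    int main(void) {
        /* Temporal locality: the same few addresses accessed over and over. */
        for (int pass = 0; pass < 100; pass++)
            for (unsigned long addr = 0; addr < 4; addr++)
                cache_access(addr);
        printf("hits: %d, misses: %d\n", hits, misses);  /* 396 hits, 4 misses */
        return 0;
    }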

Dependability via Redundancy

One of the most important ideas in data storage is the Redundant Array of Inexpensive Disks (RAID) concept. In most RAID levels, data is stored redundantly on multiple disks. The redundancy ensures that if one disk fails, the data can be recovered from the other disks.
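A sketch of the redundancy idea using the XOR parity found in some RAID levels (the block size and contents below are made up): the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8   /* bytes per block; real disks use much larger blocks */

    /* Compute the parity block as the byte-wise XOR of the data blocks. */
    static void compute_parity(unsigned char blocks[][BLOCK], int n,
                               unsigned char parity[BLOCK]) {
        memset(parity, 0, BLOCK);
        for (int d = 0; d < n; d++)
            for (int i = 0; i < BLOCK; i++)
                parity[i] ^= blocks[d][i];
    }

    int main(void) {
        /* Three data "disks" plus one parity "disk"; contents are arbitrary. */
        unsigned char disks[3][BLOCK] = { "block A", "block B", "block C" };
        unsigned char parity[BLOCK];
        compute_parity(disks, 3, parity);

        /* Simulate losing disk 1, then rebuild it from the survivors:
           lost block = parity XOR (all remaining data blocks). */
        unsigned char rebuilt[BLOCK];
        memcpy(rebuilt, parity, BLOCK);
        for (int d = 0; d < 3; d++) {
            if (d == 1) continue;              /* disk 1 is the failed one */
            for (int i = 0; i < BLOCK; i++)
                rebuilt[i] ^= disks[d][i];
        }
        printf("recovered disk 1: %s\n", (char *)rebuilt);  /* prints "block B" */
        return 0;
    }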