[GTALUG] example of why RISC was a good idea

D. Hugh Redelmeier hugh at mimosa.com
Sat May 21 16:01:03 EDT 2016


| From: James Knott <james.knott at rogers.com>

| Many years ago, I used to maintain Data General Eclipse systems.  The
| CPU used microcode to control AMD bit slice processors and associated
| logic.  The microcode instructions were over 100 bits wide.  Now
| *THAT'S* RISC.  ;-)

Technically, that was called (horizontal) microcode.

With WCS (a writable control store), a customer could sweat bullets and
perhaps get an important performance improvement.  It wasn't easy.
Perhaps that is similar to the way GPUs can be used very effectively
for some computations.

My opinions:

Microcode made sense when circuits were significantly faster than core
memory and there was no cache: several microcode instructions could
be "covered" by the time it took to fetch a word from core.
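A rough back-of-the-envelope sketch of that "covering" (the cycle
times below are assumptions for illustration, not figures from any
particular machine):

```python
# Hypothetical timings, for illustration only: core memory with a
# ~1 microsecond cycle, microcode stepping at ~200 ns per instruction.
core_fetch_ns = 1000   # assumed core memory fetch time
micro_step_ns = 200    # assumed microcode step time

# Micro-instructions that can execute during one core fetch.
covered = core_fetch_ns // micro_step_ns
print(covered)  # -> 5
```

With ratios like that, a single macro-instruction fetch hides several
microcode steps for free; once semiconductor memory and caches closed
the speed gap, that advantage evaporated.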

Microcode can still make sense but only for infrequent things or for
powerful microcode where one micro-instruction does just about all the
work of one macro-instruction.  Even with these considerations, it
tends to make the pipeline longer and thus the cost of branches higher.

The big thing about RISC was that it got rid of microcode.  At just
the right time -- when caches and semiconductor memory were coming
onstream.  Of course UNIX was required because it was the only popular
portable OS.

The idea of leaving (static) scheduling to the compiler instead of
(dynamic) scheduling in the hardware is important but not quite right.
Many things are not known until the actual operations are done.  For
example, is a memory fetch going to hit the cache or not?  I think
that this is what killed the Itanium project.  I think that both kinds
of scheduling are needed.
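A toy model of why a purely static schedule falls short (all latencies
and the hit rate are made-up illustrative numbers, not measurements of
any real CPU): the compiler schedules a dependent instruction assuming
a cache hit, and every miss produces a stall it could not have planned
for.

```python
import random

# Assumed latencies: the static schedule leaves a 2-cycle gap after a
# load (the hit latency); a miss actually takes 50 cycles.
HIT_LATENCY = 2
MISS_LATENCY = 50
SCHEDULED_GAP = 2

random.seed(0)
loads = [random.random() < 0.9 for _ in range(1000)]  # ~90% hit rate

stalls = 0
for hit in loads:
    actual = HIT_LATENCY if hit else MISS_LATENCY
    # Cycles the hardware must stall beyond what the compiler scheduled.
    stalls += max(0, actual - SCHEDULED_GAP)

print(stalls)  # total stall cycles the static schedule cannot hide
```

Only dynamic (hardware) scheduling can react to each miss as it
happens, which is the sense in which both kinds are needed.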

CISC losses: the Instruction Fetch Unit and the Instruction Decoder
are complex and potential bottlenecks (they add to pipeline stages).
CISC instruction sets live *way* past their best-before date.

RISC losses: instructions are usually less dense.  More memory is consumed.
More cache (and perhaps memory) bandwidth is consumed too.
Instruction sets are not allowed to change as quickly as the
underlying hardware, so the instruction set is not as transparent as
it should be.
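A simplified illustration of the density point (the byte counts are
typical approximations, not exact encodings for any one assembler):
incrementing a word in memory is one read-modify-write instruction on
a CISC, but a load/add/store sequence of fixed-width instructions on a
classic RISC.

```python
# Approximate, illustrative encodings: (instruction, bytes).
cisc = [("inc dword [counter]", 6)]        # one memory-operand instruction
risc = [("lw   r1, counter", 4),           # fixed 4-byte encodings
        ("addi r1, r1, 1", 4),
        ("sw   r1, counter", 4)]

print(sum(b for _, b in cisc))   # -> 6
print(sum(b for _, b in risc))   # -> 12
```

Twice the bytes means twice the instruction-cache footprint and fetch
bandwidth for that operation, which is the RISC loss described above.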

x86 almost vanquished RISC.  No RISC workstations remain.  On servers,
RISC has retreated a lot.  SPARC and Power don't seem to be growing.
But from out in left field, ARM seems to be eating x86's lunch.  Atom, x86's
champion, has been cancelled (at least as a brand).
