[GTALUG] reverse engineering

D. Hugh Redelmeier hugh at mimosa.com
Sat Mar 30 13:16:08 EDT 2019


| From: James Knott via talk <talk at gtalug.org>

| When designers design custom chips, they rely on logic libraries, which
| provide common functions, including CPUs.  So the designer would choose
| a CPU, add memory and I/O and "compile" the new chip.  The libraries
| contain many historic CPUs, as they provide all the function and
| performance needed.  If the job can be done with a 4-bit CPU, such as a
| 4040, there's no need for a 64-bit CPU.  One factor that's critical in
| chip design is "real estate".  The smaller you can make the chip, the
| cheaper it will be and the higher the yield.  Several years ago, I used to
| repair point of sale terminals that had a custom chip built around a Z80
| core.

I'm not in that world, so I'm just warning you that what I'm about to
say isn't reliable.

The main costs of a core within a larger chip are probably:

- manufacturing costs

  + the larger the area, the worse

  + the more interconnects, especially off-chip, the worse

  + the more advanced the manufacturing technology, the worse
    (partly determined by clock speed)

- engineering costs

  + designing the chip

  + building the software

- licensing costs

According to the Raspberry Pi designers, the original Pi's ARM core was a
minuscule part of the die.  So small that it added essentially nothing
to the manufacturing cost.  And using such an old core meant that the
licensing fees were small too.
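
To put a rough number on the "real estate" point: a common first-order
model (the simple Poisson yield model; real fabs use fancier ones) says
die yield falls off exponentially with area:

    yield ~ e^(-D * A)

where D is the process's defect density and A is the die area.  With,
say, D = 0.1 defects/cm^2, a 1 cm^2 die yields about 90% but a 2 cm^2
die only about 82%, and the bigger die also gives you half as many
chances per wafer.  Shrinking the design pays twice.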

If a 32-bit core reduces the software costs, it probably makes sense.

If you already have 4-bit or 8-bit software that does most of the
job, or engineers with deep skills only in those old processors, that
might justify using old cores.

As a programmer who has dealt with 8-bit, 16-bit, 32-bit, and 64-bit
processors, I can tell you that each step helped.  The only negative
of the larger systems is that they invited software bloat, and that
bloat really could be a drag on productivity.  OK, there is also
hardware bloat: there usually were more complex mechanisms to
actually get to the pins.
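
To make the "each step helped" point concrete, here is roughly what
adding one 32-bit integer to another looks like on the 8-bit i8080.
A sketch from memory (untested), assuming the values are 4-byte
little-endian buffers at the made-up labels X and Y:

        LXI  H, X       ; HL points at X
        LXI  D, Y       ; DE points at Y
        MVI  B, 4       ; four bytes to add
        XRA  A          ; clear A and the carry flag
LOOP:   LDAX D          ; A = next byte of Y
        ADC  M          ; A = A + byte of X + carry
        MOV  M, A       ; store the sum byte back into X
        INX  H          ; INX and DCR don't touch carry,
        INX  D          ;   so it chains around the loop
        DCR  B
        JNZ  LOOP       ; X := X + Y when the loop falls out

On a 32-bit machine the whole loop is one ADD instruction (plus a
load and a store if the values live in memory).  Multiply that by all
the arithmetic and pointer handling in a program and you can see
where the software cost goes.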

I would guess that ARM, RISC-V, and MIPS each have small, cheap, and
useful 32-bit cores for most applications.  They can live with 16-bit
buses, and maybe even 8-bit ones.

Space-hardening is another matter.  I don't think that the space
market has been able to drive processor development, so most projects
now use select COTS (commercial off-the-shelf) parts, try to shield them, and
add redundancy.  They often use very old parts since the projects'
design cycles are so very long.  Using COTS is meant to reduce the
cycle time.

====

The 8080 and z80 had ugly architectures.  They had an excuse: they
took the design of the 4004 and 8008 and kicked it forward.
Clean-sheet designs could have been much nicer.  But the market didn't
demand that.

What made the z80 a success is that the chip got rid of some of the
ugliness of dealing with the i8080: the two-phase clock, if I remember
correctly.
Interrupts were easier to wire.  And it included some other things
that were external in the i8080.  So two or more (expensive) chips
became one in most designs.  The i8085 had most of those advantages
too.

The z80 added some instructions to the i8080's set.  As an assembler
programmer, I didn't find them compelling.  I avoided them so that my
code would be more portable.  This cost more lines of code but not
speed or code space.
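
A concrete example, again from memory so treat it as a sketch: the
z80's DJNZ folds a decrement-and-branch into one instruction, but the
i8080 subset does the same job in two, and that version runs on both
chips:

        ; z80 only:
        DJNZ LOOP       ; B = B - 1; branch to LOOP while B != 0

        ; i8080 subset (what I would write instead):
        DCR  B          ; B = B - 1, sets the Z flag
        JNZ  LOOP       ; branch while B != 0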

Some of us cut our teeth programming on these things.  Usually in
assembly language.  Many of us fall in love with the first system we
deeply understood.  So there is a generation of defenders of the z80
(and the 6502).  This is nostalgia: they are indefensible for new
designs.

