[GTALUG] life expectancy of 32-bit x86 [was Re: Fedora Netinstall] [long]

D. Hugh Redelmeier hugh at mimosa.com
Sun Feb 11 13:06:21 EST 2018


| From: Bob Jonkman via talk <talk at gtalug.org>

| I'm already using IceCat, so the browser isn't my problem. But the
| lack of 32-bit Chrome is the thin edge of the wedge. There will be
| other packages that will no longer be distributed for 32-bit
| architecture. Then what?
| 
| But I guess we're not using 8-bit and 16-bit CPUs any more either.

The differences between 8, 16, and 32 bits were serious, and a
programmer had to be aware of them.  The 64K limits would impinge
fairly often.

The difference between 32 and 64 bits is important, but only in a
small number of cases.  Handling a lot of memory is best done with
pointers that are wide enough.

There are two kinds of pointers: ones that hold a physical address (to
real memory) (used by the OS), and ones that hold a virtual address
(used by the OS and by userland programs).

- on x86, the PAE feature extends physical addressing from 32 to 36
  bits, allowing the OS to use more than the ~3.5G of RAM otherwise
  usable (4G minus the MMIO hole).  I don't know about other 32-bit
  architectures.

  (Early Atom processors only sent 32 bits' worth of address lines
  off-chip, even though they had PAE.  I have an Atom desktop PC with
  4G of RAM that can only use 3.5G of it.  I consider this an example
  of monopolistic behaviour.  It was not easy to find this limitation
  documented.)

- on 32-bit machines, Linux processes cannot easily use more than 3G
  of address space.  If you want more, you write code to manage your
  address space, but that is intrusive and intricate.  Why not just go
  to 64-bit instead?

- irrelevant aside: many 64-bit ARM SoCs don't support more than 2G
  or 3G of RAM.  This seems crazy to me since the first use-case of
  64-bit is to support wider pointers.

- wider external data buses improve performance, but they are not
  tied to the instruction-set data width.  x86 buses were 64 bits
  wide long before AMD64.  I would guess that most ARM SoCs use
  16-bit data buses, whether they have 32- or 64-bit instruction
  sets.

- when you switch from 32-bit to 64-bit, programs require more
  memory, both for object code and for data.  The x86 => x86_64
  bloat was remarkably modest.  For ARM it seems to be a lot worse.
  On SPARC and Power, I understand that almost all userland code is
  still 32-bit, probably for this reason.

  If the penalty is significant, it makes sense to keep most programs
  32-bit.  Most standard UNIX utilities were coded in a 16-bit world,
  so 32 bits should not cramp their style.

- most programmers don't think enough about overflow.  And only a few
  programming languages help.  If you programmed much for 16-bit
  machines, you do think more about overflow.  On 64-bit machines, few
  things will overflow.  Summary: 64-bit machines are more forgiving
  of sloppy programming.

What really needs more than 3G of address space?

- programs that map the whole of a very large file into the address
  space

- programs that manage scads of buffering.  Perhaps database programs
  dealing with large databases at high speed.

- programs that grew very very very large.  Or problems that grew very
  very very large.

  It seems inexcusable that browsers are starting to tick these boxes.

Almost NO 32-bit x86 chips are in current production.  I think that
Intel has some goofy SoCs for IoT applications that are limited to
32-bit but they really don't matter.

So: I don't expect that we're going to see many programs that will
stop supporting 32-bit.  A greater risk is that 32-bit ports will
become less tested.  That may reduce reliability.

Some distros are surely going to drop 32-bit soon.  I would imagine
that Debian won't be one of them.

In the Microsoft world, 32-bit could be turned off at any time, at the
whim of Microsoft.  It costs them a fair bit to support 32-bit SKUs.

