Droid issues - Legacy Bash IFS var clobbering VLIW offset?
James Knott
james.knott-bJEeYj9oJeDQT0dZR+AlfA at public.gmane.org
Fri Nov 25 22:03:42 UTC 2011
Lennart Sorensen wrote:
>> What Intel*REALLY* did wrong...
>> >
>> > * The original 8080 was a 16-bit processor that addressed 65,536 bytes
>> >
> > * The 8086 (and the similar 8088) was the next version. To increase the
>> > addressable space, they used a 16-bit base register and a 16-bit offset
>> > register. The real stupidity was that the address was calculated as...
>> >
>> > (16 * base_register) + offset register
>>
> Wasn't the 8085 in between?
>
>
The 8085 was just an improved 8080, with a couple more instructions and
better integration. However, the 8080 was not a 16-bit CPU. It was an
8-bit CPU that had a few 16-bit instructions. Its 16-bit arithmetic
instructions were very limited and mainly for memory access. All the
regular arithmetic and logic instructions were 8 bit. The 8088 & 8086
were 16-bit CPUs, capable of 16-bit arithmetic and logic operations.
Both relied on the optional 8087 math co-processor for floating point;
otherwise that math had to be done in software.
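The quoted formula, (16 * base_register) + offset, can be sketched in a few
lines of Python. Multiplying the 16-bit segment by 16 (a 4-bit shift) yields a
20-bit physical address, so many different segment:offset pairs alias the same
byte; the 1 MiB wraparound mask below reflects the original 8086 without an
A20 gate (function name and details here are illustrative, not from any
particular emulator):

```python
def physical_address(segment: int, offset: int) -> int:
    """20-bit 8086 real-mode address: 16 * segment + offset, wrapping at 1 MiB."""
    return ((segment << 4) + offset) & 0xFFFFF  # mask models the missing A20 line

# Many segment:offset pairs alias the same physical byte:
assert physical_address(0x1234, 0x0005) == physical_address(0x1000, 0x2345)

# The top of memory wraps: 0xFFFF:0xFFFF lands back near the bottom.
print(hex(physical_address(0xFFFF, 0xFFFF)))  # 0xffef
```

The aliasing is the "real stupidity" complained about above: the segment only
shifts the window by 16 bytes at a time, so overlapping segments made pointer
comparison and memory models (tiny, small, huge, etc.) a lasting headache.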
> Certainly expanding from 16 to 20 to 24 to 32 bits of address space with
> each generation was pretty stupid. How long until you start to see a
> trend there and do it right?
>
> It's not like they had to add all the external address pins right away
>
One thing they wanted to do was to make it easy to port software from
the 8080/8085. Also, the technology of the day wasn't anywhere near
where it is now, and adding more functionality would have meant more
complex chips. They'd also be larger, which makes them more expensive.
BTW, my first computer was an 8080 powered IMSAI 8080.
https://secure.wikimedia.org/wikipedia/en/wiki/IMSAI_8080
> if they had gone to say 24 (or 28) bit right from 16.
>
--
The Toronto Linux Users Group. Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists