[GTALUG] How to go fast without speculating... maybe
lsorense at csclub.uwaterloo.ca
Tue Jan 30 17:06:32 EST 2018
On Tue, Jan 30, 2018 at 03:49:54PM -0500, D. Hugh Redelmeier via talk wrote:
> GPUs are extremely parallel by CPU standards. And they certainly are
> getting traction.
> This shows that you may have to grow up in a niche before you can
> expand into a big competitive market.
> - GPUs were useful as GPUs and evolved through many generations
> - only then did folks try to use them for more-general-purpose
> Downside: GPUs didn't have a bunch of things that we take for granted
> in CPUs. Those are gradually being added.
> Another example: ARM is just now (more than 30 years on) targeting
> datacentres. Interestingly, big iron has previously mostly been
> replaced by co-ordinated hordes of x86 micros.
> Yeah, and many tried Gallium Arsenide too. That didn't work out,
> probably due to the mature expertise in CMOS. I guess you could say
> it was also due to energy efficiency being more important than speed
> (as things get faster, they get hotter, and even power-sipping CMOS
> reached the limit of cooling).
> The techniques of designing and debugging asynchronous circuits are
> not as well-developed as those for synchronous designs. That being
> said, clock distribution in a modern CPU is apparently a large
> Actually, this points to an opening.
> Historically, Intel has been a node or so ahead of all other silicon
> fabs. This meant that their processors were a year or two ahead of
> everyone else on the curve of Moore's law.
> That meant that even when RISC was ahead of x86, the advantage was
> precarious. Eventually, the vendors threw in the towel: lots of risk
> with large engineering costs and relatively low payoffs. Some went to
> the promise of Itanium (SGI, HP (who had eaten Apollo), and Compaq (who
> had eaten DEC)). Power motors on but has shrunk (losing game
> machines, automotive (I think), desktops and laptops (Apple), and
> workstations). SPARC is barely walking dead except for contractual
> But now, with Moore's Law fading for a few years, maybe smaller
> efficiency gains start to count. RISC might be worth reviving. But
> the number of transistors on a die means that the saving by making a
> processor core smaller doesn't count for a lot. Unless you multiply it by
> a considerable constant: many processors on the same die.
Well ARM is RISC, so I am not sure it needs reviving. Seems to be
doing just fine. MIPS is RISC too, and quite a few routers and such
run that too. SGI didn't quite manage to kill that after all.
For that matter, modern x86 chips are internally essentially RISC
chips with an x86 instruction translator on top.
> The Sun T series looked very interesting to me when it came out. It
> looked to me as if the market didn't take note. Perhaps too many had
> already written Sun off -- at least to the extent of using their
> hardware for new purposes. Also Sun's cost structure for marketing
> and sales was probably a big drag.
Most developers were totally unprepared for parallel computing at
the time. So most people couldn't write software to take advantage of it.
> That hurt RISC, but the vendors knew that they were limited to
> organizations that used UNIX and could recompile all their
> applications. The vendors did try to broaden this but, among others,
> Microsoft really screwed them. Microsoft promised ports to pretty
> much all RISCs but failed to deliver with credible support on any.
Well at least they are now starting to support Windows on ARM. Maybe this
time it will survive.
> Even AMD's 64-bit architecture was screwed by Microsoft. Reasonable
> 64-bit Windows was promised to AMD for when they shipped (i.e. before
> Intel shipped) but 64-bit Windows didn't show up within the useful
> lifetime of the first AMD 64-bit chips.
Well, Microsoft was the one that told Intel that they would only support
one 64-bit x86 design, and they were already supporting AMD's design,
so Intel had better not try to invent their own incompatible version.
Probably after all the wasted time on Itanium, Microsoft was not in the
mood for Intel to invent yet another architecture.
> A lot of software, by cycles consumed, can use parallelism.
> The Sun T series was likely very useful for running Web front-ends,
> something that is embarrassingly parallel.
Sure, anything with lots of independent jobs for lots of users works well.
So for servers they made good sense.
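To illustrate, a minimal sketch of that kind of independent-jobs workload (Python just for brevity; handle_request is a made-up stand-in for real request handling):

```python
# Embarrassingly parallel: each request touches only its own data, so a
# worker pool scales with no coordination beyond handing out the jobs.
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Hypothetical per-user work; no shared state between requests.
    return f"page for user {user_id}"

with ThreadPoolExecutor(max_workers=8) as pool:
    # map() preserves input order in its results.
    pages = list(pool.map(handle_request, range(100)))
```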
> Data mining folks seem to have used map/reduce and the like to allow
> parallel processing.
I think that is a more recent development.
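The map/reduce idea itself is tiny, though. A hedged sketch (made-up word-count example) of why it parallelizes so well: every map call is independent, and the reduce step just merges partial results:

```python
from functools import reduce

def map_count(chunk):
    # Count words in one chunk; needs no data from any other chunk,
    # so these calls could all run on different machines.
    counts = {}
    for word in chunk.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def reduce_counts(a, b):
    # Merge two partial counts into one.
    for k, v in b.items():
        a[k] = a.get(k, 0) + v
    return a

chunks = ["the cat sat", "the dog sat", "the cat ran"]
partial = map(map_count, chunks)
total = reduce(reduce_counts, partial, {})
```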
> GPUs grew up working on problems that are naturally parallel.
> What isn't easy to do in parallel is a program written in our normal
> programming languages: C / C++ / JAVA / FORTRAN. Each has had
> parallelism bolted on in a way that is not natural to use.
I still remember people coming on IRC asking for help setting up
Beowulf clusters on the 4 computers at their house. As soon as you told them
it wouldn't make firefox go faster they lost interest. Apparently if
it only made custom software with special communication run faster it
stopped being interesting. :)
> No. There are very few licenses to produce x86 processors. Intel,
> AMD, IBM, and a very few others that were inherited from dead
> companies. For example, I think Cyrix (remember them?) counted on
> using IBM's license through using IBM's fab (IBM no longer has a fab).
> I don't remember how NCR and Via got licenses. AMD's license is the
> clearest and Intel tried to revoke it -- what a fight!
Cyrix/VIA/National Semi/Transmeta/whoever. Yeah, not so many left anymore.
> RISC-V looks interesting.
I am not sure it has much real benefit over ARM, so what chance does it
have of going anywhere? Would be great if I was wrong though.
> It's not clear whether this matters much. It matters for workstations
> but that isn't really a contested space any longer. Even though you
> and I care.
> In retrospect, we all know what they should have done. But would that
> have worked? Similar example: Nokia and BlackBerry were in similar
> holes and tried different ways out but neither worked.
> Power was widely adopted (see above).
> The Alpha was elegant. DEC tried to build big expensive systems.
> This disappointed many TLUGers (as we were then known) because that's
> not what we'd dream of buying. Their engineering choices were the opposite
> of: push out a million cheap systems to drive forward on the learning
> curves. HP was one of the sources of the Itanium design and so when
> they got Compaq which had gotten DEC, it was natural to switch to Itanium.
DEC screwed up pricing because they didn't want to hurt VAX sales.
Too bad their competitors didn't mind hurting VAX sales. They would
price the Alpha CPU sanely and then try to charge like $1000 for the
chipset which was very similar to a standard Intel PC chipset. Very few
people wanted to pay that. It could have been big.
> (Several TLUGers had Alpha systems. The cheapest were pathetically
> worse than PCs of the time (DEC crippled them so as not to compete
> with their more expensive boxes). The larger ones were acquired after they
> were obsolescent. Lennart may still have some.)
Yeah I have a few sheep. :)
I have a few MIPS based SGIs too.
As for pathetic, many of the Multias easily outran PCs at the time.
Remember, a current Intel chip at the time was a 100MHz Pentium. The Multia
was a 166MHz or faster Alpha. The problem was people were running Windows
NT and trying to run x86 code on it using the instruction emulator.
Of course that was not going to perform well. Those that ran Linux on
them saw the actual performance.
> Itanium died for different reasons.
> - apparently too ambitious about what compilers could do (static
> scheduling). I'd quibble with this.
Amazing that Intel made that mistake again (it wasn't the first time
they made that exact mistake).
> - Intel never manufactured Itanium on the latest node. So it always
> lost some speed compared with x86. Why did they do this? I think
> that it was that Innovators Dilemma stuff. The x86 fight with AMD
> was existential and Itanium wasn't as important to them.
It costs money to move a design to a newer node for small performance
gains. The Itanium never had enough customers or demand to justify that
cost. I don't think that
had any real impact on its popularity at all.
> - customers took a wait and see attitude. As did Microsoft.
> No, SGI switched horses. Itanium and, later, x86.
> MIPS just seemed lucky to fall into the controller business, but it
> seems lost now. Replaced by ARM.
> Fun fact: Some older Scientific Atlanta / Cisco Set Top Boxes for
> cable use SPARC. Some XEROX copiers did too.
Kodak had some printers that had SPARCs too.
> Right. Since power matters so much in the datacentre, lots of
> companies are trying to build suitable ARM systems. Progress is
> surprisingly slow. AMD is even one of these ARM-for-datacentre
> Interesting hopefuls include:
> - GPUs
> - FPGAs stuck on motherboards (e.g. Intel can fit (Xilinx?) FPGAs in a
> processor socket of a multi-socket server motherboard).
Intel would probably be putting in Altera FPGAs these days. Originally
someone was putting FPGAs in AMD Opteron sockets.
> - neural net accelerators.
> - The Mill (dark horse)
> - quantum computers
> - wafer-scale integration
> You only have to be as secure as "best practices" within your
> industry. Otherwise Windows would have died a generation ago.
Or just ride the wave of demand for backwards compatibility.
> There are security-verified processors for the military. Expensive
> and obsolete by our standards.
> Not enough customers are willing to pay even the first price for security:
> simplicity. That's before we even get to the inconvenience issues.
> Security does not come naturally. Today's Globe and Mail reported: