[GTALUG] ARM and friends in datacenters
davec-b at rogers.com
Wed Jan 31 11:35:23 EST 2018
On 31/01/18 11:22 AM, Lennart Sorensen wrote:
> On Wed, Jan 31, 2018 at 09:59:28AM -0500, David Collier-Brown via talk wrote:
>> There are two different cases to consider when doing data centers:
>> * uniprocessors for individual tasks or trivially parallelizable ones
>> * multiprocessors for things that aren't parallelizable
>> Anybody can provide the first. The second is harder.
>> MIPS had three MMU designs: one for each of the above cases, plus a
>> trivial one for embedded use, so 32-CPU MIPS machines were available.
>> IBM and Sun spent lots of money designing backplanes that could support >=
>> 32 sockets: Sun went so far as to license a Cray design when their in-house
>> scheme failed to scale.
>> Until and unless chip vendors spend significant time and money on MMUs and
>> backplanes, they won't have an offering in the second case, and will have
>> chosen to limit themselves to a large but limited role in the datacentre.
> Well at least for ARM, you have Qualcomm and Cavium offering 48-core
> CPUs in two-socket configurations, so 96 cores in one machine. That's not a bad start.
> Now as to whether you can actually buy any of those stupid boxes unless
> you are a cloud provider or Google or something, who knows.
> Well it seems maybe you actually can:
Yes: it's way easier if the interconnects are /inside/ the chip!
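The scaling trade-off in the thread (one big-core box versus many small cores) can be sketched with Amdahl's law. This is a hypothetical illustration, not something from the discussion; the 95% parallel fraction is an assumed number chosen just to show the shape of the curve:

```python
# Amdahl's law: speedup on n cores when fraction p of the work is parallelizable.
def speedup(p, n):
    return 1.0 / ((1 - p) + p / n)

# Even with an (assumed) 95% parallel workload, doubling 48 cores to 96
# buys relatively little -- the serial fraction dominates:
for n in (1, 48, 96):
    print(n, "cores:", round(speedup(0.95, n), 1), "x")
```

Which is one way to see why the second case above (jobs that aren't trivially parallelizable) is where the MMU and backplane engineering actually pays off.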
I notice that the T4 and T5 offerings from Sun concentrated on doing the
most you could in one chunk of silicon, and internally those devices
greatly resembled a "radial" mainframe from the IBM/Honeywell/Sperry/CDC era.
What goes around, comes around (;-))
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb at spamcop.net | -- Mark Twain