[GTALUG] ARM and friends in datacenters

Alvin Starr alvin at netvel.net
Wed Jan 31 11:03:25 EST 2018

On 01/31/2018 09:59 AM, David Collier-Brown via talk wrote:
> On Tue, Jan 30, 2018 at 03:49:54PM -0500, D. Hugh Redelmeier via talk wrote:
>> Another example: ARM is just now (more than 30 years on) targeting
>> datacentres.  Interestingly, big iron has previously mostly been
>> replaced by co-ordinated hordes of x86 micros.
> There are two different cases to consider when doing data centers:
>   * uniprocessors for individual tasks or trivially parallelizable ones
>   * multiprocessors for things that aren't parallelizable
> Anybody can provide the first. The second is harder.
It's worse than that.
You also have UMA/NUMA and cache-consistency issues, and these are the 
kinds of problems facing the Intel/AMD designers now.

> Mips had three MMUs, one of which was for each of the above cases and 
> one was a trivial one for embedded, so 32-CPU Mips machines were 
> available.
> IBM and Sun spent lots of money designing backplanes that could 
> support >= 32 sockets: Sun went so far as to license a Cray design 
> when their in-house scheme failed to scale.
HyperTransport was an outgrowth of an AMD/DEC project to develop a 
processor interconnect.
It's what allowed AMD to build systems with up to 8 processors in a 
NUMA architecture while Intel was stuck with 2-processor systems 
using a shared memory bus.
Intel now has QPI, so it can put more processors in a system.
I believe the theoretical limit for HyperTransport systems was 64 
processors, but I'm not sure anybody ever built a system that big.

The big problem is that once you get that many processors in a box you 
have heat-dissipation issues, and all these interconnect buses have 
distance limitations, so there is no putting them on the backplane.

> Until and unless chip vendors spend significant time and money on MMUs 
> and backplanes, they won't have an offering in the second case, and 
> will have chosen to limit themselves to a large but limited role in 
> the datacentre.
The biggest problem with multiprocessor systems is synchronization.

For the most part, software uses some kind of memory-locking instruction 
to manage concurrency, but as more processors are added it becomes 
difficult to ensure that memory reads and writes remain atomic.

Once you get into the realm of supercomputers, you're using some kind of 
bus where you're passing data between separate processors.

Alvin Starr                   ||   land:  (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin at netvel.net              ||
