The main advantage of a pre-loaded OS...

Christopher Browne cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Mon Feb 28 05:09:15 UTC 2011


On Fri, Feb 25, 2011 at 8:07 PM, William O'Higgins Witteman
<william.ohiggins-H217xnMUJC0sA/PxXw9srA at public.gmane.org> wrote:
> On Fri, Feb 25, 2011 at 06:09:21PM -0500, Christopher Browne wrote:
>
>>And there's a server-relevant side to this, as well.
>>
>>Amazon's EC2 service is renting this out as a service.
>>http://aws.amazon.com/ec2/instance-types/
>>
>>Look for:
>>   "Cluster GPU Quadruple Extra Large Instance"
>>
>>What they notably include, GPU-wise, is:
>>   2 x NVIDIA Tesla “Fermi” M2050 GPUs
>>
>>They characterize this as being for "high performance rendering
>>needs," but there are likely other use cases possible, and if this
>>sort of thing takes off, it certainly offers NVIDIA an expanded market
>>in server deployments where they generally have had *no* real place
>>thus far.
>
> I read this[1] post a few months ago that mentioned using GPUs for a
> sorting algorithm, achieving sorting rates of 482 million fixed-length
> key-value pairs per second.  Being able to offload hash-based sorts of
> this kind of magnitude will become more and more useful in big-data
> situations or highly parallel processing tasks.  It's not a problem I've
> got today, but it might be one I have tomorrow.
>
> [1] http://perspectives.mvdirona.com/2010/12/16/GPGPUSorting.aspx

There has been ongoing research on this for some time.  The neat
result I've seen is that shifting sorts over to the GPU can pay off
even though there's a not-inconsiderable cost in copying the data into
place for the GPU to access.
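
For concreteness, here's a minimal sketch of that pattern (my own
illustrative code, not from the blog post) using Thrust, the C++ template
library that ships with the CUDA toolkit.  The host-to-device copies are
the "copying the data into place" cost; the sort itself runs entirely on
the card.

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Illustrative size only; the benchmarks used far larger inputs.
    const size_t n = 1 << 24;   // ~16.7 million key-value pairs

    // Generate fixed-length 32-bit keys and values in host memory.
    thrust::host_vector<uint32_t> h_keys(n), h_vals(n);
    for (size_t i = 0; i < n; ++i) {
        h_keys[i] = static_cast<uint32_t>(rand());
        h_vals[i] = static_cast<uint32_t>(i);
    }

    // The host-to-device copies are the data-movement cost.
    thrust::device_vector<uint32_t> d_keys = h_keys;
    thrust::device_vector<uint32_t> d_vals = h_vals;

    // Key-value sort executed entirely on the GPU (Thrust typically
    // dispatches to a radix sort for primitive key types).
    thrust::sort_by_key(d_keys.begin(), d_keys.end(), d_vals.begin());

    // Copy the sorted pairs back to the host if they're needed there.
    thrust::copy(d_keys.begin(), d_keys.end(), h_keys.begin());
    thrust::copy(d_vals.begin(), d_vals.end(), h_vals.begin());

    printf("smallest key after sort: %u\n", (unsigned)h_keys[0]);
    return 0;
}

Build with nvcc; timing the copies separately from the sort_by_key call
is the usual way to see how much of the total the transfer eats.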

I imagine that the NSA would be keenly interested in "lotsa GPU"
boards to build DSP analysis systems.  Applications like the infamous
Echelon require a lot of signal analysis, and this kind of hardware
should be mighty good at that.  448 cores on one board is nothing to
sniff at!

It may not be of general use, though, in the "many, many GPU cores"
case.  A sorting application won't be able to harness that many GPUs,
because the amount of work done on each byte of data that has to be
transferred from the "general purpose" side to the GPUs isn't enough
to warrant it.
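
A rough estimate, assuming 8-byte key-value pairs and a PCIe 2.0 x16
link good for something like 6-8 GB/s in practice: sorting 482 million
pairs per second means the card is chewing through roughly 3.9 GB/s of
input, so a single GPU can already sort data about as fast as the bus
can deliver it, and extra GPUs would mostly sit starved for data.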
-- 
http://linuxfinances.info/info/linuxdistributions.html
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists