[GTALUG] NVidia GTX 1080 announcement

D. Hugh Redelmeier hugh at mimosa.com
Sat May 7 11:52:20 EDT 2016


| From: William Park <opengeometry at yahoo.ca>

| I only caught the last 30min of the live stream.  Made me wonder... Is
| there a way to use its 2560 GPU cores as a substitute for the usual 4 CPU
| cores?

Yes, kind of.

High-end GPUs have evolved towards providing computing power for
non-video problems.  But they are kind of horrible and odd to program.

That's what CUDA and OpenCL are all about: somewhat high-level
programming models that can be used to program these monsters.

These have massive parallelism, awkward memory resources, few separate
instruction streams, and idiosyncratic instruction sets.
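
To give a flavour of what programming one looks like, here is a minimal
CUDA sketch (my own toy example, not from NVidia's documentation) that
adds two large arrays.  You write one small function, the "kernel", and
the hardware runs it across thousands of threads at once:

#include <cuda_runtime.h>
#include <stdio.h>

/* Each GPU thread handles one array element; thousands of threads
   execute this same kernel in parallel. */
__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;          /* a million floats */
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    /* Unified memory keeps the example short; real code often manages
       host and device buffers separately. */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);    /* expect 3.0 */
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compile with nvcc.  OpenCL expresses the same idea with more
boilerplate, but it works across vendors.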

Right now the hottest application seems to be deep neural nets.
Neural nets tend to require lots of floating-point array work,
something well suited to GPUs.
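
For instance, one layer of a simple neural net is essentially
y = relu(W * x), a matrix-vector product followed by a cheap
nonlinearity.  A naive (unoptimized) CUDA kernel for that might look
like the following; the host-side launch follows the same pattern as
the earlier example:

/* Naive dense layer: y = relu(W * x).
   W is rows x cols (row-major), x has cols entries, y has rows entries.
   One GPU thread computes one output element. */
__global__ void dense_layer(const float *W, const float *x, float *y,
                            int rows, int cols)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < rows) {
        float sum = 0.0f;
        for (int c = 0; c < cols; c++)
            sum += W[r * cols + c] * x[c];
        y[r] = sum > 0.0f ? sum : 0.0f;   /* ReLU */
    }
}

Real deep-learning libraries use much cleverer tiled and batched
kernels (or NVidia's cuBLAS/cuDNN), but the shape of the work is the
same: enormous amounts of independent floating-point arithmetic.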

There is a meetup group in town that is about GPU computing
	<http://www.meetup.com/GPU-Programming-in-Toronto/>

I haven't read it, but this article might be useful
	<https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units>

PS: AMD is supposed to be close to announcing its new offerings.  This
new generation from NVidia and AMD is the first real progress in a few
years.  Previous announcements have been only incremental changes because
both NVidia and AMD were held up by process-shrink difficulties at the
silicon fabs.

I have a soft spot for AMD since it tries hard to support open source and
it is local.
