[GTALUG] GPU Programming Questions or Figuring out the GPU Space Ideas

Nicholas Krause xerofoify at gmail.com
Wed Jun 10 18:29:56 EDT 2020


Greetings,

Since GPU programming keeps coming up, it may be worth trying one of three
things. I can help out with two of them, but with the third I'm unfortunately
of little help. Sorry if this is rather long.

The ones I can help with are:
1. I have a contact on the LLVM backend/Mesa team, in Germany if I
recall the location correctly, who can answer questions (depending on what
they are) from the AMD, LLVM, and Mesa sides. If people
give me a list of questions, I can try to get 2-5 of them answered.
2. In my notes from the LLVM talk in December I mentioned what a backend
is in a compiler: in general, it is the part that does the code generation
and CPU/GPU-specific optimizations for each target. LLVM has a backend for
AMD GPUs called AMDGPU. I can poke around in it and see if there is anything
interesting in how it differs from the standard backends for
microprocessors. From memory, GPUs execute kernels in hardware, and the
ISA docs linked in the etherpad explain what those are. I'm not sure
what optimizations the backend makes around them. This may
give people an idea of what is entailed in getting out something like
SYCL or the other GPU programming libraries, implementations, and
frameworks, not to mention some of the challenges involved.
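As a rough illustration of the "kernel" idea mentioned above (this is plain Python standing in for GPU code, not anything a real backend would compile): a kernel is a function the hardware runs once per thread index over a whole grid, and the launch loop below is a stand-in for the GPU scheduling thousands of those threads in parallel. The function names here are made up for the sketch.

```python
def saxpy_kernel(i, a, x, y, out):
    # Kernel body: each "thread" handles exactly one element i.
    # On a real GPU these invocations run in parallel across the grid.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a GPU kernel launch: walk the grid sequentially
    # where the hardware would dispatch n threads at once.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The backend's job, very roughly, is mapping that per-thread function onto the GPU's actual execution units, which is where the target-specific optimizations come in.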

The other option, which may be of more help, is to find someone locally
in the GTA who works directly on the LLVM GPU backends or on Mesa. They
would be much better than me at describing the current state of the GPU
programming space. Unfortunately I'm of little help here, but
that's who I would try to find locally. Given the fragmentation issues,
that may be the best idea, as all of the GPU programming implementations
seem to be in, or mostly focused on, one of three areas:
1. Pipeline optimizations in rendering, i.e. Vulkan or Metal: mostly video
games or other things that want stable pipelines rather than ones that
are dynamic in nature, for performance reasons.
2. AI or other HPC computing: SYCL, CUDA, and maybe
TensorFlow are here.
3. GPU offloading, in terms of second-tier specific computation.

Sometimes they overlap. Frankly, I'm not sure a) what people want
and b) which of these areas will be the most useful going forward, or
the most widely implemented, since I last checked.


Maybe that helps people figure out the GPU space a little better,
as it has always seemed to me more complex than it really
had to be, whether due to fragmentation or to other things, as Hugh
pointed out.
Nick
-- 
Fundamentally an organism has conscious mental states if and only if 
there is something that it is like to be that organism--something it is 
like for the organism. - Thomas Nagel
