[GTALUG] Running Dell branded Nvidia gtx 1060 in non-dell system

Alex Volkov alex at flamy.ca
Wed Aug 7 13:54:32 EDT 2019


Hey Hugh,

I want to add a quick note on how the tensorflow library is structured:

* tensorflow -- main logic package, uses the CPU for all computation
* tensorflow-gpu -- NVIDIA CUDA-optimized package; depends on
tensorflow and on the CUDA libraries listed at
https://www.tensorflow.org/install/gpu
* tensorflow-rocm -- AMD fork of tensorflow that depends on ROCm
packages installed at the system level.

In other words, if you want to run tensorflow models, install tensorflow:

$ pip install tensorflow

If you want to run tensorflow faster, with CUDA:

<lots of system-level package installation magic>
$ pip install tensorflow tensorflow-gpu
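
(For concreteness: on Ubuntu 18.04 that magic boils down to the NVIDIA
driver, the CUDA toolkit, and cuDNN. The lines below are illustrative
only -- package names and versions drift quickly, the page linked above
has the current ones, and the cuda and libcudnn7 packages come from
NVIDIA's own apt repository, which has to be added first:)

$ sudo apt install nvidia-driver-418    # proprietary NVIDIA driver
$ sudo apt install cuda-10-0 libcudnn7  # CUDA toolkit + cuDNN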

If you want to run tensorflow on AMD hardware:

<some system-level package installation magic>

$ pip install tensorflow-rocm
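
Whichever of these you end up with, here's a quick way to confirm what
the installed build can actually see (a minimal sketch against the TF
1.x API that's current as I write this):

$ python -c "from tensorflow.python.client import device_lib; \
    print([d.name for d in device_lib.list_local_devices()])"

With plain tensorflow you should only see /device:CPU:0; a working CUDA
or ROCm build should add a /device:GPU:0 entry.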

Going back to the comment you made previously, that you've seen
inexpensive Radeon RX 580s on Kijiji:

This all comes back to the hardware constraints I currently have -- I
have a Ryzen 5 system in a small form factor with a 160W power supply.
It could take this kind of card, but I'd need to upgrade the case and
power supply to do that, which defeats the purpose the system currently
fulfils.

The other system I have is from 4 years ago, based on the AM3+
platform, so it only has PCIe 2.0 (the ROCm documentation mentions a
PCIe 3.0 requirement), probably no exposed CRAT tables, and sketchy
IOMMU support. Just installing the card into it would likely not work,
and upgrading the system to something that supports the ROCm stack
properly would cost more than $500, whereas just getting a GTX off
Kijiji would be less than $200.
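
(If anyone wants to check their own board for the same issues before
buying anything, these generic commands are enough for a first look --
nothing ROCm-specific about them:

$ sudo lspci -vv | grep LnkCap   # 5GT/s links = PCIe 2.0, 8GT/s = PCIe 3.0
$ dmesg | grep -i -e iommu -e amd-vi   # IOMMU initialization messages

If dmesg shows nothing there, IOMMU support is likely off or missing in
the firmware.)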

Alex.


On 2019-08-07 9:54 a.m., D. Hugh Redelmeier via talk wrote:
> | From: xerofoify via talk <talk at gtalug.org>
>
> | On Tue, Aug 6, 2019 at 4:23 PM Alex Volkov via talk <talk at gtalug.org> wrote:
>
> | I don't know how much you're intending to do with that GPU or otherwise.
>
> He said he wants to do human (himself) learning about machine
> learning.  Less cute way of saying this: he wants to experiment and
> play with ML.
>
> | If you're just using Nvidia I can't help
> | you as mentioned but if you're interested in GPU workloads I was looking
> | at the AMDGPU backend for LLVM.
>
> Most people learning (or doing) ML pick a tall stack of software and
> they learn almost nothing about the underlying hardware.  I admit
> that sometimes performance issues poke their way through those levels.
>
> If I remember correctly, the base of the stack that Alex was playing
> with was Google's TensorFlow.  Of course there is stuff below that but
> the less he has to know about it the better.
>
> See Alex's discussion about getting TensorFlow to work on AMD.  If I
> understood him correctly (maybe not), the normal route to TensorFlow on
> AMD is through ROCm, and that won't work on an APU.  Too bad.
>
> My guess: even if he could run ROCm, he might hit some hangups with
> TensorFlow since the most used path to TensorFlow is Nvidia cards and
> (I think) CUDA.  It's always easier to follow a well-worn path.
>
> I, on the other hand, think I'm interested in the raw hardware.  I
> have not put any time into this but I intend to (one of many things I
> want to do).
>
> | Not sure if there is one that targets Nvidia cards but it may be of
> | interest to you as you would be able to
> | compile directly for the GPU rather than using an API to access it.
> | Not sure about Nvidia so double check
> | that.
>
> As I understand it:
>
> - LLVM targets raw AMD GPU hardware
>
> - that's probably not very useful because runtime support is needed
>    for what you could consider OS-like functionality and that isn't
>    provided.
>
>    + scheduling
>
>    + communicating with the host computer
>
>    + failure handling and diagnostics
>
> - Separately, a custom version of LLVM is used as a code generator
>    (partly(?) at runtime!) for OpenCL.  I think that AMD tries to
>    "upstream" their LLVM changes but this is never soon enough.
>
> - I think that Nvidia also has a custom LLVM but does not try to
>    upstream all of their goodies (LLVM is not copylefted).
>
> I may be wrong about much or all of this.  I would like to know an
> accurate, comprehensive, comprehensible source for this kind of
> information.
>
> | Here is the official documentation for AMD through:
> | https://llvm.org/docs/AMDGPUUsage.html
>
> Thanks.  I'll have a look.
>
> | If you're using it for machine learning it may be helpful to be aware of
> | it.
>
> You'd think so but few seem to bother.  There's enough to get one's
> head around at the higher levels of abstraction.
>
> Much ML seems to be done via cookbooks.
>
> | Hopefully that helps a little,
>
> I'd love to hear a GTALUG talk about the lower levels.  Perhaps a
> lightning talk next week would be a good place to start.
> ---
> Post to this mailing list talk at gtalug.org
> Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk



