[GTALUG] Running Dell branded Nvidia gtx 1060 in non-dell system

xerofoify xerofoify at gmail.com
Tue Aug 6 20:09:08 EDT 2019


On Tue, Aug 6, 2019 at 4:23 PM Alex Volkov via talk <talk at gtalug.org> wrote:
>
> Yes.
>
> Unfortunately I went through the debugging process before I got to the paragraph. On the upside, I think I got a lightning talk out of it, which I'll try to present at the next meeting.
>
> Alex.
>
Alex,
I don't know how much you're intending to do with that GPU or
otherwise. If you're just using Nvidia I can't help you as mentioned,
but if you're interested in GPU workloads, I was looking at the AMDGPU
backend for LLVM. I'm not sure whether there is an equivalent backend
that targets Nvidia cards, but it may be of interest to you, since you
would be able to compile directly for the GPU rather than using an API
to access it. Again, I'm not sure about Nvidia, so double-check that.

Here is the official documentation for AMD, though:
https://llvm.org/docs/AMDGPUUsage.html

If you're using it for machine learning it may be helpful to be aware
of, as you could compile the libraries for the GPU target, where
possible, rather than access the card indirectly through the CPU.
Again, I'm not sure which libraries support this, but it should work
for most of the popular ones, and it may increase throughput a lot
since it's direct assembly for the card rather than an abstraction.

As for GPU memory, that may be an issue as Hugh mentioned, depending
on the size of the workload. I don't think it would matter for your
tutorials, but going across the PCI bus is about as bad as cache
misses are for CPUs, so it's best to avoid it if possible. If you were
able to find a 6GB version, that would be more than enough for most
workloads short of professional ones. 1060s shipped with either 3GB or
6GB, so that may be something to check on the card you ordered. At
retail I recall the difference being about 30-50 Canadian dollars, and
for double the RAM it was a good deal at the time if you bought one.
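To put rough numbers on that bus-crossing cost, here is a minimal
back-of-envelope sketch in Python. The sustained 12 GB/s figure for
PCIe 3.0 x16 and the 256 MiB batch size are my own illustrative
assumptions, not measurements from any particular card:

```python
# Back-of-envelope estimate of host-to-GPU copy cost over the PCIe bus.
# Assumed numbers (hypothetical): PCIe 3.0 x16 sustains roughly 12 GB/s
# in practice; a GTX 1060 ships with either 3 GB or 6 GB of VRAM.

GIB = 1024 ** 3

def transfer_ms(n_bytes, bus_gb_per_s=12.0):
    """One-way copy time across the PCIe bus, in milliseconds."""
    return n_bytes / (bus_gb_per_s * 1e9) * 1e3

def fits_in_vram(n_bytes, vram_gib):
    """Whether a working set fits entirely on the card."""
    return n_bytes <= vram_gib * GIB

batch = 256 * 1024 * 1024                       # a 256 MiB chunk of data
print(f"{transfer_ms(batch):.1f} ms per copy")  # ~22 ms each way
print(fits_in_vram(4 * GIB, 3), fits_in_vram(4 * GIB, 6))
```

A workload that has to stream every batch across the bus pays tens of
milliseconds per copy in each direction, which is why keeping the
working set resident in VRAM (and preferring the 6GB card over the
3GB one) matters.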

Hopefully that helps a little,

Nick

P.S. I'm not certain, but I'm assuming there is an equivalent backend
for gcc as well, if you would prefer that for your development or
learning.

> On 2019-08-05 11:12 a.m., D. Hugh Redelmeier via talk wrote:
>
> | From: Alex Volkov via talk <talk at gtalug.org>
>
> | I have another system with Ryzen 5 2400G and was hoping to run ROCm on it, but
> | as it turns out -- ROCm doesn't fully support AMD cards with built-in
> | graphics. I still can install a discrete card into that system but the solution
> | is not as cheap as getting a used GTX off craigslist.
>
> In January I saw cheap Radeon RX 580s on Kijiji too.  I haven't looked
> recently.
>
> One advantage of AMD over nvidia is that larger memories are more common.
>
> It's a shame about ROCm's lack of APU support.  Parts of it are there.
>
> <https://rocm.github.io/hardware.html>
>
>     The iGPU in AMD APUs
>
>     The following APUs are not fully supported by the ROCm stack.
>
> “Carrizo” and “Bristol Ridge” APUs
> “Raven Ridge” APUs
>
>     These APUs are enabled in the upstream Linux kernel drivers and the
>     ROCm Thunk. Support for these APUs is enabled in the ROCm OpenCL
>     runtime. However, support for them is not enabled in our HCC compiler,
>     HIP, or the ROCm libraries. In addition, because ROCm is currently
>     focused on discrete GPUs, AMD does not make any claims of continued
>     support in the ROCm stack for these integrated GPUs.
>
>     In addition, these APUs may not work due to OEM and ODM choices
>     when it comes to key configuration parameters such as inclusion of
>     the required CRAT tables and IOMMU configuration parameters in the
>     system BIOS. As such, APU-based laptops, all-in-one systems, and
>     desktop motherboards may not be properly detected by the ROCm drivers.
>     You should check with your system vendor to see if these options are
>     available before attempting to use an APU-based system with ROCm.
>
>
> ---
> Post to this mailing list talk at gtalug.org
> Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk
>
>


More information about the talk mailing list