I am trying to run magma_dgetrf on my Tesla GPU, but for some reason it is defaulting to my display GPU.
I have two GPUs on my system: one display GPU and one Tesla. For matrices that fit into GPU memory, I can specify which GPU executes using magma_setdevice. But as soon as the matrix size exceeds the 6 GB of memory on my Tesla, the computation defaults to my display GPU, even though I call magma_setdevice to select the Tesla.
The display GPU is the GTX TITAN Z and the compute GPU is the Tesla C2075.
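For reference, this is roughly how I am selecting the device (a simplified sketch, not my full program; the matrix fill is omitted and the device index 1 for the Tesla is an assumption based on my system's CUDA ordering):

```c
#include <magma.h>

int main(void)
{
    magma_init();

    /* Select the Tesla explicitly (assumption: it is device 1 here;
     * check nvidia-smi / CUDA enumeration for the actual index). */
    magma_setdevice(1);

    magma_int_t m = 30000, n = 30000, lda = m, info = 0;
    magma_int_t *ipiv;
    double *A;
    magma_imalloc_cpu(&ipiv, m);
    magma_dmalloc_pinned(&A, (size_t)lda * n);
    /* ... fill A ... */

    /* CPU-interface LU factorization: MAGMA stages panels onto the GPU
     * internally. At this size (30000^2 doubles ~ 7.2 GB) the matrix no
     * longer fits in the Tesla's 6 GB, and the work lands on the
     * display GPU instead of the one selected above. */
    magma_dgetrf(m, n, A, lda, ipiv, &info);

    magma_free_pinned(A);
    magma_free_cpu(ipiv);
    magma_finalize();
    return 0;
}
```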
Selecting GPU for LUD defaulting to display GPU
Re: Selecting GPU for LUD defaulting to display GPU
Unfortunately in this instance, when the matrix exceeds GPU memory, MAGMA switches to a multi-GPU, non-resident version. The multi-GPU code loops over GPUs, and even though you are only using one GPU, it most likely starts its loop from GPU 0. A quick check would be to set CUDA_VISIBLE_DEVICES to include only your Tesla GPU. See:
https://devblogs.nvidia.com/parallelfor ... e_devices/
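For example (assuming the Tesla is device 1 in CUDA's enumeration; check nvidia-smi for the ordering on your system):

```shell
# Hide the display GPU so CUDA (and therefore MAGMA) only sees the Tesla.
# With a single visible device, the Tesla is renumbered as device 0 inside
# the process, which is where the multi-GPU loop starts.
export CUDA_VISIBLE_DEVICES=1

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

# Then launch your MAGMA program as usual, e.g. (hypothetical binary name):
#   ./my_magma_dgetrf_app
```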
Let us know if that is the case. This is helpful feedback, as there really ought to be a way to specify what GPUs MAGMA operates with.
-mark
Re: Selecting GPU for LUD defaulting to display GPU
That fixed the issue.
Thank you!
I hope a future revision adds support for selecting a subset of GPUs for MAGMA to operate on.