Selecting GPU for LUD defaulting to display GPU

Open discussion for MAGMA library (Matrix Algebra on GPU and Multicore Architectures)

Selecting GPU for LUD defaulting to display GPU

Postby tblattner » Thu Aug 18, 2016 11:10 am

I am trying to run magma_dgetrf on my Tesla GPU, but for some reason it is defaulting to my display GPU.

I have two GPUs in my system: a display GPU and a Tesla. For matrices that fit into GPU memory, I can select the GPU to execute on using magma_setdevice, but as soon as the matrix size exceeds the 6 GB of memory on my Tesla, the work lands on my display GPU, even though I call magma_setdevice to select the Tesla.

The display GPU is the GTX TITAN Z and the compute GPU is the Tesla C2075.
tblattner
 
Posts: 8
Joined: Tue Aug 09, 2016 4:38 pm

Re: Selecting GPU for LUD defaulting to display GPU

Postby mgates3 » Thu Aug 18, 2016 6:46 pm

Unfortunately, in this instance, when the matrix exceeds GPU memory, MAGMA switches to a multi-GPU, non-GPU-resident version. The multi-GPU code loops over GPUs, and even though you are most likely using only one GPU, it starts its loop from GPU 0. A quick check would be to set CUDA_VISIBLE_DEVICES to include only your Tesla GPU. See:
https://devblogs.nvidia.com/parallelfor ... e_devices/
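As a concrete sketch of that check (the device index here is an assumption; confirm which index your Tesla has, e.g. with nvidia-smi, before relying on it):

```shell
# Suppose the TITAN Z enumerates as device 0 and the Tesla C2075 as device 1
# (indices on your system may differ).
# Make only the Tesla visible to CUDA for this shell and its children:
export CUDA_VISIBLE_DEVICES=1
# Within this process the Tesla is renumbered as device 0, so MAGMA's
# multi-GPU loop that starts at GPU 0 will use the Tesla.
# Now launch your MAGMA program from this same shell.
```

Note that the renumbering means any magma_setdevice call in your code should then refer to device 0, not the Tesla's original index.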

Let us know if that is the case. This is helpful feedback, as there really ought to be a way to specify what GPUs MAGMA operates with.

-mark
mgates3
 
Posts: 750
Joined: Fri Jan 06, 2012 2:13 pm

Re: Selecting GPU for LUD defaulting to display GPU

Postby tblattner » Fri Aug 19, 2016 1:36 pm

That seems to have fixed the issue.

Thank you!

I hope to see a future revision add support for selecting a subset of GPUs for MAGMA to operate on.
tblattner
 
Posts: 8
Joined: Tue Aug 09, 2016 4:38 pm

