Open discussion for MAGMA library (Matrix Algebra on GPU and Multicore Architectures)


Postby Nicolas_S » Tue Apr 17, 2012 11:01 am

I am quite new to CUDA and very interested in MAGMA. I am trying to use magma_dgeev.
I would like my program to do:
1 - Copy data from host to device
2 - Execute magma_dgeev
3 - Do other work with the results on the device
4 - Copy the results from device to host

In the testing example testing_dgeev.cpp, the matrix memory is allocated with cudaMallocHost, that is, on the host. My first attempt works the same way, but to achieve step 3 I need to (re)send the data to the device... Is it possible to keep it there? I mean, I would rather call magma_dgeev with inputs that are in device memory, and have the results written to device memory.
How can I do that?

Thanks !

(and sorry for any English mistakes)
Posts: 2
Joined: Tue Apr 17, 2012 10:49 am

Re: magma_dgeev

Postby mgates3 » Fri May 04, 2012 5:46 pm

The MAGMA dgeev code is a hybrid algorithm -- it uses both the CPU and the GPU to solve the eigenvalue problem. Currently MAGMA provides only the interface that takes the data on the CPU. Internally it copies blocks to the GPU during certain operations, such as the Hessenberg reduction, while other operations, like the QR iteration, are done entirely on the CPU. The QR iteration algorithm is not amenable to speedup on the GPU, which is why we use the existing CPU code for it. So if you send the eigenvalues or eigenvectors to the GPU, you are not re-sending them, as they were computed on the CPU.
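To make this concrete, here is a hedged sketch of the workflow described above: keep the matrix in pinned host memory for magma_dgeev (its interface is host-side), run the hybrid solve, then copy only the results to the device once for the subsequent GPU work. The magma_dgeev signature and the MagmaNoVec/MagmaVec constants shown here follow later MAGMA releases and are an assumption on my part; the 2012-era interface may differ slightly, so check your version's headers.

```c
// Assumed workflow sketch, not verbatim MAGMA sample code.
#include <cuda_runtime.h>
#include <magma_v2.h>   // older MAGMA versions use magma.h instead

void eigen_then_gpu(magma_int_t n, double *A /* n*n, from cudaMallocHost */)
{
    magma_int_t info, lda = n, ldvr = n;
    double *wr, *wi, *VR, *work;
    cudaMallocHost((void**)&wr, n * sizeof(double));
    cudaMallocHost((void**)&wi, n * sizeof(double));
    cudaMallocHost((void**)&VR, n * n * sizeof(double));

    // Workspace query: lwork = -1 asks magma_dgeev for the optimal size.
    double query;
    magma_dgeev(MagmaNoVec, MagmaVec, n, A, lda, wr, wi,
                NULL, 1, VR, ldvr, &query, -1, &info);
    magma_int_t lwork = (magma_int_t) query;
    cudaMallocHost((void**)&work, lwork * sizeof(double));

    // Step 2: the hybrid CPU/GPU solve; inputs and outputs live on the host.
    magma_dgeev(MagmaNoVec, MagmaVec, n, A, lda, wr, wi,
                NULL, 1, VR, ldvr, work, lwork, &info);

    // Steps 3-4: copy the right eigenvectors to the device once; from here
    // on, GPU kernels can work on dVR without further host transfers.
    double *dVR;
    cudaMalloc((void**)&dVR, n * n * sizeof(double));
    cudaMemcpy(dVR, VR, n * n * sizeof(double), cudaMemcpyHostToDevice);
    // ... launch kernels on dVR ...

    cudaFree(dVR);
    cudaFreeHost(work);
    cudaFreeHost(VR);
    cudaFreeHost(wi);
    cudaFreeHost(wr);
}
```

Since the eigenvalues and eigenvectors are produced on the CPU anyway, the single cudaMemcpy above is the only host-to-device transfer your step 3 needs; there is no redundant round trip to avoid.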
Posts: 626
Joined: Fri Jan 06, 2012 2:13 pm

Re: magma_dgeev

Postby Nicolas_S » Fri May 11, 2012 4:45 am

Thanks for your answer !
