I would like to know whether there are plans to create a version of ScaLAPACK that uses MAGMA routines to perform the computations on GPUs when:
1. One or more GPU devices are attached to the host nodes, and
2. The computation would be more efficient on the GPU (i.e., the matrix is large enough to justify the cost of transferring the data to the device).
Alternatively, if this is already possible with the appropriate link options, could you give an example of how to set this up (on Intel Xeon hosts running CentOS with NVIDIA GPUs)?