Open discussion for MAGMA library (Matrix Algebra on GPU and Multicore Architectures)
3 posts • Page 1 of 1
The problem I am working on requires solving for the smallest (algebraic) eigenvalues (they are negative numbers) and eigenvectors of very large, sparse, symmetric matrices (larger than 5 million x 5 million, sparsity <0.005%). I have been shopping around for a package that can reliably perform these calculations, in particular one that uses some parallelism to accelerate the eigensolve. Is MAGMA suitable for sparse, symmetric eigensolves on matrices this large (preferably order tens of millions, though quite sparse)? If so, can you point me to a simple example of such code? I have the non-zero entries in the form (i, j, value) and just need a "simple" real, symmetric eigensolve.
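The kind of solve described above can be sketched as follows, using SciPy as a stand-in for illustration (MAGMA itself is a C/CUDA library, so none of these calls are MAGMA's API; the toy triplets are made up): assemble (i, j, value) entries into a sparse symmetric matrix and ask for the smallest algebraic eigenpairs.

```python
# Sketch (SciPy, not MAGMA): build a symmetric sparse matrix from
# (i, j, value) triplets and solve for the smallest algebraic eigenpairs.
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

# Toy (i, j, value) triplets for the upper triangle of a 4x4 symmetric matrix.
rows = np.array([0, 1, 2, 3, 0, 1, 2])
cols = np.array([0, 1, 2, 3, 1, 2, 3])
vals = np.array([-2.0, 1.0, 3.0, 4.0, 0.5, 0.5, 0.5])

n = 4
A = coo_matrix((vals, (rows, cols)), shape=(n, n))
# Mirror the upper triangle; subtract the diagonal so it isn't counted twice.
A = (A + A.T - diags(A.diagonal())).tocsr()

# k smallest algebraic ('SA') eigenvalues and their eigenvectors.
w, v = eigsh(A, k=2, which='SA')
print(np.sort(w))  # the two most negative eigenvalues
```

For the matrix sizes in question, the important part is that the solver only ever touches the matrix through sparse mat-vecs, which is also how iterative GPU eigensolvers like MAGMA's operate.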
Looks like you need about 15 GB just for the matrix, so you could use MAGMA on a 32 GB GPU to solve such problems. MAGMA implements the LOBPCG method, but it has to be adapted to your problem: the matrix has to be definite, and you would look for the smallest (or largest) eigenstates. One way is the folded spectrum method, where instead of solving the eigenproblem for a matrix H directly, you solve for the smallest eigenstates of (H - \alpha I)^2 and recover the ones you need from there. \alpha should be smaller than the smallest eigenvalue, which makes (H - \alpha I)^2 positive definite. Is this the kind of approach you have used so far?
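To make the folded spectrum idea concrete, here is a small numerical sketch, again with SciPy's LOBPCG standing in for MAGMA's (the tridiagonal test matrix and the Gershgorin-based choice of \alpha are assumptions for illustration, not part of MAGMA):

```python
# Folded spectrum sketch: find the smallest eigenstates of H by running
# LOBPCG on (H - alpha*I)^2 with alpha below the spectrum of H.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import LinearOperator, lobpcg

n = 100
# Tridiagonal stand-in for a large sparse symmetric H; all of its
# eigenvalues are negative, like the matrices described in the first post.
H = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsr()

# Gershgorin bound guarantees alpha <= lambda_min; subtract 1 for safety.
alpha = -abs(H).sum(axis=1).max() - 1.0
S = (H - alpha * identity(n)).tocsr()

# Apply (H - alpha*I)^2 as two sparse mat-vecs rather than forming the
# square explicitly (explicit squaring would densify a large sparse matrix).
folded = LinearOperator((n, n), matvec=lambda x: S @ (S @ x), dtype=np.float64)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))          # random initial block of 3 vectors
w2, V = lobpcg(folded, X, largest=False, tol=1e-8, maxiter=500)

# Recover the eigenvalues of H via Rayleigh quotients of the converged
# vectors ((H - alpha*I)^2 shares its eigenvectors with H).
w = np.array([v @ (H @ v) / (v @ v) for v in V.T])
```

Note that the folded operator's condition number is roughly the square of the shifted matrix's, so convergence is slower than on H itself; a preconditioner helps in practice.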
I haven't used anything like that. So far I've had the most success with Eigen/Spectra, but we are running into real bottlenecks due to its serial implementation. Also, I was off by an order of magnitude on the sparsity: it's <0.0005%. Why do you need to square the matrix? Why can't you just level-shift the whole thing and call it good?
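For what it's worth, here is a small numerical check of the plain level shift I have in mind, again with SciPy's LOBPCG as a stand-in (the tridiagonal test matrix is an assumption for illustration): with \alpha below the smallest eigenvalue, H - \alpha I is positive definite and has the same eigenvectors as H, so the smallest eigenpairs map straight back via \lambda = \mu + \alpha.

```python
# Level-shift sketch: run LOBPCG directly on the shifted matrix
# H - alpha*I (positive definite for alpha < lambda_min), then undo the shift.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import lobpcg

n = 100
# Tridiagonal stand-in: symmetric, all eigenvalues negative (in (-4, 0)).
H = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsr()

alpha = -5.0                           # any value below lambda_min ~ -4 works
S = (H - alpha * identity(n)).tocsr()  # spectrum shifted into (1, 9): definite

rng = np.random.default_rng(1)
X = rng.standard_normal((n, 3))        # random initial block of 3 vectors
mu, V = lobpcg(S, X, largest=False, tol=1e-8, maxiter=500)
w = mu + alpha                         # undo the shift: eigenvalues of H
```

The shift preserves the ordering of the eigenvalues, so the smallest eigenstates of the shifted matrix are exactly the smallest eigenstates of H, without squaring the conditioning.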