How can I collect matrices on different nodes to one node?

Postby ronniepops » Wed Jul 13, 2005 3:55 pm

Hello,

Please help me if there is someone who has done this before!

I am using pcheevx to perform an eigenvector decomposition of a matrix, after initializing the local matrices on each node with pcelset. The pcheevx subroutine works fine and returns successfully; however, the computed eigenvectors are left distributed across the different nodes.

I have tried using pcelget to collect the eigenvectors onto one node, but this takes a long time. I have also tried writing the eigenvectors to a file and reading them back from there; even though this is faster than using pcelget, it is still considerably slow.
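
Roughly, the pcelget gather I am doing amounts to the sketch below (zglob, zl, and descz are just placeholder names here for the full n-by-n array I collect into, the distributed eigenvector matrix, and its ScaLAPACK descriptor):

*     One pcelget call (i.e. one broadcast) per matrix entry, which
*     is why this approach becomes so slow for large n.  zglob and
*     zl are single-precision complex, matching pcheevx.
      do j = 1, n
         do i = 1, n
            call pcelget('A', ' ', zglob(i,j), zl, i, j, descz)
         enddo
      enddo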

Does anyone know of another subroutine or method I can use to collect all the local matrices from the different nodes back onto one node efficiently? Please help!

I appreciate your help in this matter.

ronnie

Re: How can I collect matrices on different nodes...

Postby brianlane723 » Thu Jul 21, 2005 11:02 am

Ronnie,

I have had similar frustrations trying to find a routine that does this. I went ahead and wrote my own, and it seems to perform pretty well. I'll paste my MPI/Fortran code below. NB: zl(*,*) denotes the locally stored eigenvectors and z(*,*) denotes the globally collected eigenvectors; the block counts use the Fortran 90 CEILING intrinsic. Everything else should follow ScaLAPACK convention (I think). I hope it helps!

---------

** Collect eigenvectors onto the master process.
** Assumed to be in scope: include 'mpif.h', an integer array
** status(MPI_STATUS_SIZE), the ScaLAPACK integer function numroc,
** and isize = total number of MPI processes.  zl and z are taken
** to be double precision here; use the matching MPI datatype if
** your matrices are complex.

      print *, 'collecting vectors'

* Begin master part.
      if (rank.eq.0) then

*        Save the master's own local dimensions; locr and locc are
*        overwritten below while unpacking the other processes.
         mylocr = locr
         mylocc = locc

         do k = 0, isize - 1
            if (k.eq.0) then
*              k = 0 is the master itself: its eigenvectors are
*              already sitting in zl, so nothing is received.
               irow = myrow
               icol = mycol
            else
*              Receive the sender's process-grid coordinates, then
*              its local piece of the eigenvector matrix.
               call MPI_Recv(irow, 1, MPI_Integer, k, 10,
     +              MPI_Comm_World, status, mpierr)
               call MPI_Recv(icol, 1, MPI_Integer, k, 20,
     +              MPI_Comm_World, status, mpierr)
               locr = numroc(n, nb, irow, 0, nprow)
               locc = numroc(n, nb, icol, 0, npcol)
               call MPI_Recv(zl, n*n, MPI_Double_Precision, k, 30,
     +              MPI_Comm_World, status, mpierr)
            endif

*           Number of local blocks in each direction (CEILING is
*           the Fortran 90 intrinsic).
            nbr = ceiling(real(locr)/real(mb))
            nbc = ceiling(real(locc)/real(nb))

*           Copy each local block into its place in the global
*           array z, following the 2D block-cyclic mapping.
            do ibr = 1, nbr
               if (ibr.eq.nbr) then
                  irmax = locr - (nbr-1)*mb
               else
                  irmax = mb
               endif

               do ibc = 1, nbc
                  if (ibc.eq.nbc) then
                     icmax = locc - (nbc-1)*nb
                  else
                     icmax = nb
                  endif

                  do ir = 1, irmax
                     do ic = 1, icmax
                        z((irow+(ibr-1)*nprow)*mb + ir,
     +                    (icol+(ibc-1)*npcol)*nb + ic) =
     +                    zl((ibr-1)*mb + ir, (ibc-1)*nb + ic)
                     enddo
                  enddo
               enddo
            enddo

         enddo

*        Restore the master's own local dimensions.
         locc = mylocc
         locr = mylocr
* End master part.

* Begin slave part.
      else

*        Every other process sends its grid coordinates and its
*        local eigenvector block to the master.
         call MPI_Send(myrow, 1, MPI_Integer, 0, 10,
     +        MPI_Comm_World, mpierr)
         call MPI_Send(mycol, 1, MPI_Integer, 0, 20,
     +        MPI_Comm_World, mpierr)
         call MPI_Send(zl, n*n, MPI_Double_Precision, 0, 30,
     +        MPI_Comm_World, mpierr)

      endif
* End slave part.
**

Postby darshan » Tue Oct 18, 2005 7:36 pm

I do not have experience with that particular solver, but if you have a matrix (or vector) distributed over one context, you can use the redistribution routines (p?gemr2d) to transfer it to another context consisting of only one processor. The redistribution should be much faster than pcelget because it uses a block-intersection algorithm instead of moving one element at a time.
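
Something along these lines should work; this is only a rough sketch, where zl/descz stand for your distributed eigenvectors and their descriptor on the original context ictxt, z for the full n-by-n array on process (0,0), and ictxt1/descz1 for the new single-process context and its descriptor:

*     Build a 1x1 BLACS context holding only process (0,0), describe
*     the full matrix z there, and let pcgemr2d do the
*     block-intersection redistribution.
      call blacs_get(ictxt, 0, ictxt1)
      call blacs_gridinit(ictxt1, 'R', 1, 1)
      if (myrow.eq.0 .and. mycol.eq.0) then
*        z lives only on process (0,0), in one n x n block.
         call descinit(descz1, n, n, n, n, 0, 0, ictxt1, n, info)
      else
*        Processes outside the 1x1 grid mark the context as invalid.
         descz1(2) = -1
      endif
*     Every process of the original context must make this call.
      call pcgemr2d(n, n, zl, 1, 1, descz, z, 1, 1, descz1, ictxt)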

HTH
darshan
 

