I think the "(global and local input)" annotation for DESC_ arguments, as in for example:
* (global and local input) INTEGER array of dimension 8
* The array descriptor of the distributed matrix A
means that some of the entries of DESC_ are global and others are
local. Actually, only the last entry of DESC_, DESC_(9), is local.
DESC_ (1) = DTYPE (global)
DESC_ (2) = CTXT (global)
DESC_ (3) = M (global)
DESC_ (4) = N (global)
DESC_ (5) = MB (global)
DESC_ (6) = NB (global)
DESC_ (7) = RSRC (global)
DESC_ (8) = CSRC (global)
DESC_ (9) = LLD (local)
Global means that the value had better be the same on all the processors.
For example, I hope the sizes of your matrix, M and N, are the same on
all the processors...
And yes: even if a processor does not hold any elements of a matrix
(say X) but is in the process grid, it needs to possess a correct DESCX;
otherwise the application aborts.
Local means that the value depends on the processor. Here LLD, the
leading dimension of the 2D array containing the local part of A, is a
local quantity: each processor can have a different LLD.
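As a concrete sketch (the matrix and block sizes below are made up, and
myrow, nprow, icntxt are assumed to come from a prior blacs_gridinfo call),
this is how a descriptor is typically filled with descinit: the first eight
arguments are identical on every process, while LLD is computed from the
local number of rows:

  ! sketch: descriptor for an m-by-n matrix in mb-by-nb blocks,
  ! distributed from process (0,0); sizes here are illustrative
  integer, external :: numroc
  integer :: desca(9), m, n, mb, nb, locrows, lld, info
  m = 1000; n = 1000; mb = 64; nb = 64
  ! number of rows of A stored on THIS process: a local quantity
  locrows = numroc( m, mb, myrow, 0, nprow )
  lld = max( 1, locrows )
  ! m, n, mb, nb, rsrc=0, csrc=0 and icntxt are global (same everywhere);
  ! only lld differs from process to process
  call descinit( desca, m, n, mb, nb, 0, 0, icntxt, lld, info )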
On Mon, 20 Mar 2006, HuiZhong LU wrote:
Thanks a lot for your response about blacs_pcoord.
I have a question about the comment for arguments
like the following (for example, for the subroutine PvGEMV):
(global and local input) INTEGER array of dimension 8
The array descriptor of the distributed matrix A
I don't understand the meaning of "global and local".
Does it really mean GLOBAL? I must declare the array X on
processors where X is not used in order to obtain a correct
DESCX, which is needed by PvGEMV. Otherwise, if I don't
initialize DESCX on processors where X is unused, I get an
error and PvGEMV aborts.
Julien Langou wrote:
1- Just a remark: in theory, you should not call MPI_INIT from your main
program in Fortran. For your code, you should replace the three lines
   CALL MPI_INIT( INFO )
   CALL MPI_COMM_SIZE( MPI_COMM_WORLD, NPROCS, INFO )
   CALL MPI_COMM_RANK( MPI_COMM_WORLD, IPROC, INFO )
by
   CALL BLACS_PINFO( IPROC, NPROCS )
Now, if you want interoperability between MPI and the BLACS to do
some fancy stuff, that is possible. It is definitely easier in C. In
Fortran there are the SYS2BLACS_HANDLE and BLACS2SYS_HANDLE routines.
But it is best to avoid this kind of business if possible.
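For reference, here is a minimal sketch of such a BLACS-only
initialization (the 1 x NPROCS grid shape is just an example):

  program blacs_only_init
    implicit none
    integer :: iproc, nprocs, icntxt, nprow, npcol, myrow, mycol
    ! blacs_pinfo returns this process' rank and the number of processes,
    ! and initializes the underlying message-passing layer if needed,
    ! so no explicit MPI_INIT is required
    call blacs_pinfo( iproc, nprocs )
    ! get the default system context
    call blacs_get( 0, 0, icntxt )
    ! build a 1 x NPROCS row-major grid (an arbitrary choice for this sketch)
    nprow = 1
    npcol = nprocs
    call blacs_gridinit( icntxt, 'Row', nprow, npcol )
    call blacs_gridinfo( icntxt, nprow, npcol, myrow, mycol )
    ! ... distributed work goes here ...
    call blacs_gridexit( icntxt )
    ! blacs_exit(0) also shuts down the message-passing layer
    call blacs_exit( 0 )
  end program blacs_only_init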
2- Good point. It seems that BLACS_PCOORD only works for row-major grids ...
(see line 367 of BLACS--MPICH/SRC/MPI/Bdef.h)
If you use
   CALL BLACS_GRIDINIT( ICNTXT, 'ROW', NPROW, NPCOL )
everything will work fine. If you use
   CALL BLACS_GRIDINIT( ICNTXT, 'COL', NPROW, NPCOL )
(that is, ORDER='COL') then you are right: rows and columns are swapped.
I am not sure this is what I would have expected from BLACS_PCOORD.
So, not very conclusive; I'll try to come back with more information.
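In the meantime, a sketch of a workaround (assuming nprow and npcol are the
grid dimensions passed to blacs_gridinit) is to compute the coordinates of
process pnum by hand from the grid ordering instead of calling blacs_pcoord:

  ! coordinates of process pnum, computed by hand (the two cases
  ! below are alternatives, depending on the grid ordering)
  integer :: pnum, nprow, npcol, prow, pcol
  ! for a row-major grid (ORDER='Row' in blacs_gridinit):
  prow = pnum / npcol
  pcol = mod( pnum, npcol )
  ! for a column-major grid (ORDER='Col'):
  prow = mod( pnum, nprow )
  pcol = pnum / nprow

For the 2 x 2 column-major grid in the quoted test below, this gives process 1
the coordinates (1,0), which matches the blacs_gridinfo output.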
On Tue, 14 Mar 2006, HuiZhong LU wrote:
Here is the output of a simple program that compares the results of
blacs_gridinfo and blacs_pcoord; the code (Fortran 90) follows the
output (run with 4 processes). I have checked on several platforms:
the results of blacs_pcoord are not correct.
iproc,myrow,mycol(gridinfo)= 0 0 0
iproc,myrow,mycol(gridinfo)= 1 1 0
iproc,myrow,mycol(gridinfo)= 3 1 1
iproc,myrow,mycol(gridinfo)= 2 0 1
iproc,RDEST,CDEST(pcoord)= 0 0 0
iproc,RDEST,CDEST(pcoord)= 1 0 1
iproc,RDEST,CDEST(pcoord)= 2 1 0
iproc,RDEST,CDEST(pcoord)= 3 1 1
program test_pcoord
  implicit none
  include 'mpif.h'
  integer :: iproc, nprocs, info
  integer :: nprow, npcol
  integer :: icntxt
  integer :: myrow, mycol
  integer :: i, rdest, cdest

  call MPI_INIT( info )
  call MPI_COMM_SIZE( mpi_comm_world, nprocs, info )
  call MPI_COMM_RANK( mpi_comm_world, iproc, info )
  !call blacs_pinfo( iproc, nprocs )

  npcol = int( sqrt( real(nprocs) ) )
  nprow = nprocs / npcol

  call blacs_get( 0, 0, icntxt )                     ! default system context
  call blacs_gridinit( icntxt, 'COL', nprow, npcol ) ! column-major grid
  call blacs_gridinfo( icntxt, nprow, npcol, myrow, mycol )
  if( (myrow<0 .or. myrow>=nprow) .or. &
      (mycol<0 .or. mycol>=npcol) ) goto 100         ! not in the grid

  write(6,'(A,3I3)') ' iproc,myrow,mycol(gridinfo)=', iproc, myrow, mycol

  do i = 0, nprocs-1
     call blacs_pcoord( icntxt, i, rdest, cdest )    !! error in pcoord here
     if( iproc==0 ) write(6,'(A,3I3)') ' iproc,RDEST,CDEST(pcoord)=', i, rdest, cdest
  end do

  call blacs_gridexit( icntxt )
100 continue
  call MPI_BARRIER( mpi_comm_world, info )
  call MPI_FINALIZE( info )
end program test_pcoord
Julien Langou wrote:
The documentation is correct. For blacs_pcoord:
PROW: on output, the row coordinate of process PNUM in the BLACS grid.
PCOL: on output, the column coordinate of process PNUM in the BLACS grid.
Maybe test out the program 'Hello World' at:
It basically checks blacs_pcoord in a few lines.
On Thu, 9 Mar 2006, HuiZhong LU wrote:
I have found an error in blacs_pcoord( icntxt, pnum, prow, pcol ).
The actual result is:
prow is the column number of pnum in the grid
pcol is the row number of pnum in the grid
Univ. of Sherbrooke