ScaLAPACK Archives

[Scalapack] mpi/scalapack combined example


Hello, 

Would you mind posting your questions on the LAPACK forum? 
http://icl.cs.utk.edu/lapack-forum/

MPI and BLACS can work together. Here is how I do it in my codes. 
My routines are typically wrappers on top of ScaLAPACK. The user stays
in the MPI world and passes pointers to his data structures, together
with the MPI communicator, to the wrapper. The wrapper routine then calls
the BLACS and ScaLAPACK and takes care of the various details. 

The wrapper code looks like this:

#include <mpi.h>

/* The BLACS C interface has no standard header; the caller typically
   declares the prototypes itself. */
int  Csys2blacs_handle(MPI_Comm comm);
void Cfree_blacs_system_handle(int handle);
void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
void Cblacs_gridexit(int icontxt);
void Cblacs_exit(int notdone);

int scalapackqr2_A(int mloc, int n, double *A, int lda, double *R, int ldr,
                   MPI_Comm mpi_comm)
{
        int nprocs;
        int bhandle;
        int icontxt, nprow, npcol, myrow, mycol;

        /* Convert the user's MPI communicator into a BLACS system handle. */
        MPI_Comm_size(mpi_comm, &nprocs);
        bhandle = Csys2blacs_handle(mpi_comm);
        icontxt = bhandle;

        /* Initialize an nprocs-by-1 process grid on that communicator. */
        nprow = nprocs;
        npcol = 1;
        Cblacs_gridinit(&icontxt, "Row", nprow, npcol);
        Cblacs_gridinfo(icontxt, &nprow, &npcol, &myrow, &mycol);

/*      Now you are in business, the grid is set. Note that here I have
        chosen an nprocs-by-1 grid, but any grid is fine a priori.      */
/*      You can call ScaLAPACK now (this is where A, lda, R and ldr
        get used).                                                      */

        /* Clean up the handle and the grid. */
        Cfree_blacs_system_handle(bhandle);
        Cblacs_gridexit(icontxt);
        Cblacs_exit(1);

        return 0;

}
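
For illustration, here is a minimal sketch of a caller sitting in the MPI
world; the two-group split and all the sizes are placeholders I made up
for the example, not part of my actual code:

#include <stdlib.h>
#include <mpi.h>

int scalapackqr2_A(int mloc, int n, double *A, int lda, double *R, int ldr,
                   MPI_Comm mpi_comm);            /* the wrapper above */

int main(int argc, char **argv)
{
        int myid, numprocs, color;
        MPI_Comm new_comm;
        int mloc = 100, n = 10;                   /* placeholder local sizes */
        double *A, *R;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

        /* Split the world into two groups; group 0 will run ScaLAPACK,
           the other group can keep doing plain MPI work meanwhile. */
        color = (myid < numprocs / 2) ? 0 : 1;
        MPI_Comm_split(MPI_COMM_WORLD, color, myid, &new_comm);

        A = malloc((size_t)mloc * n * sizeof *A);
        R = malloc((size_t)n * n * sizeof *R);

        if (color == 0)                           /* only this group enters the BLACS */
                scalapackqr2_A(mloc, n, A, mloc, R, n, new_comm);

        free(A);
        free(R);
        MPI_Comm_free(&new_comm);
        MPI_Finalize();
        return 0;
}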

I have tested this with various communicators and have always seen good behavior.

Note that I have experienced problems with the Cblacs_exit(1). A priori you
should be able to keep making MPI calls afterwards, because Cblacs_exit(1),
unlike Cblacs_exit(0), leaves MPI running. This works, but I remember having
problems calling the BLACS again after it. I am no longer sure, because I
cannot find what my problem was ... I remember that at some point I removed
the Cblacs_exit(1) from the code, but it is back in my codes now, so I
forget the details.
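
To make the distinction concrete, here is a minimal sketch, assuming the
documented MPI BLACS behavior that a nonzero argument tells Cblacs_exit
not to finalize MPI:

#include <mpi.h>

void Cblacs_exit(int notdone);                    /* BLACS C interface prototype */

int main(int argc, char **argv)
{
        int myid;

        MPI_Init(&argc, &argv);
        /* ... BLACS grid setup and ScaLAPACK calls would go here ... */
        Cblacs_exit(1);                           /* 1: free BLACS resources, leave MPI running */
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);     /* still legal after Cblacs_exit(1) */
        MPI_Finalize();                           /* the application finalizes MPI itself */
        /* With Cblacs_exit(0) instead, the BLACS would have called
           MPI_Finalize, and neither of the two calls above would be legal. */
        return 0;
}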

Hope this helps. By the way, the useful information for understanding
MPI-BLACS interoperability is in "Outstanding Issues in the MPIBLACS",
R. Clint Whaley, 1997:
http://www.netlib.org/blacs/mpiblacs_issues.ps
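
The short version is that a BLACS "system handle" wraps an MPI communicator,
and the C interface converts in both directions. A small sketch, assuming the
Csys2blacs_handle / Cblacs2sys_handle routines shipped with the MPI BLACS:

#include <mpi.h>

/* MPI BLACS C interface; prototypes declared by the caller. */
int      Csys2blacs_handle(MPI_Comm comm);
MPI_Comm Cblacs2sys_handle(int handle);
void     Cfree_blacs_system_handle(int handle);

/* Round-trip an MPI communicator through a BLACS system handle. */
void round_trip(MPI_Comm comm)
{
        int handle = Csys2blacs_handle(comm);      /* MPI world -> BLACS world */
        MPI_Comm back = Cblacs2sys_handle(handle); /* BLACS world -> MPI world */
        int result;

        MPI_Comm_compare(comm, back, &result);     /* expect MPI_IDENT or MPI_CONGRUENT */
        Cfree_blacs_system_handle(handle);
}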

Julien



-----Original Message-----
From: scalapack-bounces@Domain.Removed on behalf of Holger St.John
Sent: Mon 4/16/2007 2:57 PM
To: scalapack@Domain.Removed
Subject: [Scalapack] mpi/scalapack combined example
 


  Hi,
    I have not been able to find, nor have I successfully managed to
create, a simple example that uses MPI and ScaLAPACK together (on a 64-bit
Opteron system using PGI 6.2).
What I have is an MPI code which needs to call ScaLAPACK for the
linear algebra part. So I set up the world communicator with the usual MPI
calls. Then I split the world communicator into separate
communicators, one of which I want to use for BLACS/ScaLAPACK while the
other is used in MPI routines at the same time. Is there a working
example of this available?

  I have tried

       CALL MPI_Init(mpiierr)
       CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, mpiierr)   ! get processor id
       CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, mpiierr)
       ALLOCATE(group_no(numprocs))
       group_no(myid) = 1
       if (myid .lt. numprocs/no_groups) group_no(myid) = 0

       CALL MPI_Comm_split(MPI_COMM_WORLD, group_no(myid), myid, new_comm, mpiierr)

       CALL BLACS_GET(new_comm, 10, ICTXT)

       print *, 'in blacs section, myid=', myid
       CALL BLACS_GRIDINIT(ICTXT, 'Row', NPROW, NPCOL)
       CALL BLACS_GRIDINFO(ICTXT, NPROW, NPCOL, MYROW, MYCOL)

      snip....

   but the print statement never gets executed. (It appears that
  the call to BLACS_GET terminates the program without any error messages.)


  Should this work? Or am I off on the wrong track?
  Could there be a 32/64-bit pointer issue?


  Thanks in advance

   Holger
_______________________________________________
Scalapack mailing list
Scalapack@Domain.Removed
http://lists.cs.utk.edu/listinfo/scalapack

