
blacs_gridmap globally blocking

PostPosted: Thu Feb 23, 2017 10:20 am
by cfried
Dear all,

I have a question about how to perform ScaLAPACK operations in parallel. To be more specific, I want several MPI subcommunicators to call ScaLAPACK routines independently of each other, including the routine BLACS_GRIDMAP (which is used to create a context for parallel execution).

The problem I have encountered is that BLACS_GRIDMAP always blocks globally with respect to MPI_COMM_WORLD, so the subcommunicators do not run independently: they hang until all processes have called BLACS_GRIDMAP, which is not always guaranteed to happen. After some research on the internet I found that older BLACS versions included a file Bmake.inc, in which the macro "TRANSCOMM" had to be set to "-DUseMpich". However, there does not seem to be a Bmake.inc file in the newer ScaLAPACK packages. Is there a way to avoid the global blocking of BLACS_GRIDMAP?
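To make this concrete, here is a minimal sketch of the pattern I have in mind, written against the C interface of the BLACS (the even split of MPI_COMM_WORLD and the 1 x n grid shape are just for illustration):

Code: Select all
/* Minimal sketch: two disjoint halves of MPI_COMM_WORLD each try to
 * create their own 1 x n BLACS grid with Cblacs_gridmap. With a stock
 * ScaLAPACK build, the call only returns once every rank in
 * MPI_COMM_WORLD has entered it. */
#include <mpi.h>
#include <stdlib.h>

/* C interface to the BLACS as shipped with ScaLAPACK */
extern void Cblacs_get(int ctxt, int what, int *val);
extern void Cblacs_gridmap(int *ctxt, int *usermap, int ldumap,
                           int nprow, int npcol);
extern void Cblacs_gridexit(int ctxt);

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int half  = size / 2;
    int mine  = (rank < half);               /* which half am I in?     */
    int n     = mine ? half : size - half;   /* size of my half         */
    int first = mine ? 0 : half;             /* its first world rank    */

    /* usermap holds the MPI_COMM_WORLD ranks that form my grid */
    int *usermap = (int *)malloc(n * sizeof(int));
    for (i = 0; i < n; i++) usermap[i] = first + i;

    int ctxt;
    Cblacs_get(-1, 0, &ctxt);  /* default system context (MPI_COMM_WORLD) */

    /* Each half would like this call to complete independently, but it
     * hangs until all world ranks have called BLACS_GRIDMAP. */
    Cblacs_gridmap(&ctxt, usermap, 1, 1, n);

    Cblacs_gridexit(ctxt);
    free(usermap);
    MPI_Finalize();
    return 0;
}
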

Thank you for your help in advance!

PS: It seems that an older post was interpreted by the system as "spam".

Best wishes
Christoph

Re: blacs_gridmap globally blocking

PostPosted: Sun Feb 26, 2017 2:50 pm
by Avgvst
I have encountered the same problem, so I am looking forward to the responses! Thank you in advance!

Re: blacs_gridmap globally blocking

PostPosted: Tue Apr 21, 2020 3:48 pm
by vincentm
I had the same issue. I suspected that MPI_Comm_create might be the cause, since I recently ran into a similar problem while experimenting with it. MPI_Comm_create is collective (blocking) over the parent communicator; the equivalent routine that is collective only over the child group is MPI_Comm_create_group.
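To see the difference outside of BLACS, here is a minimal sketch of my own (not code from the library): two disjoint halves of MPI_COMM_WORLD each build a group of their own ranks and create a communicator from it. Run it with an even number of ranks (at least 2).

Code: Select all
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Group world_grp, my_grp;
    MPI_Comm_group(MPI_COMM_WORLD, &world_grp);

    /* ranks [0, half-1] form one group, [half, size-1] the other */
    int half = size / 2;
    int range[1][3] = { { rank < half ? 0 : half,
                          rank < half ? half - 1 : size - 1,
                          1 } };
    MPI_Group_range_incl(world_grp, 1, range, &my_grp);

    MPI_Comm newcomm;

    /* Collective over ALL of MPI_COMM_WORLD: nobody returns until
     * every world rank has made the call.
     * MPI_Comm_create(MPI_COMM_WORLD, my_grp, &newcomm);            */

    /* Collective only over the ranks in my_grp: each half returns
     * independently of the other (MPI-3.0 and later).               */
    MPI_Comm_create_group(MPI_COMM_WORLD, my_grp, 0, &newcomm);

    MPI_Comm_free(&newcomm);
    MPI_Group_free(&my_grp);
    MPI_Group_free(&world_grp);
    MPI_Finalize();
    return 0;
}
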
So I grepped for MPI_Comm_create and found it in BLACS/SRC/blacs_map_.c and BLACS/SRC/BI_TransUserComm.c (and in BLACS/INSTALL/cmpi_sane.c, but that one does not matter here). Making the substitution
Code: Select all
MPI_Comm_create(tcomm, tgrp, &comm) ==> MPI_Comm_create_group(tcomm, tgrp, 0, &comm) /* "0" is a tag */

solved my problem, at least for now.
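Two caveats: MPI_Comm_create_group was only introduced in MPI 3.0, so the substitution requires an MPI-3 implementation. And as far as I understand the tag argument, it is there to distinguish concurrent communicator-creation calls made by the same processes, so a constant 0 should be safe as long as no process ever has two such calls in flight at the same time.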