

blacs_gridmap globally blocking

Postby cfried » Thu Feb 23, 2017 10:20 am

Dear all,

I have a question about how to perform ScaLAPACK operations in parallel. To be more specific, I want several MPI subcommunicators to call ScaLAPACK routines independently of one another, including the routine BLACS_GRIDMAP (which creates the BLACS context, i.e. the process grid used by the subsequent parallel calls).

The problem I have encountered is that BLACS_GRIDMAP is globally blocking with respect to MPI_COMM_WORLD, so the subcommunicators cannot proceed independently: each one hangs until every process has called BLACS_GRIDMAP, which is not guaranteed to happen in my application. After some research on the internet I found that older, standalone BLACS releases shipped a file Bmake.inc in which the macro TRANSCOMM had to be set to -DUseMpich. However, there does not seem to be a Bmake.inc in the current ScaLAPACK packages, where BLACS is built as part of ScaLAPACK. Is there a way to avoid the global blocking of BLACS_GRIDMAP?
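For concreteness, here is a minimal sketch of the kind of setup I have in mind, using the C BLACS interface (the prototypes are declared by hand because the header name varies between installations; the two-way split and the 1 x n grid shape are only for illustration):
Code: Select all
#include <stdlib.h>
#include <mpi.h>

/* C BLACS entry points (normally declared in the installation's headers) */
extern int  Csys2blacs_handle(MPI_Comm comm);
extern void Cblacs_gridmap(int *ctxt, int *usermap, int ldumap, int nprow, int npcol);
extern void Cblacs_gridexit(int ctxt);

int main(int argc, char **argv)
{
    int wrank, wsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    /* Split MPI_COMM_WORLD into two halves that should proceed independently. */
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, wrank < wsize / 2 ? 0 : 1, wrank, &subcomm);

    int ssize;
    MPI_Comm_size(subcomm, &ssize);

    /* Build a 1 x ssize BLACS grid on the subcommunicator only. */
    int ctxt = Csys2blacs_handle(subcomm);       /* BLACS handle for subcomm        */
    int *usermap = malloc(ssize * sizeof(int));
    for (int i = 0; i < ssize; ++i)
        usermap[i] = i;                          /* ranks are relative to subcomm   */
    Cblacs_gridmap(&ctxt, usermap, 1, 1, ssize); /* in my runs this call blocks
                                                    until all processes reach it    */

    /* ... ScaLAPACK calls on ctxt, independent of the other half ... */

    Cblacs_gridexit(ctxt);
    free(usermap);
    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}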

Thank you for your help in advance!

PS: It seems that an earlier post of mine was flagged as spam by the system.

Best wishes
Christoph

Re: blacs_gridmap globally blocking

Postby Avgvst » Sun Feb 26, 2017 2:50 pm

I have encountered the same problem, so I am looking forward to the responses as well! Thank you in advance!

Re: blacs_gridmap globally blocking

Postby vincentm » Tue Apr 21, 2020 3:48 pm

I had the same issue. I suspected that MPI_Comm_create might be the cause, as I ran into a similar problem while playing with it recently: MPI_Comm_create is collective over the parent communicator, i.e. every rank of the parent must call it. The equivalent routine that is collective only over the child group is MPI_Comm_create_group.
So I grepped for MPI_Comm_create and found it in BLACS/SRC/blacs_map_.c and BLACS/SRC/BI_TransUserComm.c (and in BLACS/INSTALL/cmpi_sane.c, which does not matter here). Making the substitution
Code: Select all
/* before: must be called by every rank of tcomm */
MPI_Comm_create(tcomm, tgrp, &comm);
/* after: collective only over tgrp; the 0 is a tag */
MPI_Comm_create_group(tcomm, tgrp, 0, &comm);

in both files solved my problem, at least for now.
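In case it helps others, here is a minimal standalone sketch (not BLACS code; the two-way split and the name half_comm are made up for illustration) of why the two calls behave differently:
Code: Select all
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* assumes at least 2 ranks */

    /* Group containing only the lower half of MPI_COMM_WORLD. */
    MPI_Group world_group, half_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    int range[1][3] = { { 0, size / 2 - 1, 1 } };   /* ranks 0 .. size/2-1 */
    MPI_Group_range_incl(world_group, 1, range, &half_group);

    MPI_Comm half_comm = MPI_COMM_NULL;
    if (rank < size / 2) {
        /* Collective only over the members of half_group: the upper half
           never has to make a matching call, so it cannot cause a hang.  */
        MPI_Comm_create_group(MPI_COMM_WORLD, half_group, 0, &half_comm);
    }
    /* By contrast, MPI_Comm_create(MPI_COMM_WORLD, half_group, &half_comm)
       would have to be called by every rank in MPI_COMM_WORLD.            */

    if (half_comm != MPI_COMM_NULL) MPI_Comm_free(&half_comm);
    MPI_Group_free(&half_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}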

