Dear all,
I want to use ScaLAPACK with MPI in the following way:
- construct subcommunicators with MPI_COMM_SPLIT (a minimal sketch is below),
- have each subcommunicator call ScaLAPACK routines independently.
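The splitting step, in outline, would look something like this; the even/odd color choice is only an example of how I form the groups:

    program split_example
      use mpi
      implicit none
      integer :: ierr, myrank, color, subcomm
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      ! Example color: split the world into two groups by even/odd rank.
      color = mod(myrank, 2)
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, myrank, subcomm, ierr)
      ! ... each subcommunicator should then do its own ScaLAPACK work ...
      call MPI_COMM_FREE(subcomm, ierr)
      call MPI_FINALIZE(ierr)
    end program split_example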
At the beginning of the program, right after MPI_INIT, I create a "global" context:
call BLACS_GET(0,0,Mctxt)
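For clarity, the same call with the argument meanings written out (WHAT = 0 asks for the default system context; the first argument is ignored in that case):

    integer :: Mctxt
    ! BLACS_GET(ICONTXT, WHAT, VAL): with WHAT = 0, VAL returns the
    ! default system context; ICONTXT is ignored for this query.
    call BLACS_GET(0, 0, Mctxt)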
Then, after splitting into subcommunicators, I create a "local" context:
ctxt = Mctxt
call BLACS_GRIDMAP(ctxt,ranks,npr,npr,npc)
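Written out a bit more fully, the per-subgroup part looks roughly like this; the way I fill ranks (the subgroup occupying npr*npc consecutive world ranks starting at some rank0) is only my particular setup, not something BLACS requires:

    integer :: ctxt, i, j
    integer, allocatable :: ranks(:,:)

    allocate(ranks(npr, npc))
    ! Fill the process map; BLACS_GRIDMAP reads the entries as ranks in
    ! the communicator behind Mctxt, i.e. MPI_COMM_WORLD here.
    do j = 1, npc
       do i = 1, npr
          ranks(i, j) = rank0 + (j - 1)*npr + (i - 1)
       end do
    end do

    ctxt = Mctxt
    ! BLACS_GRIDMAP(ICONTXT, USERMAP, LDUMAP, NPROW, NPCOL):
    ! on entry ctxt is the system context, on exit it is the new
    ! grid context for this npr x npc subgrid.
    call BLACS_GRIDMAP(ctxt, ranks, npr, npr, npc)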
However, the call to BLACS_GRIDMAP still blocks globally, so the program hangs in this routine (perhaps forever, if one subcommunicator calls it more often than another).
I have searched around a bit and found that with an older BLACS package one had to set
TRANSCOMM = -DUseMpich -DPOINTER_64_BITS=1
in Bmake.inc. However, the newer ScaLAPACK packages, which already include the BLACS, do not seem to use a Bmake.inc anymore (there is no BMAKES directory).
So, how can I keep BLACS_GRIDMAP from blocking globally in the new ScaLAPACK?
Thank you in advance!
Best wishes
Christoph

