
usage of scalapack and mpi


usage of scalapack and mpi

Postby hiro » Wed Oct 03, 2007 5:33 am

Hello


I'd like to know how to use both MPI and ScaLAPACK at the same time in F90.
I am building a program that diagonalizes a large matrix
using the pdsyev routine.
In it, I want to evaluate the elements of the matrix to be diagonalized
(referred to as H) in parallel with MPI by splitting the loops,
so that each rank holds its own portion of H
(the rows of H are divided among the nprocs ranks and each rank stores its share).
After that, I want to diagonalize H with the pdsyev routine.
The problems are:

1) A run-time error 'SIGSEGV, segmentation fault occurred' is raised at
CALL BLACS_GRIDINFO( CONTXT, NPROW, NPCOL, MYROW, MYCOL ).
What is wrong?

2) What is the relation between the MPI rank and the BLACS process id
'IAM' (as used in pdsyev's sample program)?
The elements of H are stored per MPI rank and must be handed to the
BLACS-distributed matrix as local entries via PDELSET(A,I,J,DESCA,H(I,J)).
Can I use both MPI and BLACS variables simultaneously?

If anyone has ideas, please give me suggestions.
Thank you in advance.

The basic structure of my program is as follows:
=============================================================
double precision h(m,n),a(n/nprow,n/npcol)

call mpi_init(ierr)
call mpi_comm_size(mpi_comm_world,nprocs,ierr)
call mpi_comm_rank(mpi_comm_world,myrank,ierr)

m=n/nprocs
istart=myrank*m                ! this rank owns global rows istart+1 .. istart+m
do j=1,n
do i=1,m
h(i,j)= ...                    ! evaluated value of the global element H(istart+i,j)
end do
end do

call mpi_barrier(mpi_comm_world,ierr)

! define nb, nprow and npcol here

call blacs_pinfo(iam,nprocs)

call blacs_get(-1,0,contxt)
call blacs_gridinit(contxt,'R',nprow,npcol)
call blacs_gridinfo(contxt,nprow,npcol,myrow,mycol)

call descinit(desca,n,n,nb,nb,0,0,contxt,lda,info)
call descinit(descz,n,n,nb,nb,0,0,contxt,lda,info)

do j=1,n
do i=1,m
ii=istart+i
call pdelset(a,ii,j,desca,h(i,j))
end do
end do

call pdsyev('N','U',n,a,1,1,desca,w,z,1,1,descz,work,lwork,info)

call blacs_gridexit(contxt)
call blacs_exit(0)

call mpi_finalize(ierr)

stop
end
================================================================
hiro
 
Posts: 1
Joined: Mon Oct 01, 2007 7:31 am

usage of scalapack and mpi

Postby rosenbe2 » Tue Nov 13, 2007 4:24 pm

Hiro,

I have used mpi and scalapack with intel fortran.

Since you only give a sketch of your code, I am not sure how useful my comments will be, but ...

Make sure contxt is an integer and that nprow*npcol = nprocs. nb could be a number like 64 or 128, depending on how large your array is; if it is very large, those values should be OK, but if it is small you might want to use a smaller block size like 2.
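For example, something like the following sketch (my own illustration, not from your code; numroc is the standard ScaLAPACK tool function, and n, nb, nprocs are assumed to be the variables you already use):
=============================================================
integer            n, nb, nprocs            ! n and nprocs set earlier
integer            nprow, npcol, myrow, mycol
integer            locrows, loccols, lda
integer, external :: numroc

nb = 64                                     ! block size; 64 or 128 for large n

! choose a near-square process grid with nprow*npcol = nprocs
do nprow = int(sqrt(dble(nprocs))), 1, -1
   if (mod(nprocs,nprow) .eq. 0) exit       ! largest divisor <= sqrt(nprocs)
end do
npcol = nprocs/nprow

! after blacs_gridinit/blacs_gridinfo, size the local piece of a:
locrows = numroc(n, nb, myrow, 0, nprow)
loccols = numroc(n, nb, mycol, 0, npcol)
lda     = max(1, locrows)                   ! leading dimension for descinit
=============================================================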

You want to call blacs_exit(1), which tells the BLACS that you still want to make more MPI calls afterwards (so MPI is not finalized for you).
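In code, the shutdown order would look roughly like this (a sketch, assuming you keep mpi_finalize yourself):
=============================================================
call blacs_gridexit(contxt)   ! release the process grid
call blacs_exit(1)            ! 1 = keep MPI alive for further MPI calls
! ... any remaining MPI communication ...
call mpi_finalize(ierr)       ! you finalize MPI yourself
=============================================================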

If your routine is called inside a loop, you only want to call blacs_pinfo, blacs_get, blacs_gridinit and blacs_gridinfo once, and likewise blacs_gridexit and blacs_exit only once. Use a Fortran SAVE to keep the grid variables such as nprow, npcol and contxt, as in the sketch below.
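Something along these lines (just a sketch; the subroutine name my_diag and the first-call flag are my own illustration):
=============================================================
subroutine my_diag()               ! called many times from a loop
   logical, save :: first = .true.
   integer, save :: contxt, nprow, npcol, myrow, mycol, iam, nprocs

   if (first) then                 ! grid set-up happens only once
      call blacs_pinfo(iam, nprocs)
      ! ... choose nprow and npcol here (nprow*npcol = nprocs) ...
      call blacs_get(-1, 0, contxt)
      call blacs_gridinit(contxt, 'R', nprow, npcol)
      call blacs_gridinfo(contxt, nprow, npcol, myrow, mycol)
      first = .false.
   end if

   ! ... descinit, pdelset, pdsyev as in your sketch ...
   ! call blacs_gridexit/blacs_exit only once, after the last use
end subroutine my_diag
=============================================================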

Finally, desca and descz will be identical, so you need only use desca.

Also, if you wish to get back a global version of the Z matrix, you need
to use dgsum2d; its full argument list is dgsum2d(contxt, scope, top, m, n, a, lda, rdest, cdest), with scope 'A' and rdest = cdest = -1 to sum over all processes, as in the sketch below.
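Roughly like this (a sketch of my own; zg, locr, locc and the use of the ScaLAPACK tool routines numroc and indxl2g are illustrative, not taken from your code):
=============================================================
double precision   zg(n,n)              ! full copy of Z, built on every rank
integer            il, jl, ig, jg, locr, locc
integer, external :: numroc, indxl2g

locr = numroc(n, nb, myrow, 0, nprow)   ! local rows of the distributed Z
locc = numroc(n, nb, mycol, 0, npcol)   ! local columns

zg = 0.0d0
do jl = 1, locc
   jg = indxl2g(jl, nb, mycol, 0, npcol)      ! local -> global column
   do il = 1, locr
      ig = indxl2g(il, nb, myrow, 0, nprow)   ! local -> global row
      zg(ig,jg) = z(il,jl)                    ! copy the entries this rank owns
   end do
end do

! add the partial copies over the whole grid; rdest = cdest = -1 means
! every process receives the complete n x n matrix
call dgsum2d(contxt, 'A', ' ', n, n, zg, n, -1, -1)
=============================================================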
rosenbe2
 
Posts: 2
Joined: Tue Nov 06, 2007 2:42 pm
Location: Naval Research Laboratory

