Motivation and Objective
Simulation of flow and transport in porous media gives rise to
extremely large-scale computations, making iterative solvers a natural
choice. The solutions of such problems exhibit strongly local behavior,
so it is essential to apply local grid refinement based on a posteriori
error analysis. In addition, fundamental physical limits on processor
speed may require the exploitation of parallelism. The goal is to create
a tool that combines finite element/volume discretization techniques,
efficient (parallel) preconditioning of the resulting sparse systems,
error control, and (parallel) adaptive grid refinement.
Approach and Accomplishments
Our computational approach is as follows.
We use a mesh generator (Triangle for 2-D meshes and NETGEN
for 3-D meshes) to produce a good coarse mesh. The problem is then
solved sequentially on the coarse mesh (by every processor).
The solution is used to compute a posteriori error
estimates, which serve as weights in an element-based splitting of
the coarse mesh into sub-domains (using METIS).
This splitting ensures that the local
refinements that follow will produce a computational mesh whose number
of triangles/tetrahedra is balanced over the sub-domains.
Every sub-domain is ``mapped'' to a processor.
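The weighted splitting can be illustrated by the following simplified sketch. METIS additionally minimizes the interface between sub-domains by working on the element connectivity graph; this greedy heuristic (a hypothetical stand-in, not the METIS algorithm) only shows how a posteriori error weights drive a balanced assignment of elements to sub-domains:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Assign each coarse element to one of `nparts` sub-domains so that the
// total error weight per sub-domain is roughly balanced.  Greedy heuristic:
// visit the elements in decreasing weight order and place each into the
// currently lightest part.  (Unlike METIS, connectivity is ignored here.)
std::vector<int> partitionByWeight(const std::vector<double>& weight, int nparts) {
    std::vector<int> order(weight.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return weight[a] > weight[b]; });

    std::vector<double> load(nparts, 0.0);   // accumulated weight per part
    std::vector<int> part(weight.size(), -1);
    for (int e : order) {
        int p = std::min_element(load.begin(), load.end()) - load.begin();
        part[e] = p;
        load[p] += weight[e];
    }
    return part;
}
```

Elements with large estimated error then end up spread over the processors, so the subsequent refinement produces sub-meshes of comparable size.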
Then, based on a posteriori error analysis, each processor repeatedly
refines its region independently. After every step of independent
refinement the processors communicate in order
to make the mesh on that level globally conforming.
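The local refinement decision can be illustrated by a simple maximum-strategy marking (an assumed criterion for illustration; the actual code may use a different strategy):

```cpp
#include <algorithm>
#include <vector>

// Maximum-strategy marking: refine every element whose local error
// indicator eta exceeds the fraction theta of the largest indicator
// on this processor's region.
std::vector<bool> markForRefinement(const std::vector<double>& eta, double theta) {
    double etaMax = *std::max_element(eta.begin(), eta.end());
    std::vector<bool> mark(eta.size());
    for (std::size_t i = 0; i < eta.size(); ++i)
        mark[i] = (eta[i] >= theta * etaMax);
    return mark;
}
```

The marked elements are refined locally; the subsequent communication step resolves hanging nodes on sub-domain interfaces so the level mesh is globally conforming.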
Concerning the a posteriori error analysis, I worked with Dr. Raytcho Lazarov
on the article ``Error Control and Adaptive Grid Refinement for
Convection-Diffusion-Reaction Problems in 3-D''. The article describes
an adaptive numerical technique based on finite volume
approximations and presents computational results from various model
simulations of steady-state single-phase flow and transport of
passive chemicals in non-homogeneous porous media in 3-D.
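For orientation, a typical residual-based indicator for a convection-diffusion-reaction equation $-\nabla\cdot(K\nabla u)+\mathbf{b}\cdot\nabla u+cu=f$ has the form below; this is the standard kind of estimator such techniques build on, not necessarily the exact one derived in the article:

```latex
\eta_T^2 = h_T^2 \,\bigl\| f + \nabla\!\cdot\!(K\nabla u_h)
           - \mathbf{b}\cdot\nabla u_h - c\,u_h \bigr\|_{L^2(T)}^2
         + \tfrac12 \sum_{e \subset \partial T} h_e \,
           \bigl\| [\![\, K\nabla u_h \cdot \mathbf{n}_e \,]\!] \bigr\|_{L^2(e)}^2,
\qquad
\eta = \Bigl( \sum_{T} \eta_T^2 \Bigr)^{1/2},
```

where the first term measures the element residual and the second the jumps of the diffusive flux across inter-element faces.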
I developed a 2-D code with the functionality described in the computational
approach above. The resulting multilevel structure is used to define
multigrid preconditioners.
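The multigrid idea over such a hierarchy can be sketched on the simplest model, a 1-D Poisson problem; this is an illustration of a standard V-cycle under assumed components (damped Jacobi smoothing, full-weighting restriction, linear interpolation), not the preconditioner implemented in the code:

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

// Residual r = f - A u for the 1-D Poisson operator A = tridiag(-1,2,-1)/h^2,
// with n interior points and h = 1/(n+1); homogeneous Dirichlet boundaries.
Vec residual(const Vec& u, const Vec& f) {
    int n = u.size();
    double h2 = 1.0 / ((n + 1.0) * (n + 1.0));
    Vec r(n);
    for (int i = 0; i < n; ++i) {
        double left  = (i > 0)     ? u[i - 1] : 0.0;
        double right = (i < n - 1) ? u[i + 1] : 0.0;
        r[i] = f[i] - (2.0 * u[i] - left - right) / h2;
    }
    return r;
}

// Damped Jacobi smoothing (weight 2/3).
void smooth(Vec& u, const Vec& f, int sweeps) {
    int n = u.size();
    double h2 = 1.0 / ((n + 1.0) * (n + 1.0));
    for (int s = 0; s < sweeps; ++s) {
        Vec v = u;
        for (int i = 0; i < n; ++i) {
            double left  = (i > 0)     ? v[i - 1] : 0.0;
            double right = (i < n - 1) ? v[i + 1] : 0.0;
            double jac = (h2 * f[i] + left + right) / 2.0;
            u[i] = v[i] + (2.0 / 3.0) * (jac - v[i]);
        }
    }
}

// One V-cycle: pre-smooth, restrict the residual (full weighting),
// recurse on the coarse grid, interpolate the correction, post-smooth.
// Assumes n = 2^k - 1 so that the coarse grid has (n-1)/2 points.
void vcycle(Vec& u, const Vec& f) {
    int n = u.size();
    if (n <= 3) { smooth(u, f, 50); return; }   // "exact" coarse solve
    smooth(u, f, 3);
    Vec r = residual(u, f);
    int nc = (n - 1) / 2;
    Vec fc(nc), ec(nc, 0.0);
    for (int i = 0; i < nc; ++i)
        fc[i] = 0.25 * (r[2 * i] + 2.0 * r[2 * i + 1] + r[2 * i + 2]);
    vcycle(ec, fc);
    for (int i = 0; i < nc; ++i) {              // linear interpolation
        u[2 * i + 1] += ec[i];
        u[2 * i]     += 0.5 * ec[i];
        u[2 * i + 2] += 0.5 * ec[i];
    }
    smooth(u, f, 3);
}
```

In the adaptive setting, the levels generated by the refinement history play the role of the nested grids in this sketch.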
I worked with Dr. Panayot Vassilevski
and Dr. Charles Tong on connecting the developed software to the
hypre Preconditioner Library. The connection is
implemented using the Finite-Element Interface (FEI) specification,
which provides a layered abstraction that hides the internal
details of the hypre library.
The initialization is done in parallel. A GUI, written with Motif, has been
developed to facilitate the selection of the different hypre options.
After the solution is obtained on a certain level, it is sent directly through
an AF_INET socket to a visualizer (SG,
developed by Dr. Michael Holst)
residing on the user's machine. The benefit of this strategy is that the
parallel machine (usually remote) is used only for computation, while
the local machine handles the visualization. This approach is very efficient
for real-time visualization of time-dependent problems.
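The data transfer amounts to serializing the level solution and writing the buffer to the connected socket with send(). The sketch below uses a hypothetical wire format (a 32-bit level number and value count followed by the nodal values); the actual protocol spoken by SG is not shown here:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Pack a level solution into a contiguous byte buffer:
// [level : uint32][count : uint32][values : count x double].
// The resulting buffer would be written to the AF_INET socket with send().
// This layout is an assumption for illustration, not SG's real format.
std::vector<unsigned char> packSolution(std::uint32_t level,
                                        const std::vector<double>& values) {
    std::uint32_t count = values.size();
    std::vector<unsigned char> buf(8 + 8 * count);
    std::memcpy(&buf[0], &level, 4);
    std::memcpy(&buf[4], &count, 4);
    std::memcpy(&buf[8], values.data(), 8 * count);
    return buf;
}
```

Keeping the serialization separate from the socket call also makes the format easy to test without a network connection.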
The 3-D version of the code has the same features. Under construction is
the communication step of the parallel local refinement that makes
the mesh on a given level globally conforming. This step is
significantly more complicated than its 2-D counterpart.
The lowest-order Raviart-Thomas (RT0) finite element has been added to both
the 2-D and 3-D codes. Under development is a discontinuous approximation
of the convection terms for the mixed finite element method.
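For reference, on a triangle $T$ with vertices $p_1,p_2,p_3$ the three RT0 basis functions, one per edge $e_i$ (the edge opposite $p_i$), take the standard form

```latex
\varphi_i(\mathbf{x}) \;=\; \sigma_i\,\frac{|e_i|}{2|T|}\,(\mathbf{x}-p_i),
\qquad
\nabla\cdot\varphi_i \;=\; \sigma_i\,\frac{|e_i|}{|T|},
```

where $\sigma_i=\pm1$ fixes the orientation of the edge normal. The normal component of $\varphi_i$ is constant on $e_i$ and vanishes on the other two edges, so the degrees of freedom are edge fluxes; the tetrahedral case is analogous with face fluxes.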
Both the 2-D and 3-D codes are
written in C++. The parallel computations are done
using the Message Passing Interface (MPI) library.
With Dr. Raytcho Lazarov and Dr. Panayot Vassilevski, I finished the article
``Interior penalty discontinuous approximations of elliptic problems'',
which has been submitted to SIAM J. Scientific Computing.
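For the model problem $-\Delta u=f$, a symmetric interior penalty bilinear form reads as follows (the article treats a more general setting; this is only the model case for orientation):

```latex
a_h(u,v) \;=\; \sum_{T}\int_T \nabla u\cdot\nabla v
 \;-\; \sum_{e}\int_e \Bigl( \{\!\{\nabla u\cdot\mathbf{n}\}\!\}\,[\![v]\!]
        + \{\!\{\nabla v\cdot\mathbf{n}\}\!\}\,[\![u]\!] \Bigr)
 \;+\; \sum_{e}\frac{\sigma}{h_e}\int_e [\![u]\!]\,[\![v]\!],
```

where $\{\!\{\cdot\}\!\}$ and $[\![\cdot]\!]$ denote averages and jumps across inter-element faces $e$, and the penalty parameter $\sigma$ is taken large enough to ensure coercivity.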
Future Plans
I will continue my work on error control, adaptive grid refinement and a
posteriori error estimates for elliptic problems based on finite
volume/element methods, paying special attention to parallel computations.
The 3-D parallel version of the developed code is still an on-going
project.
August 17, 2000.