Compiling Elmer parallel version (Linux)

From Elmer Wiki

Instructions on how Elmer can be compiled with OpenMPI and (optionally) Hypre on *NIX/Linux:

1.) Build the OpenMPI libraries; one should preferably opt for static libraries (--enable-static in configure). You should end up with an OpenMPI installation containing the following sub-directories: /path/to/openmpi/{bin,lib,include}
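Such an OpenMPI build could look roughly like the following sketch (the installation prefix /path/to/openmpi is a placeholder; adapt it to your system):

```shell
# Inside the unpacked OpenMPI source tree:
./configure --prefix=/path/to/openmpi --enable-static
make
make install
```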

2.) Before building Elmer, put the directory /path/to/openmpi/lib into your LD_LIBRARY_PATH environment variable, and add the path to your compiler wrapper scripts (mpif90, mpicc, mpic++), /path/to/openmpi/bin, to PATH.

N.B.: The MPI libraries have to be in LD_LIBRARY_PATH at runtime as well, so it would be smart to introduce this setting in one of your rc-files.
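In a Bourne-style shell (e.g. in ~/.bashrc) this could be done with lines like the following sketch, where /path/to/openmpi is again a placeholder:

```shell
# Make the OpenMPI wrapper scripts and shared libraries visible,
# both for compiling Elmer and for running it later
export PATH="/path/to/openmpi/bin:$PATH"
export LD_LIBRARY_PATH="/path/to/openmpi/lib:$LD_LIBRARY_PATH"
```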

Furthermore, you will need to select your compilers by setting the following environment variables:

CC   mpicc
CXX  mpic++
F77  mpif90
FC   mpif90

3.) Configure each Elmer module, giving the root of your MPI installation tree using --with-mpi-dir="/path/to/openmpi" and, of course, the desired target path for installation using --prefix="/path/to/where/elmerinstallation".

4.) Thereafter, do a "make clean" (to be safe and get rid of earlier compiled files), "make" (to compile) and "make install" (to put the files into the installation directory).

Steps 2.), 3.) and 4.) packed into a script could look as follows (of course, you have to change the paths to fit your system):

#!/bin/sh -f
# This compilation script worked on a RHEL5.0
# kernel: x86_64      2.6.18-92.1.22.el5

export CC="mpicc"
export CXX="mpicxx"
export FC="mpif90"
export F77="mpif90"

export ELMER_INSTALL="/usr/local/"
export ELMER_HOME="/usr/local/"
export MPI_HOME="/usr/lib64/openmpi/1.2.8-gcc44"


export CFLAGS="-O3 -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"
export CXXFLAGS="-O3 -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"
export FCFLAGS="-O3  -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"
export F90FLAGS="-O3  -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"
export F77FLAGS="-O3 -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"
export FFLAGS="-O3  -march=x86-64  -ftree-vectorize -funroll-loops -ffast-math"


modules="matc umfpack mathlibs meshgen2d eio hutiter fem elmergrid"

##### configure, build and install #########
for m in $modules; do
  echo "module $m"
  echo "###############"
  ##### parallel #######
  cd $m; make distclean; ./configure --prefix=$ELMER_INSTALL --with-mpi=yes --with-mpi-dir=$MPI_HOME --with-64bits=yes && make && make install && cd ..
  ##### serial #########
#  cd $m; ./configure --prefix=$ELMER_INSTALL && make -j2 && make install && cd ..
done

Recent Ubuntu versions (as of Sept. 2010: Lucid Lynx 10.04) seem to have their own peculiarities with respect to compilation. Some hints are collected on a separate Ubuntu parallelisation page.

Adding Hypre

Elmer has also been adapted to utilize the quite capable parallel numerical library HYPRE. If you would like to include it in your Elmer build, compile HYPRE between steps 1.) and 2.) above, applying the same settings as in 2.) with one exception: change CC to "mpicc -fPIC" (i.e., include -fPIC in the compilation command), as there is a single file in the HYPRE compilation for which the CFLAGS are ignored but the switch -fPIC is needed. You do not need HYPRE to get a working Elmer MPI version; if it is not found, it is simply not included.
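Building HYPRE with that one exception could then look roughly like this sketch (the installation prefix is a placeholder, and the location of the configure script inside the source tree is an assumption you should verify against your HYPRE version):

```shell
# Inside the unpacked HYPRE source tree; in many HYPRE releases the
# configure script sits in the src/ sub-directory:
cd src
CC="mpicc -fPIC" ./configure --prefix=/path/to/hypre2.0.0
make
make install
```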
Adding the following lines to the compilation script above should include Hypre:

export HYPRE=/path/to/hypre2.0.0 ## change with correct path
export CPPFLAGS="-I$HYPRE/include"

# modified configure and build
for m in $modules; do
  cd $m
  ./configure --prefix=$ELMER_INSTALL --with-mpi=yes --with-mpi-dir=$MPI_HOME --with-64bits=yes --with-hypre="-L$HYPRE/lib -lHYPRE" && make -j2 && make install
  cd ..
done

Some distributions (e.g., Debian) include a package for HYPRE, in which case the library should be found automatically (without applying any of the changes above).

Adding MUMPS

MUMPS (MUltifrontal Massively Parallel Solver) is a very effective parallel direct solver. It needs ScaLAPACK and BLACS to be compiled in. Like HYPRE above, MUMPS has to be built in advance (before linking it into Elmer). The following modification of the compilation script might be needed to include MUMPS in a parallel version of ElmerSolver:
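Building MUMPS itself is done via a hand-edited Makefile.inc rather than configure. A rough sketch (the template file name varies between MUMPS versions, so treat it as an assumption, and adapt the ScaLAPACK, BLACS and MPI paths inside Makefile.inc to your system):

```shell
# Inside the unpacked MUMPS source tree: copy a Makefile.inc template
# (check the Make.inc/ directory for the one matching your compilers)
# and edit the ScaLAPACK, BLACS and MPI paths in it before building.
cp Make.inc/Makefile.inc.generic Makefile.inc
make d    # build the double-precision solver (libdmumps)
```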

## change with correct paths
export MUMPS="/path/to/MUMPS"
export BLACS="/path/to/BLACS"
export SCALAPACK="/path/to/scalapack/lib"

export FCPPFLAGS="$FCPPFLAGS -DHAVE_MUMPS"
export FCFLAGS="$FCFLAGS -I$MUMPS/include"
export LDFLAGS="-L$MUMPS/lib -ldmumps -lmumps_common -lpord -L$SCALAPACK -lscalapack \
                $BLACS/LIB/*.a $BLACS/LIB/*.a -L$MPI_HOME/lib -lmpi_f77 -lmpi"

Note that on some distributions (e.g., Debian) BLACS and ScaLAPACK are included and hence do not need to be explicitly declared in the script.