Internal partitioning

One noteworthy feature of Elmer is its good parallel performance in many application areas. This is achieved by splitting the computational mesh into a number of pieces – a process called partitioning. The relatively independent pieces are solved by different processes that communicate using MPI.

Partitioning in Elmer has historically been done with ElmerGrid as a preprocessing step. Unfortunately this has some limitations:

  • It is cumbersome to have to perform a new partitioning whenever the number of processes changes. This applies particularly to new users of parallel computing.
  • It is difficult to include all the information needed to minimize communication. For example, rotating machines should have a partitioning that keeps the rotating BCs in the same partition.
  • It is impossible to repartition a mesh within ElmerSolver if the mesh changes.

For these reasons, internal partitioning routines have recently been developed. The generic internal partitioning routine makes use of Zoltan, which has the advantage in repartitioning that it can honor the previous partitioning when minimizing communication. There are also geometric methods that sometimes provide optimal communication.

To compile with Zoltan you need fresh source code and typically need to include the following line in your build script:

-DWITH_Zoltan:BOOL=TRUE
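
For example, a minimal out-of-source CMake call could look as follows (the directory layout, install prefix and the MPI flag are illustrative; adapt them to your own setup):

mkdir build && cd build
cmake -DWITH_MPI:BOOL=TRUE -DWITH_Zoltan:BOOL=TRUE -DCMAKE_INSTALL_PREFIX=../install ../elmerfem
make -j4 install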

Note that Zoltan resides in a submodule that is not updated by default. Hence you may need to give the following command in your Git repository:

 git submodule update --init 
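
Alternatively, when cloning from scratch, the submodules can be fetched in the same go (assuming the official GitHub repository):

 git clone --recurse-submodules https://github.com/ElmerCSC/elmerfem.git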

Some features already available for the end-user include:

  • Master-slave type of partitioning where partition “0” does all the work and communicates the mesh data to other partitions.
  • Partitioning with Zoltan using the dual graph
  • Partitioning with recursive geometric division
  • Hybrid partitioning where BCs may be partitioned first using geometric partitioning, followed by partitioning of the rest of the mesh
  • Boundary partitioning may be extended by a given number of layers into the bulk mesh
  • Halo elements may be defined for boundary-boundary coupling, enabling the parallel solution of rotating interfaces or contact problems, for example (BC halo)
  • Halo elements may be defined for Discontinuous Galerkin problems (DG halo).

The basic use of internal partitioning consists of adding the following to the Simulation section of the .sif file

Partition Mesh = Logical True
Partitioning Method = String "zoltan"

and running the case in a standard parallel manner, for example

mpirun -np 8 ElmerSolver_mpi
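
For context, a minimal sketch of a complete Simulation section with these keywords in place might look as follows (the keywords besides the two partitioning ones are ordinary Elmer settings, shown here only for illustration):

Simulation
  Max Output Level = 5
  Coordinate System = Cartesian
  Simulation Type = Steady State
  Steady State Max Iterations = 1

  ! Enable internal partitioning of the mesh at start-up
  Partition Mesh = Logical True
  Partitioning Method = String "zoltan"
End

The name of the case file can be given as an argument to ElmerSolver_mpi, or via the ELMERSOLVER_STARTINFO file, just as in a serial run.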

Development is still ongoing, but those interested may study some of the test cases, for example:

  • PartitioningZoltanQuads: standard partitioning
  • MortarPoissonZoltan3D: hybrid partitioning where the rotating interface is split into two partitions and the rest of the mesh is further split into four with Zoltan (BC halo)
  • ContactPatch3DZoltan: the contact pair is contained in one process and the rest is split among many processes with Zoltan.
  • AdvReactDGZoltan: standard Discontinuous Galerkin solution where interface elements are communicated (DG halo).
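
If you have the source tree at hand, these tests can be found under the fem/tests directory, for example (path assuming a checkout of the elmerfem repository):

 ls fem/tests/PartitioningZoltanQuads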

The examples may help in taking these features into use. The features will be developed further and later documented in the ElmerSolver manual. All feedback is welcome!