Since quite recently, Elmer includes an interface to the Zoltan library, which provides partitioning and repartitioning routines (thanx Joe & Juhani). For standard use cases this offers a more straightforward way of doing parallel computation with MPI: the serial mesh can be loaded by a master process and is then internally distributed to the other parallel tasks.
To use internal partitioning with Zoltan you need two additional commands in the Simulation section:
Code: Select all
Partition Mesh = Logical True
Partitioning Method = String "Zoltan"
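For context, here is a minimal sketch of a Simulation section with these keywords in place. The other keyword values are just typical placeholders for a steady-state case and are not part of the Zoltan setup itself:
Code: Select all
Simulation
  Max Output Level = 5
  Coordinate System = Cartesian
  Simulation Type = Steady State
  Steady State Max Iterations = 1
  ! Let the master task read the serial mesh and distribute it via Zoltan
  Partition Mesh = Logical True
  Partitioning Method = String "Zoltan"
End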
The parallel run is then launched in the usual way with the MPI executable:
Code: Select all
mpirun -np #np ElmerSolver_mpi
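For example, on four tasks (the case file can be given on the command line, or via the ELMERSOLVER_STARTINFO file as usual):
Code: Select all
mpirun -np 4 ElmerSolver_mpi case.sif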
In order to use Zoltan you need a version of Elmer that has been compiled with it. Zoltan is available as a git submodule and is built as a CMake subproject. To fetch it, run the following in the source tree:
Code: Select all
git submodule sync
git submodule update --init
and then enable it in the cmake configuration with:
Code: Select all
-DWITH_Zoltan:BOOL=TRUE
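Put together, a configure-and-build sketch might look like the following. The out-of-source build directory and the additional WITH_MPI flag are assumptions about a typical MPI build, not a prescription:
Code: Select all
# configure in a separate build directory (typical setup, adapt to taste)
mkdir build && cd build
cmake -DWITH_MPI:BOOL=TRUE -DWITH_Zoltan:BOOL=TRUE ..
make -j4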
Ideally this is suited to cases where the master process does not introduce a CPU-time or memory bottleneck. Also, halo elements are not yet communicated, and no special constraints (related to BCs, for example) are considered. So the old predistributed approach is still often the best choice.
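For reference, the predistributed approach means partitioning the mesh beforehand with ElmerGrid; as a sketch, a simple geometric split into 2 x 2 x 1 = 4 parts might look like this (Metis-based partitioning options exist as well):
Code: Select all
# partition the existing Elmer mesh in directory "meshdir" into 4 parts
ElmerGrid 2 2 meshdir -partition 2 2 1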
All comments and experiences are welcome!
-Peter