First off, I am using Elmer revision 7111 on Linux Mint 17.
I'm working on Josefin's modification to the Stokes solver in Elmer/Ice. I have been running into significant memory leaks when running in parallel (my computer runs out of memory on an 11,000-element simulation after 10 time steps). I have narrowed the memory leak down to a ParallelInitMatrix call, specifically to the pointer assignment:
Code: Select all
Matrix % ParMatrix => ParInitMatrix( Matrix, Matrix % ParallelInfo )
where FlowSolveSIAFS.f90 is the modified Stokes solver.
In the program I am working on, Matrix is a temporary pointer whose memory should be released at the end of the solver. However, simply deallocating Matrix % ParMatrix is not sufficient to release the memory. A sample memory trace from Valgrind is below:
Code: Select all
==13426== 6,338,780 (1,880 direct, 6,336,900 indirect) bytes in 1 blocks are definitely lost in loss record 1,141 of 1,142
==13426==    at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13426==    by 0x4FAD2DC: __generalutils_MOD_allocatematrix (GeneralUtils.f90:1740)
==13426==    by 0x512D677: __sparitersolve_MOD_splitmatrix (SParIterSolve.f90:430)
==13426==    by 0x51338DA: __sparitersolve_MOD_parinitmatrix (SParIterSolve.f90:377)
==13426==    by 0x516FDFB: __parallelutils_MOD_parallelinitmatrix (ParallelUtils.f90:563)
==13426==    by 0x1B1604B7: flowsolversiafs_ (FlowSolveSIAFS.f90:2011)
==13426==    by 0x500F4DA: execsolver_ (in /usr/local/ElmerRev7111/lib/libelmersolver-7.0.so)
==13426==    by 0x506AB57: __mainutils_MOD_singlesolver (MainUtils.f90:3795)
==13426==    by 0x507A88F: __mainutils_MOD_solveractivate (MainUtils.f90:3961)
==13426==    by 0x507B872: solvecoupled.5582 (MainUtils.f90:1835)
==13426==    by 0x507E489: __mainutils_MOD_solveequations (MainUtils.f90:1595)
==13426==    by 0x529E5A2: execsimulation.1923 (ElmerSolver.f90:1577)
My question: is there an easy way to make sure that all of the memory reachable through the temporary Matrix % ParMatrix pointer is released? I'm solving with a direct method (MUMPS).
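For reference, this is roughly the cleanup I have tried at the end of the solver. It is only a sketch of my attempt, not a verified fix: I am assuming FreeMatrix (the counterpart of the AllocateMatrix call that Valgrind flags in GeneralUtils.f90) on the serial matrix, plus a plain DEALLOCATE of the descriptor, which the Valgrind trace suggests does not reach the blocks allocated inside SplitMatrix:

```fortran
! Attempted cleanup at the end of FlowSolveSIAFS (sketch only).
! Assumption: FreeMatrix is the deallocation routine paired with
! AllocateMatrix in GeneralUtils.f90; whether anything recursively
! frees the split parallel structures built by SplitMatrix is
! exactly what I am unsure about.
IF ( ASSOCIATED( Matrix % ParMatrix ) ) THEN
   ! Releases the top-level parallel descriptor, but Valgrind still
   ! reports ~6 MB indirectly lost per call, i.e. the matrices
   ! allocated inside SParIterSolve's SplitMatrix remain live:
   DEALLOCATE( Matrix % ParMatrix )
   Matrix % ParMatrix => NULL()
END IF
! Free the temporary serial matrix itself:
CALL FreeMatrix( Matrix )
```

So presumably the members of the split matrix need to be freed one by one before the descriptor is deallocated, unless there is an existing routine in SParIterSolve or ParallelUtils that does this.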