Convergence problem with parallel computations

Numerical methods and mathematical models of Elmer
stoykov
Posts: 26
Joined: 11 May 2012, 13:18

Convergence problem with parallel computations

Post by stoykov »

Hello Elmer team,

I am modeling elastic structures with geometrical nonlinearity and I use ElasticSolver. I obtain a solution when I use different iterative methods on a single processor, but when I go to parallel computations I cannot obtain a solution, i.e. the method does not converge. Here is an example of my Solver section:

Code: Select all

Solver 1
  Equation = Elasticity Solver
  Variable = Displacement
  Variable DOFs = 3
  Procedure = "ElasticSolve" "ElasticSolver"

  Linear System Solver = Iterative
  Linear System Iterative Method = BiCGStabl
  Linear System Preconditioning = ILU1
  Linear System ILUT Tolerance = 1.0e-3
  Linear System Max Iterations = 500
  Linear System Convergence Tolerance = 1.0e-4
  Linear System Precondition Recompute = 1

  Nonlinear System Newton After Tolerance = 1.0e-6
  Nonlinear System Newton After Iterations = 20
  Nonlinear System Max Iterations = 1000
  Nonlinear System Convergence Tolerance = 1.0e-5
  Nonlinear System Relaxation Factor = 1.0
  Steady State Convergence Tolerance = 1.0e-4
End
The output is the following:

Code: Select all

ElasticSolve: 
ElasticSolve: 
ElasticSolve: -------------------------------------
ElasticSolve:  ELASTICITY ITERATION              1
ElasticSolve: -------------------------------------
ElasticSolve: 
ElasticSolve: Starting assembly...
ElasticSolve: Assembly done
ElasticSolve: BCs are now set
      10 0.1934E+01 0.4599E+01
      20 0.2436E+01 0.5794E+01
      30 0.7655E+01 0.1820E+02
      40 0.4041E+01 0.9609E+01
      50 0.9782E+00 0.2326E+01
      60 0.1523E+01 0.3622E+01
      70 0.1003E+01 0.2385E+01
      80 0.1211E+01 0.2879E+01
      90 0.1638E+01 0.3896E+01
     100 0.5240E+01 0.1246E+02
     110 0.1236E+01 0.2939E+01
.....
     420 0.1255E+01 0.2984E+01
     430 0.3043E+02 0.7237E+02
     440 0.1551E+01 0.3690E+01
     450 0.1974E+02 0.4695E+02
     460 0.4023E+02 0.9566E+02
     470 0.8558E+02 0.2035E+03
     480 0.1478E+01 0.3515E+01
     490 0.2979E+01 0.7084E+01
     500 0.3644E+01 0.8666E+01
     501 0.3644E+01 0.8666E+01
ERROR:: IterSolve: Failed convergence tolerances.
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 9526 on
node stan exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
I generate the mesh with Gmsh and then I partition it with ElmerGrid -metis or with -partitionorder.
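For illustration (the partition count of 4 is just an example, not my actual setup), the partitioning command is along these lines:

Code: Select all

# 14 = Gmsh .msh input format, 2 = ElmerSolver mesh output format
# -metis 4 splits the mesh into 4 partitions with Metis
ElmerGrid 14 2 BT2.msh -metis 4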
Has anyone managed to do parallel computations of geometrically nonlinear structures? What should I change in the Solver section in order to obtain convergence?

Best regards,
Stan
raback
Site Admin
Posts: 4832
Joined: 22 Aug 2009, 11:57
Location: Espoo, Finland

Re: Convergence problem with parallel computations

Post by raback »

Hi Stan

The ILU preconditioning is different in parallel and therefore it probably fails here. Preconditioner "none" (in effect diagonal, since scaling is on by default) works exactly the same way in serial and parallel.
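As a minimal sketch of that change, keeping the Krylov settings from your Solver section unchanged:

Code: Select all

  ! Same iterative solver, but a preconditioner that behaves identically
  ! in serial and parallel ("none" is effectively diagonal because
  ! scaling is on by default)
  Linear System Solver = Iterative
  Linear System Iterative Method = BiCGStabl
  Linear System Preconditioning = None
  Linear System Max Iterations = 500
  Linear System Convergence Tolerance = 1.0e-4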

You could also try using the block preconditioner to boost the speed; then you might need to go parallel only later. And even then the block preconditioner can use some strategies which are the same in serial and parallel.

Below is an example utilizing the serial clustering multigrid (cmg). But you could play with the inner solver: in parallel you could try a normal Krylov method or, if you have compiled with Hypre, BoomerAMG, and if with ML, then that.

-Peter

Code: Select all

! These choose the overall block strategies
!-----------------------------------------
  Block Solver = Logical True
  Block Preconditioner = Logical True
  Block Gauss-Seidel = Logical True
  Block Matrix Reuse = Logical True

! Linear system solver for outer loop
!-----------------------------------------
  Outer: Linear System Solver = string "Iterative"
  Outer: Linear System Convergence Tolerance = real 1e-8
  Outer: Linear System Iterative Method = string GCR
  Outer: Linear System GCR Restart = Integer 50
  Outer: Linear System Residual Output = integer 1
  Outer: Linear System Max Iterations = integer 500

! Linear system solver for blocks
!-----------------------------------------
  Linear System Solver = multigrid      
  Linear System Convergence Tolerance = 1.0e-05
  Multigrid Levels = Integer 10

!--- basic algebraic multigrid iteration stuff
  MG Levels = Integer 10
  MG Smoother = String sgs
  MG Pre Smoothing Iterations(1) = 1
  MG Post Smoothing Iterations(1) = 1
 
!--- cluster MG specific parameters
  MG Method = String cluster
  MG Cluster Size = Integer 0
  MG Cluster Alpha = Real 1.8
  MG Strong Connection Limit = Real 0.01
! MG Strong Connection Minimum = Integer 4
  MG Max Iterations = Integer 2
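If you go the Hypre route mentioned above for the inner (block) solver, a rough sketch could look like the following; I am writing the keywords from memory, so please verify them against the Hypre section of the ElmerSolver manual:

Code: Select all

! Possible parallel replacement for the block solver above
! (requires Elmer compiled with Hypre; keyword names should be
! checked against the ElmerSolver manual)
  Linear System Solver = Iterative
  Linear System Iterative Method = BiCGStab
  Linear System Max Iterations = 1000
  Linear System Convergence Tolerance = 1.0e-05
  Linear System Use Hypre = Logical True
  Linear System Preconditioning = BoomerAMG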
stoykov
Posts: 26
Joined: 11 May 2012, 13:18

Re: Convergence problem with parallel computations

Post by stoykov »

Hi Peter,

Thank you for the suggestion of using the block preconditioner. Unfortunately, I couldn't obtain convergence in the sequential version using your example. What I cannot understand is why this example works for 2D problems (the solver setup is similar to BlockLinElast3), but it does not work for 3D problems. Here is the output of my program; there is no convergence even after 10 000 iterations.

Code: Select all

MAIN: 
MAIN: =============================================================
MAIN: ElmerSolver finite element software, Welcome!
MAIN: This program is free software licensed under (L)GPL
MAIN: Copyright 1st April 1995 - , CSC - IT Center for Science Ltd.
MAIN: Webpage http://www.csc.fi/elmer, Email elmeradm@csc.fi
MAIN: Library version: 7.0 (Rev: 6003M)
MAIN:  HYPRE library linked in.
MAIN:  MUMPS library linked in.
MAIN: =============================================================
MAIN: 
MAIN: 
MAIN: -------------------------------------
MAIN: Reading Model: caseIter4.sif
Model Input:  Unlisted keyword: [block preconditioner] in section: [solver 1]
Model Input:  Unlisted keyword: [block gauss-seidel] in section: [solver 1]
Model Input:  Unlisted keyword: [block matrix reuse] in section: [solver 1]
Model Input:  Unlisted keyword: [mg cluster alpha] in section: [solver 1]
Loading user function library: [ElasticSolve]...[ElasticSolver_Init0]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_Init0]
MAIN: -------------------------------------
Loading user function library: [ElasticSolve]...[ElasticSolver_Init]
Loading user function library: [ElasticSolve]...[ElasticSolver]
OptimizeBandwidth: ---------------------------------------------------------
OptimizeBandwidth: Computing matrix structure for: elasticity solver...done.
OptimizeBandwidth:  Half bandwidth without optimization:         2619
OptimizeBandwidth: 
OptimizeBandwidth: Bandwidth Optimization ...done.
OptimizeBandwidth:  Half bandwidth after optimization:          109
OptimizeBandwidth: ---------------------------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_Init]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver]
MAIN: 
MAIN: -------------------------------------
MAIN:  Steady state iteration:            1
MAIN: -------------------------------------
MAIN: 
BlockSolver: Solving system of equations utilizing block strategies
BlockSolver: Using existing variable > displacement 1 <
BlockSolver: Using existing variable > displacement 2 <
BlockSolver: Using existing variable > displacement 3 <
ElasticSolve: 
ElasticSolve: 
ElasticSolve: -------------------------------------
ElasticSolve:  ELASTICITY ITERATION              1
ElasticSolve: -------------------------------------
ElasticSolve: 
ElasticSolve: Starting assembly...
ElasticSolve: Assembly done
ElasticSolve: BCs are now set
ElasticSolve:  Result Norm   :   7.41098468761869816E-323
ElasticSolve:  Relative Change :    0.0000000000000000
BlockSolver: Applying scaling
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 15
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Number of eliminated nodes      41
CMGBonds: Average number of strong bonds  15.270
CMGClusterForm: Number of clusters     210
CMGClusterForm: Average size of clusters  12.838
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    27.478
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 14
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   7.038
CMGClusterForm: Number of clusters      22
CMGClusterForm: Average size of clusters   9.545
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    28.514
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 13
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   2.273
CMGClusterForm: Number of clusters       6
CMGClusterForm: Average size of clusters   3.667
CRS_ClusterMatrixCreate: Coarse matrix reduction factor     4.625
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 15
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Number of eliminated nodes      41
CMGBonds: Average number of strong bonds  15.515
CMGClusterForm: Number of clusters     213
CMGClusterForm: Average size of clusters  12.643
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    28.688
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 14
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   6.526
CMGClusterForm: Number of clusters      25
CMGClusterForm: Average size of clusters   8.520
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    22.209
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 13
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   2.320
CMGClusterForm: Number of clusters       7
CMGClusterForm: Average size of clusters   3.571
CRS_ClusterMatrixCreate: Coarse matrix reduction factor     4.789
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 15
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Number of eliminated nodes      41
CMGBonds: Average number of strong bonds  15.481
CMGClusterForm: Number of clusters     223
CMGClusterForm: Average size of clusters  12.031
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    27.649
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 14
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   6.547
CMGClusterForm: Number of clusters      24
CMGClusterForm: Average size of clusters   9.292
CRS_ClusterMatrixCreate: Coarse matrix reduction factor    26.885
CMGSolve: -------------------------------------------------
CMGSolve: Creating a new matrix and projector for level 13
ChooseClusterNodes: Using clustering based on matrix connections
CMGBonds: Average number of strong bonds   2.125
CMGClusterForm: Number of clusters       7
CMGClusterForm: Average size of clusters   3.429
CRS_ClusterMatrixCreate: Coarse matrix reduction factor     4.105
       1 0.5938E+00
       2 0.3883E+00
       3 0.3656E+00
       4 0.3593E+00
       5 0.3537E+00
       6 0.3483E+00
       7 0.3441E+00
       8 0.3409E+00
       9 0.3375E+00
      10 0.3339E+00
      11 0.3305E+00
      12 0.3271E+00
      13 0.3240E+00
      14 0.3204E+00
      15 0.3168E+00
      16 0.3133E+00
      17 0.3107E+00
      18 0.3080E+00
      19 0.3057E+00
      20 0.3032E+00
      21 0.3011E+00
      22 0.2989E+00
      23 0.2972E+00
      24 0.2956E+00
      25 0.2943E+00
      26 0.2933E+00
      27 0.2924E+00
      28 0.2915E+00
      29 0.2904E+00
      30 0.2895E+00
      31 0.2884E+00
      32 0.2874E+00
      33 0.2865E+00
      34 0.2855E+00
      35 0.2841E+00
      36 0.2828E+00
      37 0.2811E+00
      38 0.2791E+00
      39 0.2767E+00
      40 0.2748E+00

	...

    9971 0.5518E-01
    9972 0.5517E-01
    9973 0.5515E-01
    9974 0.5514E-01
    9975 0.5513E-01
    9976 0.5512E-01
    9977 0.5511E-01
    9978 0.5511E-01
    9979 0.5511E-01
    9980 0.5510E-01
    9981 0.5510E-01
    9982 0.5510E-01
    9983 0.5510E-01
    9984 0.5509E-01
    9985 0.5509E-01
    9986 0.5509E-01
    9987 0.5508E-01
    9988 0.5508E-01
    9989 0.5508E-01
    9990 0.5507E-01
    9991 0.5505E-01
    9992 0.5503E-01
    9993 0.5501E-01
    9994 0.5499E-01
    9995 0.5499E-01
    9996 0.5498E-01
    9997 0.5497E-01
    9998 0.5495E-01
    9999 0.5495E-01
   10000 0.5494E-01
ERROR:: IterSolve: Failed convergence tolerances.
The good thing is that the error decreases at every iteration. I tried changing several parameters and methods, and still no convergence. Do you have an idea why there is no convergence for 3D elasticity problems? I am also attaching the mesh and the sif files.

Best regards,
Stan
Attachments
BT2.msh (282.33 KiB)
caseIter4.sif (2.5 KiB)