Convergence of WhitneyAVHarmonicSolver

Numerical methods and mathematical models of Elmer

Convergence of WhitneyAVHarmonicSolver

Post by hkr »

Dear Elmer experts,

I have tried to set up a case using the harmonic WhitneyAV solver.
The case consists of a coil surrounded by an iron cylinder. I would like to solve for the eddy current losses in the iron as a function of the frequency.

I have looked at different example cases and this is what I derived:
https://www.dropbox.com/s/3fx66h9w6pogx ... n.zip?dl=0
(The mesh was created in Gmsh and needs to be converted with ElmerGrid 14 2 model.msh -removelowdim -autoclean -scale 1e-3 1e-3 1e-3.)

Here, the coil is replaced by a hollow cylinder with a circumferential current as a body force. I also tried to resolve the coil geometry directly. In either case, I have problems with the convergence of the WhitneyAVHarmonicSolver: it runs for quite a large number of iterations, but the residuals remain quite far from converged.
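For reference, the circumferential excitation is defined roughly like this (a sketch, not a verbatim copy of the file in the archive; J0 stands for the current density magnitude computed from the coil current and conductor cross-section):

Code: Select all

! J0 = coil current / conductor cross-section (same numbers as in the sif further below)
$ J0 = 210.0/2.4e-4

Body Force 1
  Name = "Circumferential Current"
  ! Unit tangent of a circle around the z-axis, scaled by J0
  ! (applied only to the coil body, so r > 0 everywhere it is evaluated)
  Current Density 1 = Variable Coordinate
    Real MATC "-J0*tx(1)/sqrt(tx(0)^2+tx(1)^2)"
  Current Density 2 = Variable Coordinate
    Real MATC "J0*tx(0)/sqrt(tx(0)^2+tx(1)^2)"
  Current Density 3 = Real 0.0
End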

I'm not sure how to proceed to get reliable results. Maybe I have overlooked something. Does anybody have suggestions on how to improve convergence?

Thanks for any hints.

Regards, Hannes

Re: Convergence of WhitneyAVHarmonicSolver

Post by mb5 »

Hi,

I tested your case and got a different behavior: there is only one linear iteration, with residual 0.

Code: Select all

ComputeChange: SS (ITER=1) (NRM,RELC): (  0.0000000      0.0000000     ) :: mgdynamics
When opening the resulting .vtu file with ParaView, all values are 0, even the current density, which should be fed by your definition. Maybe there is a problem with the excitation.

best regards
Martin

Re: Convergence of WhitneyAVHarmonicSolver

Post by hkr »

Hmm. I downloaded the case myself and checked again: for me, the solver iterates:

Code: Select all

MAIN: 
MAIN: -------------------------------------
MAIN:  Steady state iteration:            1
MAIN: -------------------------------------
MAIN: 
SingleSolver: Attempting to call solver
SingleSolver: Solver Equation string is: mgdynamics
OptimizeBandwidth: ---------------------------------------------------------
OptimizeBandwidth: Computing matrix structure for: mgdynamics...done.
OptimizeBandwidth: Half bandwidth without optimization: 27447
OptimizeBandwidth: 
OptimizeBandwidth: Bandwidth Optimization ...done.
OptimizeBandwidth: Half bandwidth after optimization: 3384
OptimizeBandwidth: ---------------------------------------------------------
DefUtils::DefaultDirichletBCs: Setting Dirichlet boundary conditions
EnforceDirichletConditions: Applying Dirichlet conditions using scaled diagonal
DefUtils::DefaultDirichletBCs: Dirichlet boundary conditions set
SolveSystem: Solving linear system
IterSolver: Using iterative method: bicgstab
IterSolver: Matrix is complex valued
CRS_ComplexIncompleteLU: ILU(0) (Complex), Starting Factorization:
CRS_ComplexIncompleteLU: ILU(0) (Complex), NOF nonzeros:    430554
CRS_ComplexIncompleteLU: ILU(0) (Complex), filling (%) :       100
CRS_ComplexIncompleteLU: ILU(0) (Complex), Factorization ready at (s):     0.02
      48 0.5824E-10
DefaultStart: Starting solver: mgdynamics
DefUtils::DefaultDirichletBCs: Setting Dirichlet boundary conditions
EnforceDirichletConditions: Applying Dirichlet conditions using scaled diagonal
DefUtils::DefaultDirichletBCs: Dirichlet boundary conditions set
SolveSystem: Solving linear system
IterSolver: Using iterative method: bicgstabl
IterSolver: Matrix is complex valued
      50 0.4581E-01
     100 0.1327E-01
     150 0.1175E-01
     200 0.1519E-01
...
Note that I built the solver directly from the "devel" branch of the Git repository (commit ca9ac34ac9199afa3ae566e4e34617f2fb7e19d6 from 2017-11-03, everything on Ubuntu Linux).

Re: Convergence of WhitneyAVHarmonicSolver

Post by hkr »

Just to be sure: in the meantime I went back to tag "release-8.3" in the Git repo, and it runs the same way as I observed before.

I am not sure if it is OK to prescribe a perfectly circular driving current as a body force, i.e. whether such a source can be represented by the potentials used in the solver. (Sorry if this is a dumb question; I am not yet very familiar with electromagnetics problems.)
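One keyword I have seen in example cases (it is still commented out in my sif below) is "Fix Input Current Density"; as far as I understand, it projects the prescribed current density onto a divergence-free field before it is used as the source. A minimal sketch of enabling it:

Code: Select all

Solver 1
   Equation = "MGDynamics"
   Procedure = "MagnetoDynamics" "WhitneyAVHarmonicSolver"
   ! Make the prescribed source current solenoidal before use
   Fix Input Current Density = Logical True
   ! ... remaining keywords as in the full sif below
End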

Anyway, I have set up another variant of my case with the coil conductor resolved; my idea was to avoid issues caused by a simplification the solver cannot represent. The case can be downloaded here:
https://www.dropbox.com/s/ycn5rbklo2t6x ... d.zip?dl=0
As boundary condition, this time I prescribed a potential instead of a current.
Well, this case runs, but it still does not meet the convergence goal:

Code: Select all

$ ELMER_HOME=/usr mpirun -np 20 ElmerSolver_mpi                                      
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ELMER SOLVER (v 8.3) STARTED AT: 2017/11/15 08:28:34
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
ParCommInit:  Initialize #PEs:           20
MAIN: 
MAIN: =============================================================
MAIN: ElmerSolver finite element software, Welcome!
MAIN: This program is free software licensed under (L)GPL
MAIN: Copyright 1st April 1995 - , CSC - IT Center for Science Ltd.
MAIN: Webpage http://www.csc.fi/elmer, Email elmeradm@csc.fi
MAIN: Version: 8.3 (Rev: a51651c, Compiled: 2017-11-15)
MAIN:  Running in parallel using 20 tasks.
MAIN:  MUMPS library linked in.
MAIN: =============================================================
ParCommInit:  Initialize #PEs:           20
MAIN: 
MAIN: 
MAIN: -------------------------------------
MAIN: Reading Model: model_coil.sif
LoadInputFile: Scanning input file: model_coil.sif
LoadInputFile: Loading input file: model_coil.sif
Model Input:  Unlisted keyword: [p re] in section: [initial condition 1]
Model Input:  Unlisted keyword: [p im] in section: [initial condition 1]
Model Input:  Unlisted keyword: [p re {e}] in section: [initial condition 1]
Model Input:  Unlisted keyword: [p im {e}] in section: [initial condition 1]
Model Input:  Unlisted keyword: [show angular frequency] in section: [solver 2]
Model Input:  Unlisted keyword: [p re {e}] in section: [boundary condition 1]
Model Input:  Unlisted keyword: [p im {e}] in section: [boundary condition 1]
Model Input:  Unlisted keyword: [p re] in section: [boundary condition 1]
Model Input:  Unlisted keyword: [p im] in section: [boundary condition 1]
Model Input:  Unlisted keyword: [p re {e}] in section: [boundary condition 2]
Model Input:  Unlisted keyword: [p im {e}] in section: [boundary condition 2]
Model Input:  Unlisted keyword: [p re] in section: [boundary condition 2]
Model Input:  Unlisted keyword: [p im] in section: [boundary condition 2]
Model Input:  Unlisted keyword: [p re {e}] in section: [boundary condition 3]
Model Input:  Unlisted keyword: [p im {e}] in section: [boundary condition 3]
Loading user function library: [MagnetoDynamics]...[WhitneyAVHarmonicSolver_Init0]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamicsCalcFields_Init0]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_Init0]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamics_Dummy_Init0]
LoadMesh: Starting
ElmerAsciiMesh: Performing step: 1
LoadMesh: Base mesh name: ./model_coil
LoadMesh: Reading header info from file: ./model_coil/partitioning.20/part.1.header
LoadMesh: Number of nodes in mesh: 4923
LoadMesh: Number of bulk elements in mesh: 25599
LoadMesh: Number of boundary elements in mesh: 2647
LoadMesh: Initial number of max element nodes: 4
ElmerAsciiMesh: Performing step: 2
LoadMesh: Reading nodes from file: ./model_coil/partitioning.20/part.1.nodes
LoadMesh: Performing coordinate mapping
LoadMesh: Dimension of model is: 3
LoadMesh: Dimension of mesh is: 3
ElmerAsciiMesh: Performing step: 3
LoadMesh: Reading bulk elements from file: ./model_coil/partitioning.20/part.1.elements
ElmerAsciiMesh: Performing step: 4
LoadMesh: Reading boundary elements from file: ./model_coil/partitioning.20/part.1.boundary
LoadMesh: Performing node mapping
LoadMesh: Remapping bodies
LoadMesh: Minimum initial body index: 1
LoadMesh: Maximum initial body index: 3
LoadMesh: Remapping boundaries
LoadMesh: Minimum initial boundary index: 3
LoadMesh: Maximum initial boundary index: 5
ElmerAsciiMesh: Performing step: 5
LoadMesh: Reading nodes from file: ./model_coil/partitioning.20/part.1.shared
NonNodalElements: Requested elements require creation of edges
FindMeshEdges: Determining faces in 3D mesh
FindMeshFaces3D: Number of faces found: 52913
FindMeshEdges: Determining edges in 3D mesh
FindMeshEdges3D: Number of edges found: 32235
ElmerAsciiMesh: Performing step: 6
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
LoadMesh: Loading mesh done
LoadMesh: Elapsed REAL time:     1.8415 (s)
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
MeshStabParams: Computing stabilization parameters
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: LoadMesh
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
MeshStabParams: Elapsed REAL time:     0.0129 (s)
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
MAIN: -------------------------------------
AddSolvers: Setting up 4 solvers
AddSolvers: Setting up solver 1: mgdynamics
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
AddEquationBasics: Using procedure: MagnetoDynamics WhitneyAVHarmonicSolver
AddEquationBasics: Setting up solver: mgdynamics
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
WARNING:: CheckTimer: Requesting time from non-existing timer: MeshStabParams
Loading user function library: [MagnetoDynamics]...[WhitneyAVHarmonicSolver_Init]
Loading user function library: [MagnetoDynamics]...[WhitneyAVHarmonicSolver_bulk]
Loading user function library: [MagnetoDynamics]...[WhitneyAVHarmonicSolver]
AddEquationBasics: Creating standard variable: p[p re:1 p im:1]
OptimizeBandwidth: ---------------------------------------------------------
OptimizeBandwidth: Computing matrix structure for: mgdynamics...done.
OptimizeBandwidth: Half bandwidth without optimization: 37156
OptimizeBandwidth: 
OptimizeBandwidth: Bandwidth Optimization ...done.
OptimizeBandwidth: Half bandwidth after optimization: 5347
OptimizeBandwidth: ---------------------------------------------------------
AddSolvers: Setting up solver 2: mgdynamicscalc
AddEquationBasics: Using procedure: MagnetoDynamics MagnetoDynamicsCalcFields
AddEquationBasics: Setting up solver: mgdynamicscalc
Loading user function library: [MagnetoDynamics]...[MagnetoDynamicsCalcFields_Init]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamicsCalcFields_bulk]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamicsCalcFields]
AddEquationBasics: Creating standard variable: hr_dummy
OptimizeBandwidth: ---------------------------------------------------------
OptimizeBandwidth: Computing matrix structure for: mgdynamicscalc...done.
OptimizeBandwidth: Half bandwidth without optimization: 4909
OptimizeBandwidth: 
OptimizeBandwidth: Bandwidth Optimization ...done.
OptimizeBandwidth: Half bandwidth after optimization: 719
OptimizeBandwidth: ---------------------------------------------------------
AddSolvers: Setting up solver 3: resultoutput
AddEquationBasics: Using procedure: ResultOutputSolve ResultOutputSolver
AddEquationBasics: Setting up solver: resultoutput
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_Init]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_bulk]
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver]
AddSolvers: Setting up solver 4: never
AddEquationBasics: Using procedure: MagnetoDynamics MagnetoDynamics_Dummy
AddEquationBasics: Setting up solver: never
Loading user function library: [MagnetoDynamics]...[MagnetoDynamics_Dummy_Init]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamics_Dummy_bulk]
Loading user function library: [MagnetoDynamics]...[MagnetoDynamics_Dummy]
AddEquationBasics: Creating standard variable: cf_dummy
AddMeshCoordinatesAndTime: Setting mesh coordinates and time
SetInitialConditions: Setting up initial conditions (if any)
MAIN: 
MAIN: -------------------------------------
MAIN:  Steady state iteration:            1
MAIN: -------------------------------------
MAIN: 
ListToCRSMatrix: Matrix format changed from CRS to List
List_ToCRSMatrix: Number of entries in CRS matrix: 3576888
ListToCRSMatrix: Matrix format changed from List to CRS
SingleSolver: Attempting to call solver
SingleSolver: Solver Equation string is: mgdynamics
DefaultStart: Starting solver: mgdynamics
DefUtils::DefaultDirichletBCs: Setting Dirichlet boundary conditions
SetNodalLoads: Checking for nodal loads for variable: p re
SetNodalLoads: Checking for nodal loads for variable: p im
DefUtils::DefaultDirichletBCs: Dirichlet boundary conditions set
SolveSystem: Solving linear system
IterSolver: Using iterative method: bicgstabl
IterSolver: Matrix is complex valued
      50 0.2461E-02
     100 0.1346E-02
     150 0.1210E-02
     200 0.1195E-02
     250 0.5125E-03
     300 0.7406E-03
     350 0.4628E-03
     400 0.2829E-03
     450 0.2746E-03
     500 0.3268E-03
     550 0.2671E-03
     600 0.2380E-03
     650 0.3447E-03
     700 0.2221E-03
     750 0.1756E-03
     800 0.1741E-03
     850 0.1760E-03
     900 0.1730E-03
     950 0.1576E-03
    1000 0.1538E-03
    1050 0.2105E-03
    1100 0.1362E-03
    1150 0.1122E-03
    1200 0.1676E-03
    1250 0.1096E-03
    1300 0.9949E-04
    1350 0.9152E-04
    1400 0.1047E-03
    1450 0.1234E-03
    1500 0.1105E-03
    1550 0.1047E-03
    1600 0.1041E-03
    1650 0.1046E-03
    1700 0.1062E-03
    1750 0.9507E-04
    1800 0.1035E-03
    1850 0.8528E-04
    1900 0.9247E-04
    1950 0.8653E-04
    2000 0.8043E-04
    2050 0.2179E-03
    2100 0.6474E-04
    2150 0.9016E-04
    2200 0.1095E-03
    2250 0.9934E-04
    2300 0.9617E-04
    2350 0.9509E-04
    2400 0.9661E-04
    2450 0.8235E-04
    2500 0.8695E-04
    2550 0.7327E-04
    2600 0.7423E-04
    2650 0.7251E-04
    2700 0.7203E-04
    2750 0.7310E-04
    2800 0.9096E-04
    2850 0.7139E-04
    2900 0.7042E-04
    2950 0.8857E-04
    3000 0.9902E-04
    3050 0.1920E-03
    3100 0.6476E-04
    3150 0.6176E-04
    3200 0.7288E-04
    3250 0.6190E-04
    3300 0.6032E-04
    3350 0.5797E-04
    3400 0.5612E-04
    3450 0.5810E-04
    3500 0.4946E-04
    3550 0.4895E-04
    3600 0.4804E-04
    3650 0.1396E-03
    3700 0.4577E-04
    3750 0.4117E-04
    3800 0.4224E-04
    3850 0.4114E-04
    3900 0.4691E-04
    3950 0.4641E-04
    4000 0.5271E-04
    4050 0.5406E-04
    4100 0.6081E-04
    4150 0.5195E-04
    4200 0.4598E-04
    4250 0.5048E-04
    4300 0.3793E-04
    4350 0.4476E-04
    4400 0.3694E-04
    4450 0.3278E-04
    4500 0.3265E-04
    4550 0.3236E-04
    4600 0.3025E-04
    4650 0.3083E-04
    4700 0.3163E-04
    4750 0.3962E-04
    4800 0.2670E-04
    4850 0.2942E-04
    4900 0.3030E-04
    4950 0.2987E-04
    5000 0.2897E-04
NUMERICAL ERROR:: IterSolve: Failed convergence tolerances.
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=1) (NRM,RELC): ( 0.12879695E-01  2.0000000     ) :: mgdynamics
DefaultFinish: Finished solver: mgdynamics
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: SS (ITER=1) (NRM,RELC): ( 0.12879695E-01  2.0000000     ) :: mgdynamics
ListToCRSMatrix: Matrix format changed from CRS to List
List_ToCRSMatrix: Number of entries in CRS matrix: 58746
ListToCRSMatrix: Matrix format changed from List to CRS
SingleSolver: Attempting to call solver
SingleSolver: Solver Equation string is: mgdynamicscalc
MagnetoDynamicsCalcFields: ------------------------------
MagnetoDynamicsCalcFields: Computing postprocessed fields
MagnetoDynamicsCalcFields: Solving for field: magnetic flux density[magnetic flux density re:3 magnetic flux density im:3]
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2763E+00
       2 0.9463E-01
       3 0.3261E-01
       4 0.1134E-01
       5 0.3905E-02
       6 0.1349E-02
       7 0.4655E-03
       8 0.1634E-03
       9 0.5742E-04
      10 0.2031E-04
      11 0.6916E-05
      12 0.2416E-05
      13 0.8244E-06
      14 0.2887E-06
      15 0.1029E-06
      16 0.3638E-07
      17 0.1274E-07
      18 0.4528E-08
      18 0.4528E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=1) (NRM,RELC): ( 0.49814940E-02  2.0000000     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2728E+00
       2 0.9401E-01
       3 0.3240E-01
       4 0.1141E-01
       5 0.3901E-02
       6 0.1353E-02
       7 0.4690E-03
       8 0.1646E-03
       9 0.5791E-04
      10 0.2059E-04
      11 0.7076E-05
      12 0.2401E-05
      13 0.8335E-06
      14 0.2917E-06
      15 0.1044E-06
      16 0.3663E-07
      17 0.1282E-07
      18 0.4570E-08
      18 0.4570E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=2) (NRM,RELC): ( 0.53059240E-02 0.63073163E-01 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2114E+00
       2 0.7051E-01
       3 0.2333E-01
       4 0.8311E-02
       5 0.2930E-02
       6 0.1017E-02
       7 0.3535E-03
       8 0.1247E-03
       9 0.4388E-04
      10 0.1494E-04
      11 0.5123E-05
      12 0.1774E-05
      13 0.6129E-06
      14 0.2164E-06
      15 0.7511E-07
      16 0.2619E-07
      17 0.9120E-08
      17 0.9120E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=3) (NRM,RELC): ( 0.11765420E-01 0.75676477     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.3048E+00
       2 0.1024E+00
       3 0.3479E-01
       4 0.1204E-01
       5 0.4197E-02
       6 0.1459E-02
       7 0.5134E-03
       8 0.1787E-03
       9 0.6289E-04
      10 0.2217E-04
      11 0.7453E-05
      12 0.2555E-05
      13 0.9017E-06
      14 0.3163E-06
      15 0.1115E-06
      16 0.3892E-07
      17 0.1365E-07
      18 0.4860E-08
      18 0.4860E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=4) (NRM,RELC): ( 0.83639954E-02 0.33795559     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.3052E+00
       2 0.1020E+00
       3 0.3490E-01
       4 0.1224E-01
       5 0.4225E-02
       6 0.1472E-02
       7 0.5177E-03
       8 0.1799E-03
       9 0.6304E-04
      10 0.2237E-04
      11 0.7652E-05
      12 0.2541E-05
      13 0.9052E-06
      14 0.3198E-06
      15 0.1126E-06
      16 0.3887E-07
      17 0.1368E-07
      18 0.4892E-08
      18 0.4892E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=5) (NRM,RELC): ( 0.92028088E-02 0.95499821E-01 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2278E+00
       2 0.7598E-01
       3 0.2564E-01
       4 0.9030E-02
       5 0.3154E-02
       6 0.1103E-02
       7 0.3861E-03
       8 0.1355E-03
       9 0.4809E-04
      10 0.1661E-04
      11 0.5520E-05
      12 0.1889E-05
      13 0.6434E-06
      14 0.2244E-06
      15 0.7954E-07
      16 0.2761E-07
      17 0.9685E-08
      17 0.9685E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=6) (NRM,RELC): ( 0.14841054E-01 0.46899660     ) :: mgdynamicscalc
MagnetoDynamicsCalcFields: Solving for field: magnetic field strength[magnetic field strength re:3 magnetic field strength im:3]
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.6852E-01
       2 0.2178E-01
       3 0.6910E-02
       4 0.2383E-02
       5 0.8665E-03
       6 0.2945E-03
       7 0.1035E-03
       8 0.3538E-04
       9 0.1278E-04
      10 0.4476E-05
      11 0.1577E-05
      12 0.5456E-06
      13 0.1966E-06
      14 0.7054E-07
      15 0.2506E-07
      16 0.8854E-08
      16 0.8854E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=7) (NRM,RELC): (  2444.8755      1.9999757     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.6834E-01
       2 0.2186E-01
       3 0.6922E-02
       4 0.2384E-02
       5 0.8687E-03
       6 0.2979E-03
       7 0.1028E-03
       8 0.3560E-04
       9 0.1267E-04
      10 0.4483E-05
      11 0.1575E-05
      12 0.5473E-06
      13 0.1985E-06
      14 0.7134E-07
      15 0.2534E-07
      16 0.9072E-08
      16 0.9072E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=8) (NRM,RELC): (  2478.5297     0.13671125E-01 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.3863E-01
       2 0.1010E-01
       3 0.3363E-02
       4 0.1160E-02
       5 0.4071E-03
       6 0.1431E-03
       7 0.4954E-04
       8 0.1650E-04
       9 0.5870E-05
      10 0.2096E-05
      11 0.7060E-06
      12 0.2480E-06
      13 0.8258E-07
      14 0.2904E-07
      15 0.1044E-07
      16 0.3698E-08
      16 0.3698E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=9) (NRM,RELC): (  7866.6056      1.0416637     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.7012E-01
       2 0.2196E-01
       3 0.6875E-02
       4 0.2377E-02
       5 0.8748E-03
       6 0.2974E-03
       7 0.1047E-03
       8 0.3530E-04
       9 0.1294E-04
      10 0.4588E-05
      11 0.1596E-05
      12 0.5647E-06
      13 0.2048E-06
      14 0.7426E-07
      15 0.2688E-07
      16 0.9718E-08
      16 0.9718E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=10) (NRM,RELC): (  2245.3799      1.1117946     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.6982E-01
       2 0.2218E-01
       3 0.6912E-02
       4 0.2399E-02
       5 0.8789E-03
       6 0.3005E-03
       7 0.1050E-03
       8 0.3570E-04
       9 0.1288E-04
      10 0.4583E-05
      11 0.1598E-05
      12 0.5617E-06
      13 0.2027E-06
      14 0.7296E-07
      15 0.2633E-07
      16 0.9525E-08
      16 0.9525E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=11) (NRM,RELC): (  2246.3262     0.42133213E-03 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2644E-01
       2 0.6812E-02
       3 0.2298E-02
       4 0.7887E-03
       5 0.2809E-03
       6 0.9851E-04
       7 0.3329E-04
       8 0.1121E-04
       9 0.3956E-05
      10 0.1410E-05
      11 0.4791E-06
      12 0.1673E-06
      13 0.5851E-07
      14 0.2086E-07
      15 0.7625E-08
      15 0.7625E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=12) (NRM,RELC): (  9523.2941      1.2365680     ) :: mgdynamicscalc
MagnetoDynamicsCalcFields: Solving for field: current density[current density re:3 current density im:3]
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1401E+00
       2 0.4933E-01
       3 0.1584E-01
       4 0.5592E-02
       5 0.1867E-02
       6 0.6669E-03
       7 0.2309E-03
       8 0.7946E-04
       9 0.2819E-04
      10 0.9892E-05
      11 0.3362E-05
      12 0.1133E-05
      13 0.3890E-06
      14 0.1368E-06
      15 0.4868E-07
      16 0.1706E-07
      17 0.6027E-08
      17 0.6027E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=13) (NRM,RELC): (  4600578.7      1.9917370     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1393E+00
       2 0.4890E-01
       3 0.1575E-01
       4 0.5546E-02
       5 0.1862E-02
       6 0.6656E-03
       7 0.2291E-03
       8 0.7974E-04
       9 0.2820E-04
      10 0.9899E-05
      11 0.3344E-05
      12 0.1137E-05
      13 0.3909E-06
      14 0.1360E-06
      15 0.4852E-07
      16 0.1710E-07
      17 0.5981E-08
      17 0.5981E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=14) (NRM,RELC): (  4337283.2     0.58916881E-01 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1461E+00
       2 0.5297E-01
       3 0.1748E-01
       4 0.6241E-02
       5 0.2178E-02
       6 0.7593E-03
       7 0.2660E-03
       8 0.9273E-04
       9 0.3205E-04
      10 0.1120E-04
      11 0.3957E-05
      12 0.1380E-05
      13 0.4784E-06
      14 0.1682E-06
      15 0.5929E-07
      16 0.2073E-07
      17 0.7201E-08
      17 0.7201E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=15) (NRM,RELC): (  2129485.5     0.68281328     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1345E+00
       2 0.4307E-01
       3 0.1348E-01
       4 0.4918E-02
       5 0.1597E-02
       6 0.5757E-03
       7 0.2019E-03
       8 0.6770E-04
       9 0.2384E-04
      10 0.8512E-05
      11 0.2882E-05
      12 0.9817E-06
      13 0.3415E-06
      14 0.1218E-06
      15 0.4299E-07
      16 0.1509E-07
      17 0.5323E-08
      17 0.5323E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=16) (NRM,RELC): (  4090863.4     0.63063277     ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1343E+00
       2 0.4286E-01
       3 0.1351E-01
       4 0.4897E-02
       5 0.1595E-02
       6 0.5738E-03
       7 0.2001E-03
       8 0.6789E-04
       9 0.2383E-04
      10 0.8444E-05
      11 0.2859E-05
      12 0.9764E-06
      13 0.3369E-06
      14 0.1202E-06
      15 0.4297E-07
      16 0.1499E-07
      17 0.5205E-08
      17 0.5205E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=17) (NRM,RELC): (  4030807.1     0.14789144E-01 ) :: mgdynamicscalc
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.1299E+00
       2 0.4510E-01
       3 0.1491E-01
       4 0.5209E-02
       5 0.1841E-02
       6 0.6378E-03
       7 0.2252E-03
       8 0.7921E-04
       9 0.2731E-04
      10 0.9462E-05
      11 0.3367E-05
      12 0.1172E-05
      13 0.4072E-06
      14 0.1452E-06
      15 0.5181E-07
      16 0.1834E-07
      17 0.6412E-08
      17 0.6412E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=18) (NRM,RELC): (  1937606.3     0.70142620     ) :: mgdynamicscalc
MagnetoDynamicsCalcFields: Solving for field: harmonic loss linear
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2859E+00
       2 0.9686E-01
       3 0.3294E-01
       4 0.1138E-01
       5 0.3960E-02
       6 0.1373E-02
       7 0.4801E-03
       8 0.1674E-03
       9 0.5952E-04
      10 0.2146E-04
      11 0.7384E-05
      12 0.2539E-05
      13 0.8708E-06
      14 0.3030E-06
      15 0.1066E-06
      16 0.3748E-07
      17 0.1352E-07
      18 0.4670E-08
      18 0.4670E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=19) (NRM,RELC): (  4.5782480      1.9999905     ) :: mgdynamicscalc
MagnetoDynamicsCalcFields: Solving for field: harmonic loss quadratic
SolveSystem: Solving linear system
IterSolver: Using iterative method: cg
       1 0.2859E+00
       2 0.9686E-01
       3 0.3294E-01
       4 0.1138E-01
       5 0.3960E-02
       6 0.1373E-02
       7 0.4801E-03
       8 0.1674E-03
       9 0.5952E-04
      10 0.2146E-04
      11 0.7384E-05
      12 0.2539E-05
      13 0.8708E-06
      14 0.3030E-06
      15 0.1066E-06
      16 0.3748E-07
      17 0.1352E-07
      18 0.4670E-08
      18 0.4670E-08
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: NS (ITER=20) (NRM,RELC): (  787.45865      1.9768786     ) :: mgdynamicscalc
MagnetoDynamicsCalcFields:  Eddy current power:    14.882337671334863
MagnetoDynamicsCalcFields:  (Electro)Magnetic Field Energy:    1.8337616524987201E-003
MagnetoDynamicsCalcFields: Harmonic Loss Linear by components
MagnetoDynamicsCalcFields: Loss for cos mode:    2.066E-05
MagnetoDynamicsCalcFields: Loss for sin mode:    6.073E-05
MagnetoDynamicsCalcFields: Total loss:    8.139E-05
MagnetoDynamicsCalcFields: Harmonic Loss Quadratic by components
MagnetoDynamicsCalcFields: Loss for cos mode:    3.554E-03
MagnetoDynamicsCalcFields: Loss for sin mode:    1.045E-02
MagnetoDynamicsCalcFields: Total loss:    1.400E-02
MagnetoDynamicsCalsFields: Harmonic loss for bodies was saved to file: Loss.dat
MagnetoDynamicsCalcFields:  Angular Frequency:    10807.078728300001
ComputeNorm: Computing norm of solution
ComputeNorm: Using consistent norm in parallel
ComputeChange: SS (ITER=1) (NRM,RELC): (  787.45865      2.0000000     ) :: mgdynamicscalc
SingleSolver: Attempting to call solver
SingleSolver: Solver Equation string is: resultoutput
ResultOutputSolver: -------------------------------------
ResultOutputSolve: Saving with prefix: emag.
ResultOutputSolver: Working on mesh: model_coil
ResultOutputSolver: Dimension of mesh is: 3
ResultOutputSolver: Creating list for saving - if not present
CreateListForSaving: Field Variables for Saving
CreateListForSaving: Scalar Field 1: p re
CreateListForSaving: Scalar Field 2: p im
CreateListForSaving: Scalar Field 3: harmonic loss linear
CreateListForSaving: Scalar Field 4: harmonic loss quadratic
CreateListForSaving: Scalar Field 5: nodal joule heating
CreateListForSaving: Scalar Field 6: harmonic loss linear e
CreateListForSaving: Scalar Field 7: harmonic loss quadratic e
CreateListForSaving: Vector Field 1: magnetic flux density re
CreateListForSaving: Vector Field 2: magnetic flux density im
CreateListForSaving: Vector Field 3: magnetic field strength re
CreateListForSaving: Vector Field 4: magnetic field strength im
CreateListForSaving: Vector Field 5: current density re
CreateListForSaving: Vector Field 6: current density im
CreateListForSaving: Vector Field 7: magnetic flux density re e
CreateListForSaving: Vector Field 8: magnetic flux density im e
CreateListForSaving: Vector Field 9: magnetic field strength re e
CreateListForSaving: Vector Field 10: magnetic field strength im e
CreateListForSaving: Vector Field 11: current density re e
CreateListForSaving: Vector Field 12: current density im e
ResultOutputSolver: Saving in unstructured VTK XML (.vtu) format
VtuOutputSolver: Using single precision arithmetics in output!
VtuOutputSolver: Saving results in VTK XML format with prefix: emag.
VtuOutputSolver: Saving number of partitions: 20
VtuOutputSolver: Number of active elements 28246 out of 28246
VtuOutputSolver: Number of geometry nodes 4923 out of 4923
VtuOutputSolver: Total number of geometry nodes to save:  105308
VtuOutputSolver: Total number of dof nodes to save:  105308
VtuOutputSolver: Total number of elements to save:  573145
VtuOutputSolver: Full filename base is: ./model_coil/emag.
VtuOutputSolver: Setting offset for boundary entities: 100
VtuOutputSolver: Writing the pvtu file: ./model_coil/emag.0001.pvtu
WritePvtuFile: Number of active partitions is 20 (out of 20)
VtuOutputSolver: Writing the vtu file: ./model_coil/emag.0001par0001.vtu
AscBinWriteInit: Initializing buffered ascii/binary writing
AscBinWriteInit: Writing in binary
AscBinWriteInit: Writing in single precision
AscBinWriteInit: Writing to unit number: 58
AscBinWriteInit: Size of buffer is: 28246
VtuOutputSolver: Writing nodal fields
VtuOutputSolver: Saving variable: p re
VtuOutputSolver: Saving variable: p im
VtuOutputSolver: Saving variable: harmonic loss linear
VtuOutputSolver: Saving variable: harmonic loss quadratic
VtuOutputSolver: Saving variable: nodal joule heating
VtuOutputSolver: Saving variable: harmonic loss linear e
VtuOutputSolver: Saving variable: harmonic loss quadratic e
VtuOutputSolver: Saving variable: magnetic flux density re
VtuOutputSolver: Saving variable: magnetic flux density im
VtuOutputSolver: Saving variable: magnetic field strength re
VtuOutputSolver: Saving variable: magnetic field strength im
VtuOutputSolver: Saving variable: current density re
VtuOutputSolver: Saving variable: current density im
VtuOutputSolver: Saving variable: magnetic flux density re e
VtuOutputSolver: Saving variable: magnetic flux density im e
VtuOutputSolver: Saving variable: magnetic field strength re e
VtuOutputSolver: Saving variable: magnetic field strength im e
VtuOutputSolver: Saving variable: current density re e
VtuOutputSolver: Saving variable: current density im e
VtuOutputSolver: Number of nodal fields written: 11
VtuOutputSolver: Writing elemental fields
WriteVtuFile: Writing variable: harmonic loss linear e
WriteVtuFile: Writing variable: harmonic loss quadratic e
WriteVtuFile: Writing variable: magnetic flux density re e
WriteVtuFile: Writing variable: magnetic flux density im e
WriteVtuFile: Writing variable: magnetic field strength re e
WriteVtuFile: Writing variable: magnetic field strength im e
WriteVtuFile: Writing variable: current density re e
WriteVtuFile: Writing variable: current density im e
VtuOutputSolver: Number of elemental fields written: 8
VtuOutputSolver: Writing entity IDs for bodies and boundaries
VtuOutputSolver: Writing coordinates for each used node
VtuOutputSolver: Writing the elemental connectivity data
VtuOutputSolver: Writing nodal fields
VtuOutputSolver: Saving variable: p re
VtuOutputSolver: Saving variable: p im
VtuOutputSolver: Saving variable: harmonic loss linear
VtuOutputSolver: Saving variable: harmonic loss quadratic
VtuOutputSolver: Saving variable: nodal joule heating
VtuOutputSolver: Saving variable: harmonic loss linear e
VtuOutputSolver: Saving variable: harmonic loss quadratic e
VtuOutputSolver: Saving variable: magnetic flux density re
VtuOutputSolver: Saving variable: magnetic flux density im
VtuOutputSolver: Saving variable: magnetic field strength re
VtuOutputSolver: Saving variable: magnetic field strength im
VtuOutputSolver: Saving variable: current density re
VtuOutputSolver: Saving variable: current density im
VtuOutputSolver: Saving variable: magnetic flux density re e
VtuOutputSolver: Saving variable: magnetic flux density im e
VtuOutputSolver: Saving variable: magnetic field strength re e
VtuOutputSolver: Saving variable: magnetic field strength im e
VtuOutputSolver: Saving variable: current density re e
VtuOutputSolver: Saving variable: current density im e
VtuOutputSolver: Writing elemental fields
AscBinWriteInit: Terminating buffered ascii/binary writing
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL):       784.03      840.02
ELMER SOLVER FINISHED AT: 2017/11/15 08:42:34
The results look like this:
Auswahl_387.png
I expected something smoother and essentially two-dimensional.

Note that I also varied the mesh resolution, but I have the feeling that the real problem is that I specified something inconsistently.
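Since the banner above shows that the MUMPS library is linked in, I also wonder whether a parallel direct solve would at least give a reference solution. A sketch (untested) of what one could change in Solver 1:

Code: Select all

   Linear System Solver = Direct
   Linear System Direct Method = MUMPS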
The input file looks like this:

Code: Select all

CHECK KEYWORDS "Warn"

Header
   Mesh DB "." "model_coil"
End

$ curr = 210.
$ f = 1720
$ omega = 2*pi*f


Simulation
   Max Output Level = 10
   Coordinate System = "Cartesian 3D"
   Coordinate Mapping(3) = 1 2 3 
   
   Simulation Type = Steady
   Steady State Max Iterations = 1
   Output Intervals(1) = 0
   
   Angular Frequency = Real $ omega
End

Initial Condition 1
 P re = Real 0
 P im = Real 0
 P re {e} = Real 0
 P im {e} = Real 0
End

Constants
   Permittivity of Vacuum = 8.8542e-12
End

!Solver 1
!   Exec Solver = Before Simulation
!
!   Procedure = "StatCurrentSolve" "StatCurrentSolver"
!   Equation = "Stat Current Solver"
!   Variable = Potential
!   Variable DOFs = 1
!   Calculate Volume Current = True
!   Calculate Joule Heating = False
!   Current Control = Real $ curr
!   Linear System Solver = Iterative
!   Linear System Iterative Method = CG
!   Linear System Max Iterations = 1000
!   Linear System Convergence Tolerance = 1.0e-8
!   Linear System Preconditioning = ILU0
!   Linear System Abort Not Converged = True
!   Linear System Residual Output = 1
!End

Solver 1
   Equation = "MGDynamics"
   Variable =  P[P re:1 P im:1]

   Procedure = "MagnetoDynamics" "WhitneyAVHarmonicSolver"
   !Fix Input Current Density = Logical True
   Angular Frequency = $ omega

   Linear System Symmetric = Logical true
   Linear System Complex = Logical True
   Linear System Solver = "Iterative"
   Linear System Preconditioning = None !ilu
   Linear System Convergence Tolerance = 1e-12
   Linear System Residual Output = 50
   Linear System Max Iterations = 5000
   Linear System Iterative Method = BiCGStabL
   BiCGstabl polynomial degree = 4
   Linear System Direct Method = Umfpack

   Steady State Convergence Tolerance = 1e-6
   Linear System Abort Not Converged = False
End



Solver 2
   Equation = "MGDynamicsCalc"
   Procedure = "MagnetoDynamics" "MagnetoDynamicsCalcFields"
   Linear System Symmetric = True
   Potential Variable = String "P"

   Angular Frequency = $ omega
   Show Angular Frequency = Logical True

   Calculate Magnetic Field Strength = Logical True
   Calculate Current Density = Logical True
   Calculate Harmonic Loss = Logical True
   Calculate Nodal Heating = Logical True
   Calculate Elemental Fields = Logical True
   Harmonic Loss Filename = File "Loss.dat"
  
   Linear System Solver = "Iterative"
   Linear System Preconditioning = None
   Linear System Residual Output = 1
   Linear System Max Iterations = 5000
   Linear System Iterative Method = CG
   Steady State Convergence Tolerance = 1e-6
   Linear System Convergence Tolerance = 1.0e-8
End

Solver 3
   Exec Solver = String "after all"    
   exec interval = 1 
   Equation = String "ResultOutput" 
   Procedure = file "ResultOutputSolve" "ResultOutputSolver" 
   Save Geometry Ids = Logical True
   Output File Name = file "emag." 
   Output Format = String "vtu"
   Binary Output = True
   Single Precision = True
End 



Body 1
   Name = "Coil"
   Target Bodies(1) = 1
   Material = 2
   
   Equation = 1
   !Body Force = 1
   Initial Condition = 1
End

Body 2
   Name = "Iron"
   Target Bodies(1) = 2
   Material = 3
   
   Equation = 1
   Initial Condition = 1
End

Body 3
   Name = "Air"
   Target Bodies(1) = 3
   Material = 1
   
   Equation = 1
   Initial Condition = 1
End





Equation 1
  Name = "Mag"
  Active Solvers(2) = 1 2 
End

!Equation 2
!  Name = "Mag+Current"
!  Active Solvers(3) = 1 2 3
!End

Material 1
  Name = "Air"
  Electric Conductivity = 0.0
  Relative Permeability = 1.0
  Relative Permittivity = 1.0
End

Material 2
  Name = "Copper"
  Electric Conductivity = 58.14e6
  Relative Permeability = 1.0
  Relative Permittivity = 1.0
End

Material 3
  Name = "Iron"
  
  Electric Conductivity = 8.6e6
  !Electric Conductivity(3) = 8.6e6 8.6e6 0 ! conductivity only in XY-Plane
  Relative Permeability = 1000.
  Relative Permittivity = 1.0
  
  Harmonic Loss Linear Coefficient = Real 1.0
  Harmonic Loss Quadratic Coefficient = Real 0.1
End



Boundary Condition 1
   Name = "Current In"
   Target Boundaries(1) = 1
   
   P re {e} = Real 0
   P im {e} = Real 0
   
   P re = Real 0.293
   P im = Real 0 

   !Potential = 0.001
   
   !Electric Current Density = Real $ curr / 2.4e-4
End

Boundary Condition 2
   Name = "Current Out"
   Target Boundaries(1) = 2
   
   P re {e} = Real 0
   P im {e} = Real 0
      
   P re = Real 0
   P im = Real 0 
   
   !Potential = 0.0
   !Electric Current Density = Real $ -curr / 2.4e-4
End

Boundary Condition 3
   Name = "BCn Flux Parallel"
   Target Boundaries(1) = 3
   
   P re {e} = Real 0
   P im {e} = Real 0
End

!Body Force 1
!  Name = "Current Density"
!
!  Current Density 1 = Equals Volume current 1
!  Current Density 2 = Equals Volume current 2
!  Current Density 3 = Equals Volume current 3
!End


RUN
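(As a sanity check: omega = 2*pi*1720 ≈ 10807.08 rad/s, which matches the angular frequency 10807.078728300001 reported by MagnetoDynamicsCalcFields in the log above, so the frequency at least is passed through consistently.)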

Re: Convergence of WhitneyAVHarmonicSolver

Post by hkr »

@mb5: Maybe you skipped the "-autoclean" or "-removelowdim" options of ElmerGrid? If the former is not given, the BCs are not set up properly and the solver does not see any excitation.
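For reference, the full conversion command from my first post:

Code: Select all

ElmerGrid 14 2 model.msh -removelowdim -autoclean -scale 1e-3 1e-3 1e-3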

Re: Convergence of WhitneyAVHarmonicSolver

Post by hkr »

I just reduced the problem size of "model_simpl_full" (by reducing the domain height, not the resolution) and ran the case in serial: it converges!

Might this be related to a warning that occurs during the decomposition? What does it actually mean?

Code: Select all

Starting program Elmergrid
Elmergrid reading in-line arguments
Lower dimensional boundaries will be removed
Lower dimensional boundaries will be removed
Materials and boundaries will be renumbered
Nodes that do not appear in any element will be removed
The mesh will be partitioned geometrically to 4 partitions.
Output will be saved to file model_simpl_full.

Elmergrid loading data:
-----------------------
Format chosen using the first line: $MeshFormat
Loading mesh in Gmsh format 2.0 from file model_simpl_full.msh
Allocating for 8803 knots and 60569 elements.
Moving bulk elements to boundary elements
Leading bulk elementtype is 504
Trailing bulk elementtype is 303
There are 5982 (out of 60569) lower dimensional elements.
Node 516 belongs to maximum of 68 elements
Found 5304 side elements that have two parents.
Found correctly 5982 side elements.
Parent elements were reordered up to indx 54587.
Moved 54587 elements (out of 60569) to new positions
Successfully read the mesh from the Gmsh input file.

Elmergrid creating and manipulating meshes:
-------------------------------------------
Scaling mesh with vector [0.001 0.001 0.001]
Removing lower dimensional boundaries
Maximum elementtype is 504 and dimension 3
Removed 0 (out of 5982) less than 3D boundary elements
All 8803 nodes were used by the mesh elements
Initial boundary interval [1,3]
Numbering of boundary types is already ok
Initial body interval [4,6]
body index changed 4 -> 1 in 8909 elements
body index changed 5 -> 2 in 21345 elements
body index changed 6 -> 3 in 24333 elements
Mapping material types from [4 6] to [1 3]

Elmergrid partitioning meshes:
------------------------------
PartitionSimpleElements
connect: 0 0
Making a simple partitioning for 54587 elements in 3-dimensions.
Ordering in the 2nd direction.
Ordering in the 3rd direction.
Creating an inverse topology of the finite element mesh
There are from 2 to 68 connections in the inverse topology.
Each node is in average in 24.804 elements
Number of nodal partitions: 4
Set the node partitions by the dominating element partition.
There are from 4009 to 4069 nodes in the 4 partitions.
Successfully made a partitioning with 13646 to 13648 elements.
Optimizing the partitioning at boundaries.
Round 1: 44 bulk elements with BCs removed from interface.
Round 2: 2 bulk elements with BCs removed from interface.
Round 3: 2 bulk elements with BCs removed from interface.
Round 4: 2 bulk elements with BCs removed from interface.
Round 5: 2 bulk elements with BCs removed from interface.
Round 6: 2 bulk elements with BCs removed from interface.
Round 7: 2 bulk elements with BCs removed from interface.
Round 8: 2 bulk elements with BCs removed from interface.
Round 9: 2 bulk elements with BCs removed from interface.
Round 10: 2 bulk elements with BCs removed from interface.
Ownership of 42 parents was changed at BCs
Optimizing for 4 partitions
Creating a table showing all parenting partitions of nodes.
Nodes belong to 4 partitions in maximum
There are 1474 shared nodes which is 16.74 % of all nodes.
The initial owner was not any of the elements for 0 nodes
Checking partitioning before optimization
Checking for partitioning
Information on partition bandwidth
Distribution of elements, nodes and shared nodes
     partition  elements   nodes      shared    
     1          13656      2242       385       
     2          13637      2193       411       
     3          13650      2195       380       
     4          13644      2173       402       
Average number of elements in partition 2.201e+03
Maximum deviation in ownership 69
Average deviation in ownership 2.532e+01
Average relative deviation 1.15 %
Checking for problematic sharings
Optimizing sharing for 4 partitions
Changed the ownership of 5 nodes
There shouldn't be any problematic sharings, knock, knock...
The partitioning was optimized: 5
Checking partitioning after optimization
Checking for partitioning
Information on partition bandwidth
Distribution of elements, nodes and shared nodes
     partition  elements   nodes      shared    
     1          13656      2241       386       
     2          13637      2195       409       
     3          13650      2196       379       
     4          13644      2171       404       

Elmergrid saving data with method 2:
-------------------------------------
Saving Elmer mesh in partitioned format
Number of boundary nodes at the boundary: 3044
Reusing existing subdirectory: partitioning.4
Saving mesh in parallel ElmerSolver format to directory model_simpl_full/partitioning.4.
Nodes belong to 4 partitions in maximum
Saving mesh for 4 partitions
   part  elements   nodes      shared   bc elems indirect
   1     13656      2241       386      1545     0       
   2     13637      2195       409      1455     0       
   3     13650      2196       379      1485     0       
   4     13644      2171       404      1498     0       
Nodes needed in maximum 1 boundary elements
----------------------------------------------------------------------------------------------
   ave   13646.8    2200.8     394.5    1495.8   0.0     
************************* Warning ****************************
Number or boundary elements split at between parents: 2
This could be a problem for internal jump conditions
You could try to use '-halobc' flag as remedy with ElmerSolver.
**************************************************************
Writing of partitioned mesh finished

Thank you for using Elmergrid!
Send bug reports and feature wishes to elmeradm@csc.fi
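Following the remedy suggested by the warning itself, I guess one could redo the partitioning with the -halobc flag, e.g. (untested; -partition 2 2 1 is my guess at the geometric decomposition that produced the 4 partitions):

Code: Select all

ElmerGrid 14 2 model_simpl_full.msh -removelowdim -autoclean -scale 1e-3 1e-3 1e-3 -partition 2 2 1 -halobc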