Solvers and Scalability

45 minutes | intermediate


  Lesson Objectives

  • Learn about MFEM's parallel scalability.
  • Learn about MFEM's support for efficient solvers and preconditioners.

  Note

Please complete the Getting Started and Finite Element Basics pages before this lesson.

MFEM is designed to be highly scalable and efficient on a wide variety of platforms: from laptops to GPU-accelerated supercomputers. The solvers described in this lesson play a critical role in this parallel scalability.


  Scalable algebraic multigrid preconditioners from hypre

MFEM comes with a large number of example codes that demonstrate different physical applications, finite element discretizations, and linear solvers.

The parallel versions of the first four examples (ex1p, ex2p, ex3p, and ex4p) each use a suitable algebraic multigrid (AMG) preconditioner from the hypre solver library. We describe sample runs of each of these examples in more detail below.


  Example 1: Poisson problem and AMG
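
Example 1 solves the Poisson problem and preconditions the resulting linear system with BoomerAMG, hypre's algebraic multigrid solver. A representative parallel run, assuming the example has been built in ~/mfem/examples (the -m and -o options select the mesh and polynomial order):

    mpirun -np 8 ./ex1p -m ../data/star.mesh -o 2

The essential solver setup in ex1p follows the pattern below; this is a sketch modeled on the example source, and the exact code varies between MFEM versions:

    // A, B, X: the assembled parallel matrix, right-hand side, and solution.
    HypreBoomerAMG amg(A);       // algebraic multigrid preconditioner from hypre
    HyprePCG pcg(A);             // hypre's preconditioned conjugate gradients
    pcg.SetTol(1e-12);           // relative convergence tolerance
    pcg.SetMaxIter(200);
    pcg.SetPrintLevel(2);        // print the residual norm at each iteration
    pcg.SetPreconditioner(amg);
    pcg.Mult(B, X);              // solve A X = B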


  Example 2: Linear Elasticity

                         +----------+----------+
            boundary --->| material | material |<--- boundary
            attribute 1  |    1     |    2     |     attribute 2
            (fixed)      +----------+----------+     (pull down)
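
The example loads a two-material beam mesh, fixes the displacement on boundary attribute 1, and pulls down on boundary attribute 2. A representative parallel run, assuming the example is built in ~/mfem/examples:

    mpirun -np 8 ./ex2p -m ../data/beam-tri.mesh -o 2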

  Warning

Using higher-order elements can quickly become computationally expensive. See the section below on Low-order-refined methods for a more efficient approach.

  Note

Remember to recompile the example after editing the source code (make ex2p).
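
One relevant setting in the ex2p source is how BoomerAMG is configured for the vector-valued elasticity system. A sketch of the pattern used there (names follow the example's conventions; details vary between MFEM versions):

    // amg_elast corresponds to ex2p's --amg-for-elasticity option;
    // fespace is the displacement ParFiniteElementSpace, dim the mesh dimension.
    HypreBoomerAMG amg(A);
    if (amg_elast) { amg.SetElasticityOptions(fespace); } // elasticity-specific AMG
    else           { amg.SetSystemsOptions(dim); }        // generic "systems" AMG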

  Examples 3 and 4: the de Rham Complex

  Note

Remember to build the examples first: make ex3 ex4 ex3p ex4p
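
Examples 3 and 4 solve problems posed in $H(\mathrm{curl})$ and $H(\mathrm{div})$, the middle spaces of the de Rham complex, where standard AMG is not robust. Their parallel versions instead use hypre's auxiliary-space solvers: AMS for the Maxwell problem in ex3p and ADS for the grad-div problem in ex4p. A sketch of the ex3p pattern, modeled on the example source (exact code varies between versions):

    // fespace: the Nedelec ParFiniteElementSpace used to discretize H(curl);
    // AMS needs it to construct its auxiliary-space operators.
    HypreAMS ams(A, fespace);
    HyprePCG pcg(A);
    pcg.SetPreconditioner(ams);
    pcg.Mult(B, X);   // ex4p is analogous, with HypreADS(A, fespace) instead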

  MFEM's native Multigrid solver
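
MFEM also provides a built-in multigrid solver, demonstrated in Example 26, which builds a solver hierarchy from geometric (h-) and order (p-) refinements together with matrix-free operators. A representative run, assuming the example is built in ~/mfem/examples (the -gr and -or options set the number of geometric and order refinements, respectively):

    mpirun -np 8 ./ex26p -m ../data/star.mesh -gr 1 -or 2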

  Warning

Each additional order refinement doubles the polynomial order, which quickly becomes computationally expensive, so increase the number of order refinements with care.

  Low-order-refined methods

  • Examples 1, 2, 3, and 4 used algebraic methods applied to the discretization matrix for each of the problems. Example 26 showed how to use geometric multigrid together with matrix-free methods.

  • Low-order-refined (LOR) methods are an alternative matrix-free methodology for solving these problems. The LOR solvers miniapp provides matrix-free solvers for the same problems solved in Examples 1, 3, and 4. (A sketch of how the underlying LOR preconditioner can be used appears after this list.)

  • Go to the LOR solvers miniapp directory: cd ~/mfem/miniapps/solvers

  • Run make plor_solvers to build the parallel LOR solvers miniapp.

  • The --fe-type (or -fe) command line argument can be used to choose the problem type.

    • -fe h solves an $H^1$ problem (Poisson, equivalent to ex1).

    • -fe n solves a Nedelec problem (Maxwell in $H(\mathrm{curl})$, equivalent to ex3).

    • -fe r solves a Raviart-Thomas problem (grad-div in $H(\mathrm{div})$, equivalent to ex4).

  • As usual, the --mesh (-m) argument can be used to choose the mesh file. (Keep in mind that MFEM's meshes in the data directory are now found in ../../data relative to the miniapp directory.)

  • The number of mesh refinements in serial and parallel can be controlled with the --refine-serial and --refine-parallel (-rs and -rp) command line arguments.

  • The polynomial degree can be controlled with the --order (-o) argument.

  • Compare the performance of high-order problems with plor_solvers to that of Examples 1, 3, and 4. Here are some sample runs to compare:

    # 2D, 5th order, 256,800 DOFs
    mpirun -np 8 ./plor_solvers -fe n -m ../../data/star.mesh -rs 2 -rp 2 -o 5 -no-vis
    mpirun -np 8 ../../examples/ex3p -m ../../data/star.mesh -o 5

    # 3D, 2nd order, 2,378,016 DOFs
    mpirun -np 24 ./plor_solvers -fe n -m ../../data/fichera.mesh -rs 2 -rp 2 -o 2 -no-vis
    mpirun -np 24 ../../examples/ex3p -m ../../data/fichera.mesh -o 2
    
  • For more details on how LOR solvers work in MFEM, see the High-Order Matrix-Free Solvers talk (PDF, video) from the 2021 MFEM community workshop.
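
For illustration, the same LOR idea can be used directly in user code through MFEM's LORSolver class template, which builds the chosen solver on a low-order-refined version of a high-order bilinear form. This is a sketch only; the constructor details may differ between MFEM versions:

    // a: the high-order ParBilinearForm; ess_tdof_list: essential true dofs;
    // A, B, X: the high-order operator (possibly matrix-free), rhs, solution.
    LORSolver<HypreBoomerAMG> lor_amg(a, ess_tdof_list); // AMG on the LOR matrix
    CGSolver cg(MPI_COMM_WORLD);
    cg.SetRelTol(1e-12);
    cg.SetMaxIter(200);
    cg.SetOperator(*A);
    cg.SetPreconditioner(lor_amg);
    cg.Mult(B, X);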


  Additional solver integrations

In addition to the hypre AMG solvers and MFEM's built-in solvers illustrated above, MFEM also integrates with a number of third-party solver libraries, including:

  • PETSc — see the ~/mfem/examples/petsc directory

  • SuperLU — see the ~/mfem/examples/superlu directory

  • STRUMPACK — see ~/mfem/examples/ex11p.cpp

  • Ginkgo — see the ~/mfem/examples/ginkgo directory

  • AmgX — see the ~/mfem/examples/amgx directory

Most third-party libraries are not pre-installed in the AWS image, but you can still peruse the example source code to see the capabilities of the various integrations.

You can check the containers repository to see which third-party libraries are available for the image you chose. As of December 2023, we pre-install PETSc and SuperLU for the CPU images and AmgX for the CUDA images.

  Note

If you install MFEM locally, you can enable these third-party solver library integrations with the MFEM_USE_* configuration variables, e.g., by specifying MFEM_USE_PETSC=YES.
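
For example, a local parallel build with the PETSc integration enabled might look like the following sketch (PETSC_DIR is a placeholder for your PETSc installation path; the exact configuration variables are documented in MFEM's INSTALL file):

    cd ~/mfem
    make config MFEM_USE_MPI=YES MFEM_USE_PETSC=YES PETSC_DIR=/path/to/petsc
    make -j 4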

  Questions?

Ask for help in the tutorial Slack channel.

  Next Steps

Depending on your interests, pick one of the following lessons:

Back to the MFEM tutorial page