Local Docker Container
15 minutes basic
You don't need a cloud instance to run the MFEM tutorial. Instead, you can directly run the MFEM Docker container on a computer available to you.
The mfem/developer container has been specifically created to kickstart the exploration of MFEM and its capabilities in a variety of computing environments: from the cloud (like AWS), to HPC clusters, to your own laptop.
There are CPU and GPU variations of the image; we will refer to them generically as mfem/developer during the tutorial.
Below are instructions on how to start the container on Linux and macOS, and how to use it to run the tutorial locally.
You can also use the container (and similar commands) to set up your own cloud instance. See for example this AWS script.
Linux
Depending on your Linux distribution, you have to first install Docker. See the official instructions for e.g. Ubuntu.
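On Ubuntu, for instance, one quick way to get Docker is the distribution package (a minimal sketch; the official Docker repository provides newer versions, see the linked instructions):

```shell
# Install Docker from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y docker.io

# Allow your user to run docker without sudo
# (log out and back in for this to take effect)
sudo usermod -aG docker $USER
```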
Once the installation is complete and the docker command is in your path, pull the prebuilt mfem/developer-cpu container with:
docker pull ghcr.io/mfem/containers/developer-cpu:latest
Depending on your connection, this may take a while to download and extract (the image is about 2GB).
To start the container, run:
docker run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest
You can later stop this by pressing Ctrl-C. See the docker documentation for more details.
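If you prefer to keep your terminal free, you can instead run the container in the background and manage it with standard Docker commands (a sketch; the container name mfem-dev is an arbitrary choice, not part of the image):

```shell
# Start the container detached, giving it a name for easy reference
docker run -d --name mfem-dev --cap-add=SYS_PTRACE \
    -p 3000:3000 -p 8000:8000 -p 8080:8080 \
    ghcr.io/mfem/containers/developer-cpu:latest

# Check that it is running and which ports are mapped
docker ps

# Stop and remove it when you are done
docker stop mfem-dev
docker rm mfem-dev
```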
We provide two variations of our containers, configured with CPU-only or CPU and GPU capabilities. If you have a CUDA-capable NVIDIA GPU, you first have to install the NVIDIA Container Toolkit.
Our CUDA images are built with the sm_70 compute capability by default. If your GPU is an sm_70 device, you can use the prebuilt mfem/developer-cuda-sm70 image with:
docker pull ghcr.io/mfem/containers/developer-cuda-sm70:latest
To start the container, use:
docker run --gpus all --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cuda-sm70:latest
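To check which compute capability your GPU has, and to confirm that the GPU is actually visible inside the container, you can use nvidia-smi (a quick sanity check; the compute_cap query requires a reasonably recent NVIDIA driver, and passing a command like this assumes the image's entrypoint allows it):

```shell
# On the host: print the GPU's compute capability
# (e.g. "7.0" corresponds to sm_70, "8.0" to sm_80)
nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# Inside the container: the GPU should be listed here as well
docker run --rm --gpus all \
    ghcr.io/mfem/containers/developer-cuda-sm70:latest nvidia-smi
```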
If you need a different compute capability, you can clone the mfem/containers repository and build an image, e.g. for sm_80, with:
git clone git@github.com:mfem/containers.git
cd containers
docker-compose build --build-arg cuda_arch_sm=80 cuda && docker image tag cuda:latest cuda-sm80:latest
docker-compose build --build-arg cuda_arch_sm=80 cuda-tpls && docker image tag cuda-tpls:latest cuda-tpls-sm80:latest
This automatically builds all libraries with the correctly supported CUDA compute capability.
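Once the build finishes, you can start your local image the same way as the prebuilt one (a sketch; the cuda-tpls-sm80 tag comes from the docker image tag commands above, and the right tag to run may differ depending on which target you built):

```shell
# Run the locally built sm_80 image with the same port mappings as before
docker run --gpus all --cap-add=SYS_PTRACE \
    -p 3000:3000 -p 8000:8000 -p 8080:8080 \
    cuda-tpls-sm80:latest
```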
macOS
On macOS we recommend using Podman. See the official installation instructions here.
After installing it, use the following commands to create a Podman machine and pull the mfem/developer container:
podman machine init
podman pull ghcr.io/mfem/containers/developer-cpu:latest
Both of these can take a while, depending on your hardware and network connection.
To start the virtual machine and the container in it, run:
podman machine start
podman run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest
You can later stop these by pressing Ctrl-C and typing podman machine stop.
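A few additional Podman commands can be handy for day-to-day use (standard Podman CLI; the machine created by podman machine init is the default one):

```shell
# Check the status of the Podman virtual machine
podman machine list

# List running containers
podman ps

# Stop the virtual machine when you are done
podman machine stop
```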
Running the tutorial locally
Once the mfem/developer container is running, you can proceed with the Getting Started page using the following IP: 127.0.0.1. You can alternatively use localhost for the IP.
In particular, the VS Code and GLVis windows can be accessed at localhost:3000 and localhost:8000/live respectively.
Furthermore, you can use the above pages from any other devices (tablets, phones) that are connected to the same network as the machine running the container.
For example you can run an example from the VS Code terminal on your laptop and visualize the results on a GLVis window on your phone.
To connect other devices, first run hostname -s to get the local host name, and then use that {hostname} for the IP in the rest of the tutorial.
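Concretely, the lookup and the resulting URLs might look like this (a sketch with a hypothetical host name; whether the bare name resolves from other devices, or needs a .local suffix, depends on your network's name resolution):

```shell
# On the machine running the container:
hostname -s
# Suppose this prints "mylaptop"; then on a phone or tablet on the
# same network, open the following URLs in a browser:
#   http://mylaptop:3000        (VS Code)
#   http://mylaptop:8000/live   (GLVis)
```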
Questions?
Next Steps
Back to the MFEM tutorial page