Building Docker Multi-Arch Images with GitLab CI/CD
In one project I was working on, we migrated a Ruby on Rails application to the AWS Elastic Container Service. In this project, we used AWS Graviton2. AWS's own silicon is a 64-bit ARM processor based on ARM Neoverse cores (Alpine ALC12B00). After that decision, we looked at the entire stack to see which tools we needed to adapt. One tool that we use for the building and deployment process is a container base image. This image was not yet ready to run on the AWS Graviton processor architecture. In this post, I describe the work that I did to make the container base image available for both the x64 and ARM architectures.
You can find the code I am referencing in this blog post in this GitLab repository.
To build a Docker Image that supports multiple processor architectures, an image
has to be created for each architecture. Conveniently, the
docker buildx command
already supports Multi-Arch builds. In the following section, I describe how to
make sure that the GitLab CI/CD pipeline can run this command. After that, I
explain how to create the images and push them to the GitLab Container Registry.
Multi-Arch images can be built with the Docker CLI plugin
buildx. In the GitLab CI/CD pipeline, we use
the docker image, which contains the docker command, as our base image. However,
the needed plugin is not part of that image. So I had to decide how to install
the buildx plugin and make it available during the GitLab CI/CD pipeline runs. I
identified two ways to achieve this: either you install and configure buildx
during every run, or you create a Docker image that you can reuse in each run. I
chose the second option and created the build-base image, which is based on the
docker image.
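As a sketch, such a Dockerfile could look like the following. The buildx version and download URL are assumptions based on the buildx GitHub releases, and newer docker images may already ship the plugin:

```dockerfile
# Hypothetical Dockerfile for the build-base image.
FROM docker:latest

# Assumed buildx release; pin whichever version you need.
ARG BUILDX_VERSION=v0.11.2

# Install buildx as a Docker CLI plugin for the root user.
RUN mkdir -p /root/.docker/cli-plugins \
    && wget -O /root/.docker/cli-plugins/docker-buildx \
       "https://github.com/docker/buildx/releases/download/${BUILDX_VERSION}/buildx-${BUILDX_VERSION}.linux-amd64" \
    && chmod +x /root/.docker/cli-plugins/docker-buildx
```

Docker discovers CLI plugins in `~/.docker/cli-plugins`, so after this step `docker buildx` is available in every pipeline job that uses the image.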
There were two steps I had to do. First, I had to create the Dockerfile.
And second, I had to add the code to the GitLab CI/CD configuration to build and
push the image. As you can see in the gitlab-ci.yml file, we already use the
docker buildx command to build and push the image.
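A sketch of such a job is shown below; the job name and image path are assumptions, while the `CI_REGISTRY_*` variables are predefined by GitLab CI/CD:

```yaml
# Hypothetical job for building and pushing the build-base image.
build-base:
  image: $CI_REGISTRY_IMAGE/build-base:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx build --push
        -t $CI_REGISTRY_IMAGE/build-base:latest
        -f Dockerfile.build-base .
```

Note that this job uses the build-base image to build the build-base image, which leads directly to the bootstrapping problem described next.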
This is a typical chicken-and-egg problem that you often face during
bootstrapping. To solve it, I built the image once on my laptop and pushed it
to the registry.
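The one-time bootstrap amounts to something like this, run from a developer machine; the registry path and file name are placeholders:

```shell
# Authenticate against the GitLab Container Registry.
docker login registry.gitlab.com

# Build the build-base image locally and push it once,
# so later pipeline runs can pull it.
docker build -t registry.gitlab.com/<namespace>/<project>/build-base:latest \
  -f Dockerfile.build-base .
docker push registry.gitlab.com/<namespace>/<project>/build-base:latest
```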
With the base image in place, I could work on the Docker image that needs to run on x64 and ARM. There were two main things I had to consider. First, I needed to ensure that the Dockerfile works for both architectures, or have one Dockerfile per architecture. Second, I had to build the images and push them to the registry with the proper configuration.
The easiest option would be to write one Dockerfile for all architectures.
Unfortunately, this is not always possible, so we have to evaluate the best
solution on a case-by-case basis. Here, I used one file and passed arguments for
the architecture-specific configurations. If you look at the Dockerfile,
you can see that I use the Alpine Linux package manager
apk to install some
packages. This is always the best way, as you don't have to deal with finding
the correct package for each architecture. Yet, this does not always work. In my
case, I had to install a specific version of Terraform, for which I had to
download the binary. These binaries come in separate versions for each
architecture. To solve this problem, I passed variables for the download URL and
the filename to the Dockerfile. With this, I could keep one file and move the
architecture-specific configuration one layer up, into the Makefile.
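A sketch of this pattern is shown below; the base image, package list, and argument names are assumptions, and the Terraform archive naming follows the releases.hashicorp.com convention:

```dockerfile
FROM alpine:3.18

# Architecture-independent packages: apk resolves the correct
# build for the target architecture on its own.
RUN apk add --no-cache curl git make unzip

# Hypothetical build arguments: the caller (e.g. a Makefile) passes
# the architecture-specific download URL and archive filename.
ARG TERRAFORM_URL
ARG TERRAFORM_FILE

# Download and install the architecture-specific Terraform binary.
RUN curl -fsSLO "${TERRAFORM_URL}" \
    && unzip "${TERRAFORM_FILE}" -d /usr/local/bin \
    && rm "${TERRAFORM_FILE}"
```

The Dockerfile itself stays architecture-agnostic; only the values passed in at build time differ per platform.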
With the Dockerfile in place, the last step is to create and push the manifest.
The code for this is in the Makefile. Makefiles
have the advantage of keeping your code outside your CI/CD specification file.
This makes development and troubleshooting easier on your local machine. As I
pass architecture-specific arguments to the Dockerfile, we have to run the
docker buildx build command once per architecture. If you have a Dockerfile that
supports multiple architectures, you can use the
--platform argument to pass all
the needed architectures. Please note that you have to push the
architecture-specific images with unique image tags. These have to be different
from the tag specified in the
docker manifest create command. After the
manifest is created, the
docker manifest push command pushes it to the registry.
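Sketched as a Makefile, the per-architecture builds and the manifest steps could look like this; the image path, tag scheme, and variable names are assumptions, and cross-building arm64 on an x64 runner assumes QEMU emulation is set up:

```makefile
IMAGE := registry.gitlab.com/<namespace>/<project>/base
TAG   := latest

# Build and push one image per architecture, each with a unique tag
# and its own architecture-specific Terraform download URL.
build:
	docker buildx build --push --platform linux/amd64 \
		--build-arg TERRAFORM_URL=$(TERRAFORM_URL_AMD64) \
		-t $(IMAGE):$(TAG)-amd64 .
	docker buildx build --push --platform linux/arm64 \
		--build-arg TERRAFORM_URL=$(TERRAFORM_URL_ARM64) \
		-t $(IMAGE):$(TAG)-arm64 .

# Create a manifest list that references both architecture-specific
# images, then push it under the tag clients actually pull.
manifest:
	docker manifest create $(IMAGE):$(TAG) \
		$(IMAGE):$(TAG)-amd64 \
		$(IMAGE):$(TAG)-arm64
	docker manifest push $(IMAGE):$(TAG)
```

When a client pulls `$(IMAGE):$(TAG)`, the registry serves the image matching the client's architecture from the manifest list.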
With the Dockerfile and the Makefile in place, the last step was to add a
job to the
gitlab-ci.yml specification. Please note that you need to log in to the
GitLab Container Registry. You can see in the
gitlab-ci.yml file that we use the
docker login command for this.
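Such a job could be sketched as follows; the job name and make targets are assumptions, while the `CI_REGISTRY_*` variables are predefined by GitLab CI/CD:

```yaml
# Hypothetical job that builds the multi-arch images and the manifest.
build-image:
  image: $CI_REGISTRY_IMAGE/build-base:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - make build
    - make manifest
```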
In the meantime, I have recorded a video that covers this topic and contains some updates. So head over and take a look at it.
For more information about the topic, you can head over to the following links: