Configurable devcontainers with Docker targets
🎊 Happy New Year! 🎊
A bit late to the party, but this is my first post of 2024 so if you’re reading this: I hope you have a great year!
This weekend, I found out about the build.target property in devcontainer.json files, and it filled in a massive knowledge gap for me: adding conditional logic to your devcontainer image.
This could be useful when:
- You have a monorepo and the projects have slightly differing requirements for their development environments.
- You have many complex dependencies and want to opt-in to specific ones.
- Different developers want different levels of ricing 😋
I became motivated to do this when I started adding library dependencies built from source to an experimental repository. I didn’t want those dependencies to be required in the Docker image when I was working on other parts of the code.
This brief post contains a minimal (and somewhat contrived) example of this feature. It assumes you have some familiarity with devcontainers.
Let’s start with a super basic devcontainer, assuming the following file structure:
/some_repository
    /.devcontainer
        devcontainer.json
        Dockerfile
Initial contents of devcontainer.json:
{
    "name": "targets-example",
    "build": {
        "dockerfile": "Dockerfile"
    }
}
Initial contents of Dockerfile:
FROM ubuntu:22.04

RUN apt update && \
    apt install --no-install-recommends -y \
    gcc \
    g++ \
    gdb \
    git
If you open this as a devcontainer in VS Code, you get a simple Ubuntu + GCC development environment. For the sake of brevity, let’s assume this is sufficient for our hypothetical project. We also don’t mind whatever versions of tools we get from apt. This devcontainer image (on my machine at the time of writing) is 350 MB, which isn’t too big for a development environment.
Now, let’s imagine we want to support more compilers! If we add just clang to the list above, we’re up to 729 MB. That’s still not too bad though, and it’s likely we’ll be adding new tools of a similar size from time to time anyway.
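For reference, that change is a single extra entry in the apt install list (clang here being the default Clang metapackage in Ubuntu’s repositories):

# The same Dockerfile as before, with clang added to the package list.
FROM ubuntu:22.04

RUN apt update && \
    apt install --no-install-recommends -y \
    gcc \
    g++ \
    gdb \
    git \
    clang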
However, if we decide to start using the Intel compilers (as per their apt documentation), we’re up to a whopping 15.6 GB! OK, there are some tricks to slim that down a little… but even then the image will still be well into the gigabytes.
If a considerable group of developers or consumers really wants to use the Intel toolkits, it would be much nicer to give them the option to do so as a first-class feature.
This is where the build.target property in devcontainer.json comes in clutch. We can:
- Make our gcc/clang base a Docker target.
- Set this to be the default value of the build.target property in devcontainer.json.
- Extract our Intel toolchain installation logic to a new Docker target.
To do this, we update our devcontainer files like so:
devcontainer.json:
{
    "name": "targets-example",
    "build": {
        "dockerfile": "Dockerfile",
        "target": "base"
    }
}
Dockerfile:
FROM ubuntu:22.04 as base

RUN apt update && \
    apt install --no-install-recommends -y \
    gcc \
    g++ \
    git \
    wget \
    gpg \
    ca-certificates

FROM base as intel

RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
    | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null && \
    echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" \
    | tee /etc/apt/sources.list.d/oneAPI.list && \
    apt update && \
    apt install --no-install-recommends -y \
    intel-basekit \
    intel-hpckit
Our Intel lovers can now change the value of build.target in devcontainer.json to intel to opt into using those development tools. They will have to wait for the image to rebuild (and presumably go and make a lot of coffee…), but we will have given them first-class support for their toolchain without forcing everyone else to endure the increases in container size and build time.
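For example, an Intel user’s devcontainer.json would differ from the default only in the target value:

{
    "name": "targets-example",
    "build": {
        "dockerfile": "Dockerfile",
        "target": "intel"
    }
}

Everyone else leaves the value as base and never has to download the oneAPI packages.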
It should go without saying that this is only one concrete example of when using this feature might make sense. Some other examples are:
- Supporting multiple external base images, e.g. different Linux distros (see the sketch after this list)
- Installing libraries that have complex dependencies, or are built from source
- Installing different development tools in a multi-language repository
- Providing access to experimental tooling or base images
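To illustrate the first of these, here is a minimal sketch of two alternative distro bases exposed as targets; the Fedora image tag and package names are purely illustrative and not part of the example above:

# Two alternative bases exposed as build targets; pick one via build.target.
FROM ubuntu:22.04 as ubuntu-base
RUN apt update && \
    apt install --no-install-recommends -y gcc g++ git

# Illustrative second distro; swap in whatever base image your team needs.
FROM fedora:39 as fedora-base
RUN dnf install -y gcc gcc-c++ git

Developers would then set build.target to ubuntu-base or fedora-base in their devcontainer.json.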
However, for the sake of completeness: not all of the problems above need to be solved with tooling. For example, one could contribute a hard-to-build dependency to a package manager, or split the code into multiple repositories. Decisions like these depend on your organisation.
I hope you found this quick post useful!