Containerize your tooling

Mariusz Walczyk

12 Feb 2024 · 7 minutes read



In the world of DevOps, managing a plethora of tools can often feel overwhelming. To simplify this, let's explore how Docker, combined with lightweight container images, can streamline your workflow. Think of this as creating a multifunctional toolset tailored to your specific needs.

Generic and extendable Docker image for DevOps work

In many cases, we need different tools for DevOps work, and it's not always possible to install all of them on the local machine. In that case, we can use a Docker image with all the tools we need. Such an image can also serve as a base image for other images if we need to add more tools later. The preferred image for many DevOps engineers is Alpine Linux: it's small and fast. To run Alpine Linux as a Docker container, we can use the command below:

docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host alpine sh

After running this command, we will be logged in as the root user inside the container. Now let's explain the syntax of the command for those who are not familiar with Docker:

  • -it runs the container in interactive mode and allocates a pseudo-TTY; in other words, we'll be able to use a terminal inside the container
  • --rm removes the container after it exits, as we don't want it dangling around once we're done
  • -v mounts a local directory into the container so we can access local files from inside it; change ${HOME} and ${PWD} to the directories you want to mount
  • -w sets the working directory inside the container, so we don't have to cd into it after starting
  • --net host instructs Docker to use the host network stack inside the container
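The full docker run line is long to retype every time. One option is to wrap it in a small shell function; the name toolbox and the alpine/sh defaults below are illustrative, not part of any standard tooling:

```shell
# Hypothetical convenience wrapper around the docker run invocation above;
# "toolbox", the alpine default image, and the sh default command are
# illustrative choices.
toolbox() {
  docker run -it --rm \
    -v "${HOME}":/root/ \
    -v "${PWD}":/work \
    -w /work \
    --net host \
    "${1:-alpine}" "${2:-sh}"
}
```

With this in your shell profile, plain toolbox drops you into an Alpine shell, and toolbox my-image bash starts any other image.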

Containers created from the Alpine Linux image (the default latest tag) contain only basic tools like ls, cp, mv, etc. To install other tools, we need to use the apk package manager. There is a wide variety of packages available for Alpine Linux. You can find the list of packages that are part of Alpine Linux here, or you can use the command below to find the package you need:

apk search --no-cache <package_name>

For example, to search for curl we can use the command below:

apk search --no-cache curl

And then to install the curl package, use the command:

apk add --no-cache curl

The --no-cache option prevents apk from caching the package index locally, which is recommended for Docker images.

After quitting the container using the exit command, we can use the line below to save the container as an image:

docker commit <container_id> <image_name>

Now, we can use the image we created to run a container with all the tools we need. After using this method to create multiple images, we end up with a library of images with different tools installed. You can view the list of images with the docker images command. My list of images contains the following tools: kubectl, helm, awscli, velero, flux, istio. But it will definitely grow in the future.
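docker commit is quick, but the resulting image is an opaque snapshot. A declarative alternative that fits the library-of-images idea is a short Dockerfile per toolset; the sketch below assumes the curl, jq, and git packages from the Alpine repositories, and the package list is purely illustrative:

```dockerfile
# Minimal toolset image built declaratively instead of via docker commit;
# the package list is illustrative.
FROM alpine:3.19
RUN apk add --no-cache curl jq git
```

Built with docker build -t my-toolbox ., the tool list lives in version control instead of being hidden inside a committed container.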

Keep in mind that there are also downsides to using Alpine Linux as a base image, one of which is the absence of the GNU C Library (glibc). This exclusion results in the unavailability of certain applications in the Alpine Linux repository, such as the Chef client. To install software that depends on glibc, like the Chef client, one option is to use a base image that includes glibc. Several Linux distributions, including Ubuntu, Debian, and CentOS, offer such images.
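For example, a glibc-based toolset image could start from Ubuntu instead; a minimal sketch, with illustrative package names:

```dockerfile
# Same idea as the Alpine toolset, but on a glibc distribution so that
# glibc-linked binaries such as the Chef client can run.
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```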

Toolkit for use on a Kubernetes cluster

Docker images created in the previous section can be used in the local environment, but in many cases we need the tools on a Kubernetes cluster. For example, we may need a mysql client to connect to a database running on the cluster, or one managed by a cloud provider and accessible from the cluster. In that case, we need images pushed to a container registry accessible from the cluster. In my case, I use AWS ECR. To push the image to ECR, we need to tag it with the ECR repository URI first.

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <ecr_repository_uri>
docker tag <image_name> <ecr_repository_uri>
docker push <ecr_repository_uri>
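The ECR repository URI in the commands above follows a fixed pattern: account ID, region, and repository name. A small sketch with placeholder values:

```shell
# Compose the ECR repository URI from its parts; the account ID, region,
# and repository name below are placeholders.
account_id="123456789012"
region="eu-west-1"
repo_name="devops-toolbox"
ecr_repository_uri="${account_id}.dkr.ecr.${region}.amazonaws.com/${repo_name}"
echo "${ecr_repository_uri}"
```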

AWS ECR is not the only option. Every cloud provider has its own container registry (Google Container Registry, Azure Container Registry, etc.), and there are also public container registries like Docker Hub and Quay, which can be used with any Kubernetes cluster.
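Once the image is in a registry the cluster can pull from, a one-off pod is enough to use the tools from inside the cluster; a sketch, assuming the image pushed above (the pod name toolbox is illustrative):

```shell
# Start a throwaway interactive pod from the pushed image; --rm deletes
# the pod again when the shell exits.
kubectl run toolbox --rm -it --image=<ecr_repository_uri> -- sh
```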

Declarative approach to build images

In case we don't want to build and host our own images, we can use Nixery. Nixery is a container registry that uses the Nix package manager to build images on demand. With this declarative approach, we select the applications to be installed in the image simply by adding them to the image name. For example, if we need helm, kubectl, jq, yq, and git, we can use the image name nixery.dev/shell/kubernetes-helm/kubectl/jq/yq/git (names are resolved against the nixpkgs package set, where helm's attribute is kubernetes-helm, and the shell component provides a basic shell environment). We can use it the same way as any other image:

docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host nixery.dev/shell/kubernetes-helm/kubectl/jq/yq/git bash

It might take a while to download the image; at least that was the case for me. Also, the image is quite big (427MB). However, the ease of use and the fact that we don't need to build and host our own images is a big plus.

Real world example


On my latest project, I needed the chart-testing tool to test helm charts. I didn't want to install it on my local machine, so I created a Docker image with the chart-testing tool installed and used it to run the tests on the helm charts. The image can be found here. To run the tests, I used the command below:

$ docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host ct lint --charts=charts/my-helm-chart-source --validate-maintainers=false
Linting charts...
Version increment checking disabled.

 Charts to be processed:
 my-helm-chart-source => (version: "0.4.8", path: "charts/my-helm-chart-source")

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "my-helm-chart-source" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading intouch-libchart from repo
Deleting outdated charts
Linting chart "my-helm-chart-source => (version: \"0.4.8\", path: \"charts/my-helm-chart-source\")"
Validating /work/charts/my-helm-chart-source/Chart.yaml...
Validation success! 👍
==> Linting charts/my-helm-chart-source
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

 ✔︎ my-helm-chart-source => (version: "0.4.8", path: "charts/my-helm-chart-source")
All charts linted successfully

As you may have noticed, I didn't start a shell inside the container; I ran the chart-testing tool directly. With the local directory mounted as a volume, I was able to access the helm chart files from the container. If needed, I can also save the command's output to the local directory, so there's no need to copy files out of the container.
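Capturing the output is then just shell redirection on the host side; for example (the log file name is illustrative):

```shell
# Run the linter non-interactively and keep its report on the host;
# the redirection happens outside the container entirely.
docker run --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host \
  ct lint --charts=charts/my-helm-chart-source > ct-lint.log
```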

Specific version of application available in binary format

In some cases we need a specific version of a tool. For example, I needed the istioctl tool in version 1.16.6, while the latest version at the time was 1.19.3. I didn't want to install the tool on my local machine, so I installed it inside a throwaway Alpine container. The binary can be found on the Istio GitHub releases page. To run the tool, I used the commands below:

docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host alpine sh
apk add --no-cache curl
curl -Lo /tmp/istioctl-1.16.6-linux-amd64.tar.gz https://github.com/istio/istio/releases/download/1.16.6/istioctl-1.16.6-linux-amd64.tar.gz
tar -C /tmp/ -zxvf /tmp/istioctl-1.16.6-linux-amd64.tar.gz
mv /tmp/istioctl /usr/local/bin/istioctl
chmod +x /usr/local/bin/istioctl

After running the commands above I was able to generate the manifest for the istio installation:

istioctl manifest generate -f istio-operator.yaml > istio-1.19.3.yaml

As I needed the tool only once, but for multiple versions of istio, I didn't save the image; I just reran the commands above to install the tool and generate the manifest.
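Had the same version been needed repeatedly, the manual steps above could be captured in a Dockerfile with the version as a build argument; a sketch:

```dockerfile
# Pin a specific istioctl release at build time; override with
# --build-arg ISTIO_VERSION=... for other versions.
FROM alpine:3.19
ARG ISTIO_VERSION=1.16.6
RUN apk add --no-cache curl \
 && curl -Lo /tmp/istioctl.tar.gz \
      https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istioctl-${ISTIO_VERSION}-linux-amd64.tar.gz \
 && tar -C /usr/local/bin -zxf /tmp/istioctl.tar.gz istioctl \
 && rm /tmp/istioctl.tar.gz
ENTRYPOINT ["istioctl"]
```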


To sum up, Docker offers a powerful solution for managing DevOps tools, especially when paired with Alpine Linux. With prepared images, we can keep all the tools we need in one place and use them when needed, without installing anything on our local machine. This approach not only simplifies your workflow but also ensures efficiency whether you're working locally, on Kubernetes clusters, or in CI/CD pipelines. Give it a try and watch your toolkit transform!

Reviewed by: Paweł Maszota
