Preview environment for Code reviews, part 4: Docker image in Artifact Registry

Mateusz Palichleb

13 Feb 2023, 6 minutes read


Continuation of the Reverse-proxy topic

This blog post is the fourth part of a series of articles that cover building a separate Preview environment for each Code Review (Merge Request).

In the previous article, we successfully configured and tested the Reverse-proxy server in a local environment using a Docker container. We are still missing a cloud deployment that makes the Docker image available to the Kubernetes cluster operated by the client company.

In this article, we will focus precisely on this missing element, i.e. publishing the Docker image to the cloud (in this case, a Google Artifact Registry repository). We will also give an initial overview of the infrastructure the client uses to deploy Docker images from the cloud to a Kubernetes cluster.

How does the CD solution work in the K8s cluster?

Let's take a brief look at what Continuous Deployment (CD) looks like for the services in the company's Kubernetes cluster.

DevOps tools used there include:

  • Terraform - an industry standard for IaC (Infrastructure as Code)
  • Kustomize - Kubernetes configuration management, allows us to build configuration files based on filled-out templates
  • Flux - a tool for GitOps; the whole project infrastructure is managed and versioned in Git.

The CD is based on the fact that any Docker image is stored in the GCP cloud (more specifically, in GAR - Google Artifact Registry).
When a new version of an image appears (with an appropriately incremented tag), the image is automatically replaced in the deployment configuration in k8s, which triggers a re-deployment with the new image.

The Fluxbot (Flux) tool takes care of monitoring new image versions, while Kustomize is indirectly used to generate the new configuration.

Fluxbot performs a commit in the Git repository that manages the cluster where the new configuration is applied. At this point, the corresponding pipeline launches the new deployment (via GitHub Actions). This is where all the “magic” happens.
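To make the image-monitoring step more concrete, here is a hedged sketch of what the Flux image-automation objects could look like for such a setup. This assumes the cluster runs the Flux v2 image-automation controllers (`image.toolkit.fluxcd.io/v1beta1`); the project ID `my-gcp-project` and the `flux-system` namespace are hypothetical placeholders, and the tag pattern matches the `<short-sha>-<timestamp>` tags generated later in this article.

```yaml
# Scans the GAR repository for new tags of our image
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: staging-reverse-proxy
  namespace: flux-system
spec:
  image: europe-west2-docker.pkg.dev/my-gcp-project/nice-app/staging-reverse-proxy
  interval: 1m
---
# Picks the newest tag by extracting the UNIX timestamp suffix and ordering numerically
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: staging-reverse-proxy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: staging-reverse-proxy
  filterTags:
    pattern: '^[a-f0-9]+-(?P<ts>[0-9]+)$'
    extract: '$ts'
  policy:
    numerical:
      order: asc
```

With such a policy in place, Flux can commit tag updates back to the Git repository that drives the cluster, which is exactly the GitOps loop described above.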

What does this mean in our case? That we need our Docker image for the Reverse-proxy server in Nginx to be uploaded to the GAR repository as well, so that it matches our infrastructure for CD.

Reverse-proxy server Git repository on GitHub

Choice of provider for a Git repository

Note that the client company uses two service providers for different Git repositories: GitHub and GitLab:

  • The Nice-app application files are located under the GitLab repository (for which we created the CI/CD pipelines in article no. 2)
  • GitHub is also available as an option (used mainly for infrastructure projects)

Our Reverse-proxy project falls into the infrastructure category. Because the company uses GitHub to store infrastructure projects that are deployed to the K8s cluster, we maintain consistency and create the new repository on GitHub as well.

Creating a repository

First, we need to create a new Git repository, solely for storing the Dockerfile and the nginx.conf configuration.

We then add the Dockerfile and nginx.conf files we defined earlier to the main branch, commit and push to GitHub.

This way we have a base on which we can build a CI environment (via GitHub Actions) that will upload the Docker image to the GCP GAR automatically every time there is a change on the main branch.
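A minimal sketch of the repository bootstrap described above; the directory name is a hypothetical placeholder, and the Dockerfile and nginx.conf contents come from the previous article (they are created empty here purely for illustration).

```shell
# Create the new repository with a main branch
mkdir staging-reverse-proxy && cd staging-reverse-proxy
git init -b main

# Dockerfile and nginx.conf were defined in the previous article;
# empty placeholders here stand in for the real files
touch Dockerfile nginx.conf

# Commit both files to the main branch
git add Dockerfile nginx.conf
git -c user.name="demo" -c user.email="demo@example.com" \
  commit -m "Add reverse-proxy Dockerfile and nginx.conf"
git log --oneline
```

After adding the GitHub remote (`git remote add origin <your repository URL>`), `git push -u origin main` publishes the branch to GitHub.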

Docker image repository inside the GCP Google Artifact Registry

To be able to upload images to a repository, you first need to create one in the GCP GAR. To do this, we open the Google Cloud Console, search for the Artifact Registry service, and go to its page. Clicking the Create Repository button lets us add a new repository, e.g. named "nice-app"; our image will then live under the path "nice-app/staging-reverse-proxy".
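For those who prefer the CLI, the console steps above can be reproduced with gcloud. This is a sketch assuming the GAR repository is named nice-app (matching the workflow variables below); the project ID my-gcp-project is a hypothetical placeholder, and the command is printed rather than executed, since running it requires GCP credentials.

```shell
# Values follow the article; the project ID is a placeholder
LOCATION="europe-west2"
REPOSITORY="nice-app"
PROJECT_ID="my-gcp-project"

# Compose the equivalent gcloud invocation (printed, not executed)
CMD="gcloud artifacts repositories create $REPOSITORY --repository-format=docker --location=$LOCATION --project=$PROJECT_ID"
echo "$CMD"
```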


Automatic publication of the Docker image to the GCP GAR

With the GAR repository created, we can add the CI configuration in GitHub Actions for the Reverse-proxy project. The file should be placed in the GitHub repository at the path .github/workflows/google.yml. GitHub also provides generators for such configurations if you would like to start integrating with various services (AWS, GCP, etc.) on your own.

Contents of the google.yml file:

name: Build and Publish to GAR

on:
  push:
    branches: [ "main" ]

env:
  PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  GAR_LOCATION: europe-west2
  REPOSITORY: nice-app
  IMAGE: staging-reverse-proxy

jobs:
  build-publish:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    permissions:
      contents: 'read'
      id-token: 'write'

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      # Authenticate to Google Cloud via credentials json
      - id: 'auth'
        name: Set up Cloud SDK
        uses: google-github-actions/auth@v0
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
          token_format: 'access_token'

      # Login to docker in Google Artifact Registry
      - name: Docker configuration
        run: |-
          echo ${{ steps.auth.outputs.access_token }} | docker login -u oauth2accesstoken --password-stdin https://$GAR_LOCATION-docker.pkg.dev

      # Build the Docker image
      - name: Build
        run: |-
          export BUILD_TAG="${GITHUB_SHA:0:7}-$(date +%s)"
          echo "build_tag=$BUILD_TAG" >> $GITHUB_ENV # make it available for next steps
          docker build \
            --tag "$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$BUILD_TAG" \
            --build-arg GITHUB_SHA="$GITHUB_SHA" \
            --build-arg GITHUB_REF="$GITHUB_REF" \
            .

      # Push the Docker image to Google Artifact Registry
      - name: Publish
        run: |-
          docker push "$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:${{ env.build_tag }}"

Let us now describe what is happening here, step by step:

  1. Build is only triggered when a change takes place in the main branch. This is standard behavior, where we treat this branch as production.
  2. We define the variables that will be used in the next steps of the build. This is, of course, the path to the Docker repository created earlier in the GCP GAR, its location, and the project ID in the GCP cloud (where the k8s cluster is located).

The ${{ secrets.GCP_PROJECT_ID }} clause means that the CI will load the value of the variable from the secrets added to the GitHub repository (these can be added by a repository administrator). This avoids exposing sensitive data in the project code.

  3. In the first build step, we use the actions/checkout@v3 action, a GitHub Actions template that gives the workflow access to the repository files.
  4. In the next build step, using the google-github-actions/auth@v0 action, we authenticate to the GCP cloud, again using a secret set inside the GitHub repository by the administrator.
  5. We log in to Docker at Google Artifact Registry (to which we have just gained access).
  6. We build the Docker image from the files contained in the repository. This stage is divided into several steps:
    a) We create the variable BUILD_TAG="${GITHUB_SHA:0:7}-$(date +%s)". This is the tag the image will carry when it is later published to the GCP GAR. It is the concatenation of the first 7 characters of the $GITHUB_SHA variable (the commit SHA that triggered the workflow) and the current UNIX timestamp. We will later use this timestamp component to sort the Docker images by the latest tags.
    b) We additionally export it to the GitHub Actions environment ($GITHUB_ENV) so that the tag is also available in subsequent steps.
    c) We build a Docker image labeled with the appropriate tag.
  7. We publish the built Docker image to the GCP GAR repository, tagged with the previously generated tag.
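The BUILD_TAG construction described above can be tried out locally; the commit SHA below is a hypothetical placeholder standing in for the value GitHub Actions injects.

```shell
# Placeholder for the SHA that GitHub Actions provides as $GITHUB_SHA
GITHUB_SHA="5c48100e9d2f4a7b8c1d3e5f6a7b8c9d0e1f2a3b"

# First 7 characters of the SHA, joined with the current UNIX timestamp
BUILD_TAG="${GITHUB_SHA:0:7}-$(date +%s)"
echo "$BUILD_TAG"
```

Because the suffix is a monotonically increasing timestamp, tags from successive builds sort numerically from oldest to newest.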

Triggering a job on GitHub Actions

We now test the solution: after a new commit to the main branch, a build should appear in the GitHub Actions tab and upload the image to the repository.
The effect of deploying images to the GCP GAR: we can see several recently uploaded images tagged according to the guidelines (e.g. 5c48100-1666639074, where the first part is the shortened commit SHA and the second is the UNIX timestamp).

Brief summary

Another step behind us! This is the point at which the server image will be available for use in the Kubernetes cluster that the client has (because it has access to the repositories in the GCP GAR).
In the next article (part 5), we will prepare manifests for the Kubernetes cluster, which will use the docker image for Reverse-proxy in the new Pod and expose it to the world (via Ingress and Service objects). This will be the final piece we are missing for a complete solution.

Read the fifth and final part of the series here.
