
Preview environment for Code reviews, part 5: Deployment in the Kubernetes cluster

Mateusz Palichleb

13 Mar 2023 · 9 minutes read


Last missing element: Deployment of Reverse-proxy inside Kubernetes

This blog post is the fifth (final) part of a series of articles that cover building a separate Preview environment for each Code Review (Merge Request).

In the previous article, we successfully created a CI pipeline that publishes the Reverse-proxy server image to a repository in the Google Artifact Registry (GAR) service on GCP.

This makes the Docker image available to the Kubernetes cluster operated by the client company.

[Diagram: the CI/CD pipeline built so far, with the deployment into the Kubernetes cluster still missing]

As you can see, we are still missing Continuous Deployment in the Kubernetes cluster itself, along with the Deployment, Service, and Ingress manifests. In this article, we will focus precisely on the last missing element, namely the CD of Reverse-proxy inside Kubernetes.

Configuration for Kubernetes

The first thing we will create is the Reverse-proxy configuration for Kubernetes, which should be added to the cluster management repository (this repository describes the infrastructure of all projects in the Kubernetes cluster as IaC, Infrastructure as Code). The configuration is needed to define the behavior of the individual components so that they meet our requirements.

“Deployment” manifest

Firstly, we need a definition of the deployment itself, which will keep the container alive. :)

File deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nice-app-staging-proxy
    app.kubernetes.io/name: nice-app-staging-proxy
  name: nice-app-staging-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nice-app-staging-proxy
  strategy: {}
  template:
    metadata:
      labels:
        app: nice-app-staging-proxy
    spec:
      containers:
        - image: europe-west2-docker.pkg.dev/nice-app/nice-app/staging-reverse-proxy:5c48100-1666639074 # {"$imagepolicy": "flux-system:nice-app-staging-proxy"}
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /
              port: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
          name: nice-app-staging-proxy
          ports:
            - containerPort: 80
              name: http
      imagePullSecrets:
        - name: gcr-pull
status: {}

First of all, we reference the Docker image that was uploaded to GCP GAR, with its latest tag and a comment that tells the Flux tool where to update the image reference (the so-called image-policy). This tag will be changed automatically: Fluxbot adds a new commit to the repository whenever a new version of the image appears in the GCP GAR repository.

BTW: the readinessProbe and livenessProbe endpoints are also required for the service to work correctly. We unify all labels and metadata under the single name nice-app-staging-proxy.
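
Once Flux has applied this manifest, a quick way to confirm that the Deployment is healthy is to query it with kubectl. This is only a sketch: the resource names follow the manifest above, and it assumes you have kubectl access to the cluster and the default namespace.

# Wait for the rollout of the Deployment to finish
kubectl -n default rollout status deployment/nice-app-staging-proxy

# Check that the Pod is Running and passing its readiness probe
kubectl -n default get pods -l app=nice-app-staging-proxy

# Inspect the Deployment if the image cannot be pulled (e.g. a missing gcr-pull secret)
kubectl -n default describe deployment nice-app-staging-proxy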

“Service” manifest

We also need to define a “Service” manifest, which makes the Pods created by the Deployment reachable from other services inside the cluster (i.e. visible on the cluster-internal network).

File service.yml:

apiVersion: v1
kind: Service
metadata:
  name: nice-app-staging-proxy
  namespace: default
  labels:
    app: nice-app-staging-proxy
spec:
  type: ClusterIP
  selector:
    app: nice-app-staging-proxy
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
status:
  loadBalancer: {}

In this case, we also unify the metadata under the same name, nice-app-staging-proxy. The service type is ClusterIP. It is worth noting that we expose port 80 as plain HTTP (no encryption so far, as it is unnecessary for internal communication).
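
To check the Service from a local machine before the Ingress is in place, you can port-forward it and send a request with a staging Host header. A minimal sketch, assuming kubectl access and the names from the manifests above; the subdomain is only an example:

# Forward local port 8080 to the Service's port 80
kubectl -n default port-forward service/nice-app-staging-proxy 8080:80

# In another terminal: the Host header must match the wildcard pattern handled by the proxy
curl -H "Host: example-branch.staging.nice-app.com" http://localhost:8080/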

“Ingress” manifest

For the Service to be reachable from outside the cluster (indirectly, through the next element, i.e. the LoadBalancer), we also need a manifest for the Ingress object.

File ingress.yml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nice-app-staging-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-issuer
  labels:
    app: nice-app-staging-proxy
    app.kubernetes.io/name: nice-app-staging-proxy
spec:
  rules:
    - host: "*.staging.nice-app.com"
      http:
        paths:
          - backend:
              service:
                name: nice-app-staging-proxy
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - "*.staging.nice-app.com"
      secretName: nice-app-staging-proxy-tls
status:
  loadBalancer: {}

Here we also unify the metadata, and define the hosts and subdomains that will be caught by the LoadBalancer to redirect traffic to this endpoint.

Communication between the LoadBalancer and the Service takes place on port 80, but the LoadBalancer itself manages SSL encryption for requests from the outside world.

In this case, the Let's Encrypt issuer (referenced by the letsencrypt-issuer annotation) is used to generate SSL certificates for the subdomains that may occur here.
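
Once the manifest is applied, the Ingress and the certificate requested through the cert-manager annotation can be inspected with kubectl. A sketch, assuming cert-manager is installed in the cluster and the names from the manifest above:

# The Ingress should list the wildcard host and an assigned address
kubectl -n default get ingress nice-app-staging-proxy

# cert-manager creates a Certificate resource and stores the result in the referenced secret
kubectl -n default get certificate
kubectl -n default describe secret nice-app-staging-proxy-tls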

SSL certificates and wildcard subdomains

A small curiosity: in order for SSL certificates to be generated for wildcard subdomains (those containing a *), a DNS-01 challenge implementation (the so-called "DNS challenge") is needed instead of HTTP-01 (see this post for more technical information).

As the implementation of DNS-01 is a complex subject, we will not attempt to describe it in this article; the series is already quite long anyway.

What happens when cert-manager cannot issue the wildcard certificate? A self-signed SSL certificate is used instead; it is not a trusted certificate, but it still allows us to access the Preview application page. Since the Preview is used by Developers and not end users, we can skip the DNS-01 implementation for this demo.
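
In practice this only means telling the HTTP client to accept the untrusted certificate. For example, cURL's -k (--insecure) flag skips certificate verification; the subdomain below is the one used in the test at the end of this article:

# Skip TLS verification while the wildcard certificate is self-signed
curl -k https://a4f194667b4fa23.staging.nice-app.com/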

CD configuration for K8S manifests and GCP image

With the initial configuration for Reverse-proxy in the K8s created, we can go one step further and use this configuration in the repository managing the cluster infrastructure.

To do this, we need to create an image-policy, whose name we indicated earlier in the deployment.yml file, and an image-repository defining where the Docker image repository is located.

File image-policy.yml:

apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImagePolicy
metadata:
  name: nice-app-staging-proxy
  namespace: flux-system
spec:
  filterTags:
    pattern: '^[a-fA-F0-9]+-(?P<ts>.*)'
    extract: '$ts'
  imageRepositoryRef:
    name: nice-app-staging-proxy
  policy:
    numerical:
      order: asc

In this case, note that the namespace is different: flux-system, because this is a separate piece of infrastructure dedicated to the Flux tool.

The image policy uses a regexp filter to extract only the timestamp from each tag; sorting these timestamps in ascending order then allows Flux to discover which Docker image tag is the most recent (the largest timestamp means the newest image).
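
For example, for the tag 5c48100-1666639074 used in deployment.yml, the pattern captures 1666639074 as the ts group. A quick way to sanity-check an equivalent extraction locally (sed uses a numbered group instead of the named ?P<ts> group):

# Prints "1666639074" - the part of the tag that the policy compares numerically
echo "5c48100-1666639074" | sed -E 's/^[a-fA-F0-9]+-(.*)$/\1/'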

File image-repository.yml:

apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImageRepository
metadata:
  name: nice-app-staging-proxy
  namespace: flux-system
spec:
  image: europe-west2-docker.pkg.dev/nice-app/nice-app/staging-reverse-proxy
  interval: 1m0s
  secretRef:
    name: gcr-pull

The manifest for the repository source is very simple; it also lives in the flux-system namespace, scans the GAR repository every minute, and authenticates with the gcr-pull pull secret.
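
If you have the Flux CLI available, you can verify that both objects reconcile and see which tag the policy has currently selected. A sketch, assuming the names and the flux-system namespace from the manifests above:

# Show whether the GAR repository is being scanned successfully
flux get image repository nice-app-staging-proxy

# Show the latest tag selected by the ascending numerical policy
flux get image policy nice-app-staging-proxy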

“Kustomize” manifest

Finally, we need to glue all these manifests together. We mentioned at the beginning that the Kustomize tool and Terraform are also used in the CD process. Terraform expects a new directory named after the manifests' metadata (i.e. nice-app-staging-proxy), in which it looks for a kustomization.yml file listing all the manifest files located in that same directory.

This setup is specific to this particular company's own K8s cluster and its infrastructure management repository. In other companies or projects this may be done differently, but in this case we simply matched the directory and file structure of an already existing solution.

Therefore, at the end we need a kustomization.yml file gluing everything together:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: nice-app-staging-proxy
resources:
  - deployment.yml
  - service.yml
  - ingress.yml
  - image-policy.yml
  - image-repository.yml

All the newly created manifest files in this directory are committed to the repository, so that they are picked up during the subsequent builds in GitHub Actions.
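
Before committing, it is worth rendering the directory locally to make sure Kustomize accepts it and that the common label is attached to every resource. A minimal check, assuming the directory is named nice-app-staging-proxy and a kubectl version with built-in Kustomize support:

# Render all five manifests with the common app label applied
kubectl kustomize ./nice-app-staging-proxy

# Alternatively, with the standalone Kustomize binary
kustomize build ./nice-app-staging-proxy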

Adaptation of Nginx server configuration for Reverse-proxy project for use in Kubernetes cluster

In the earlier test configuration of the Nginx server (the nginx.conf file), we used the IP address of the DNS resolver that exists locally in the Docker installation on the computer.

resolver 192.168.65.5; # docker localhost DNS resolver

However, in the case of Kubernetes the situation is different: there may be a DNS resolver available, there may be none at all, or access to it may be severely restricted. In our Nice-App use case, no such DNS resolver was available. We therefore decided to use the IP address of one of the publicly available DNS resolvers (in this case, Google's Public DNS).

resolver 8.8.8.8 valid=10s ipv6=off; # public Google DNS resolver

Of course, the list can be extended to several servers by specifying more than one IP address, but for the purposes of the demo only one was used. As you can see, two additional options are specified here:

  • valid=10s, which tells Nginx to cache the resolved IP addresses for 10 seconds (overriding the TTL of the DNS record); this keeps the cached addresses reasonably fresh while limiting how often we depend on the public server, which may have moments of unavailability, maintenance, network delays, etc.
  • ipv6=off, which stops Nginx from requesting IPv6 (AAAA) records when resolving names; this shortens request processing, as we do not need IPv6 for our staging environment
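
A quick way to confirm that the chosen resolver can actually resolve the S3 website endpoint used as the proxy destination (the domain appears in the nginx.conf below) is to query it directly, for example with dig:

# Ask Google's resolver for the S3 website endpoint, just as Nginx will do at runtime
dig @8.8.8.8 +short nice-app-staging.s3-website.eu-central-1.amazonaws.com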

After changing all the lines of code where the IP address of the resolver occurred, the nginx.conf file now looks as follows:

server_names_hash_bucket_size 64;

server {
    listen 80;

    server_name ~(.*)\.staging\.nice-app\.com$;
    set $subdomain $1;
    set $destination_domain 'nice-app-staging.s3-website.eu-central-1.amazonaws.com';

    location ~* (.*\.[A-Za-z]+)$ {
        resolver 8.8.8.8 valid=10s ipv6=off; # public Google DNS resolver
        set_escape_uri $escaped_uri $uri; # OpenResty Nginx module https://github.com/openresty/set-misc-nginx-module#set_escape_uri
        proxy_pass http://$destination_domain/$subdomain$escaped_uri$is_args$args;
    }

    location / {
        resolver 8.8.8.8 valid=10s ipv6=off;
        # adding a trailing slash to the end of URL if it does not exist (needed for proxy_pass to open AWS S3 directory, then to search for index.html inside)
        rewrite (.*[^\/])$ $1/ break;
        proxy_pass http://$destination_domain/$subdomain$uri$is_args$args;
    }
}

These changes need to be committed to the GitHub repository so that a new Docker image with the new Nginx configuration is automatically published to the GCP GAR repository. The Flux tool will then create a commit with the new image version in the company's cluster management repository, resulting in a redeployment in the K8s cluster.
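
Before pushing, the new configuration can also be smoke-tested locally, in the same spirit as in the earlier parts of the series. A hedged sketch: the image tag and the subdomain are only examples, and it assumes the Dockerfile from the previous article is in the current directory:

# Build and run the Reverse-proxy locally (image tag is illustrative)
docker build -t staging-reverse-proxy-local .
docker run --rm -p 8080:80 staging-reverse-proxy-local

# In another terminal: the Host header must match the server_name regexp from nginx.conf
curl -H "Host: my-branch.staging.nice-app.com" http://localhost:8080/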

Testing the solution

Congratulations! We have reached the end of our long road. Now it is time to test the solution. To do this we can, for example, use the cURL tool just as we did in article no. 3; for the URL previously generated by the Preview environment for the Merge Request in GitLab, it should return the HTML code of the Nice-App application.

$ curl https://a4f194667b4fa23.staging.nice-app.com/
<html>
<head><title>Nice-app application</title></head>
<body>
<center>Hello!</center>
</body>
</html>

In this case, we had the Docker daemon disabled locally on the computer and had already removed the redirect of the above URL from the /etc/hosts file. So the DNS in AWS Route 53 worked and directed us to the LoadBalancer in the Kubernetes cluster, where the Reverse-proxy server forwarded the request to a subdirectory of the staging Bucket underneath.
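
If something does not work, the individual hops can be checked separately: the wildcard record in Route 53 and the certificate presented by the LoadBalancer. A sketch using the same subdomain as above:

# The wildcard record should resolve to the cluster's LoadBalancer
dig +short a4f194667b4fa23.staging.nice-app.com

# Show the TLS handshake and the response headers without downloading the body
curl -sv -o /dev/null https://a4f194667b4fa23.staging.nice-app.com/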

Summary

It was a complicated path (5 blog posts), but as you can see it is achievable for the Developer (provided you read the documentation and think through the problem at the outset before implementation).

We also arrived at some solutions through discovery (e.g. errors at proxy_pass in the Nginx server configuration due to the specific operation of bucket hosting in AWS S3). I hope you found it interesting.

Furthermore, we were able to fit within the assumptions made at the outset, where the client expected to use elements of the infrastructure it currently maintained.

Alternative solutions

Building Preview environments for Code Review can be solved in many different ways. Therefore, I realize that the solution proposed in this series of articles may not be the best option possible.

I encourage you, dear reader, to provide links to other examples of building environments for Preview in the comments. This will certainly enrich the knowledge of those who, perhaps due to budgetary or legal constraints, cannot build Preview in the manner outlined above.

Those with very deep knowledge of Nginx configuration will probably also have something interesting to say.

Since you have read the article up to this point, thank you in advance for your patience and reading the entire series of articles. See you next time!
