Running Kafka Connect Cluster as a Native Kubernetes Application

Grzegorz Kocur

30 Nov 2021 · 7 minutes read


Nowadays, it is probably clear to everybody — Kubernetes is the de-facto standard when we talk about container orchestration. There are good reasons for this: it creates an abstraction layer between the system we want to run and the underlying hardware or cloud resources.

In this post, I’ll try to describe the way to run a Kafka Connect cluster on the Kubernetes cluster. But not only run: I’ll try to have Kafka Connect as a native Kubernetes application.

Features of a native Kubernetes application

Let’s try to enumerate the native Kubernetes application features:

  1. The application is specifically designed to run on Kubernetes.
  2. The application is entirely managed by Kubernetes APIs, so one can use any tool that “talks” to the API — the first and probably most important example is kubectl.
  3. To fulfil the above requirement, the application provides the Custom Resource Definition and a controller that “understands” the logic from this CRD. The CRDs and the controller make up something called the Kubernetes Operator.
  4. The Operator can create and manage the application using Kubernetes Manifests.
  5. The Operator can manage the application declaratively — so they can use the GitOps model.

Let’s try to deploy the Kafka Connect cluster and check if it fulfils those requirements.


Imagine you are the DevOps Engineer in some company. The company uses the Kubernetes cluster to run the services. There is also a MySQL database as a part of the system.

One day, the CTO comes to you and asks you to get all records from the MySQL database into Kafka Cluster so the company can run some analytics software that consumes Kafka messages and creates some statistics. As an additional requirement, the developers want to use Schema Registry and protobuf serialization. This sounds like a perfect job for Kafka Connect!


I’m assuming we already have the k8s cluster. The Kafka cluster and the MySQL run on k8s. We also use the GitOps model to deploy the applications on the Kubernetes cluster.

In this example, I use the FluxCD as a continuous delivery tool which supports GitOps and the Strimzi Kafka Operator to deploy the Kafka cluster, but one can use any other tools, for example ArgoCD and MSK (the AWS managed Kafka cluster). I use the MySQL image from the Debezium tutorial.

You can find the GitOps repository that describes the whole system here. There are a lot of simplifications there but this article is not supposed to answer The Ultimate Question of Life, the Universe, and Everything ;)

The Plan

It turns out the Strimzi Kafka Operator provides several CRDs. The first, obviously, is kafka — the object that describes the Kafka cluster as a k8s manifest. But besides Kafka itself, we can also manage such objects as… kafkaconnects or kafkaconnectors!

The latter is especially interesting: normally, when we run a Kafka Connect cluster in a distributed mode, we have to configure the connectors by performing some REST API calls. It spoils our all-is-defined-declaratively world. But the kafkaconnector CRD solves this issue. Stay tuned!
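For comparison, this is what such a REST call typically looks like: a POST against the Connect REST API with the connector configuration as JSON. The endpoint and payload shape are the standard Connect REST API; the host and the connector config shown here are illustrative:

```
# Hypothetical REST call — this is exactly what the kafkaconnector CRD replaces.
# Assumes the Connect REST API is reachable at localhost:8083 (e.g. via port-forward).
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "inventory-connector",
        "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "tasks.max": "1"
        }
      }'
```

With the CRD approach, this imperative call disappears and the same configuration lives in Git as a manifest.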

We are on Kubernetes, so everything is inside containers. We will also need a Docker image with all connectors and libraries we need inside.

So the plan is:

  1. Create a proper docker image.
  2. Deploy the Schema Registry.
  3. Create a kafkaconnect manifest.
  4. Create a kafkaconnector manifest.
  5. Make coffee and watch how everything is working.

The Docker image

The goal is to read records from the MySQL database. We’ll use the Debezium connector, which reads data from the MySQL db using a technique called Change Data Capture (CDC). Technically, the connector joins the MySQL replication setup as one of the followers and reads changes from the binlog.
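Each change the connector captures is emitted as a Debezium-style change event: a before/after snapshot of the row plus an operation code. A minimal sketch in Python — the field names follow Debezium's envelope format, while the row values are made up for illustration:

```python
# Simplified Debezium change event for an UPDATE on the customers table.
# Envelope field names ("before", "after", "op", "ts_ms") follow Debezium;
# the row values here are illustrative, not taken from a real database.
event = {
    "before": {"id": 1004, "email": "annek@noanswer.org"},
    "after": {"id": 1004, "email": "anne.kretchmar@example.com"},
    "op": "u",        # "c" = create, "u" = update, "d" = delete, "r" = snapshot read
    "ts_ms": 1638262800000,
}

def current_row_state(event):
    """Return the row state after the change (the "after" image)."""
    return event["after"]

print(current_row_state(event))
```

Downstream consumers (like the analytics software in our scenario) usually only care about the "after" image, but having "before" and "op" makes it possible to replay or audit every change.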

The procedure is described in the Strimzi documentation, so one can use it as a reference. Basically we want to add all plugins and libs to the already existing base image provided by Strimzi.

Let’s create a Dockerfile:

# Strimzi base image — pick the tag matching your operator and Kafka version (assumed here)
FROM quay.io/strimzi/kafka:0.26.0-kafka-3.0.0
USER root:root
COPY ./plugins/ /opt/kafka/plugins/
USER 1001

The plugins directory contains all connectors and converters we need. In our case the requirement was to read from mysql and use the Schema Registry with protobuf serialisation, so we need the debezium mysql plugin and the protobuf converter.
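With these requirements, the plugins directory might look roughly like this (a sketch — the exact directory and jar names depend on the artifact versions you download):

```
plugins/
├── debezium-connector-mysql/
│   ├── debezium-connector-mysql-<version>.jar
│   └── ... (dependency jars)
└── kafka-connect-protobuf-converter/
    └── ... (converter jars)
```

Each subdirectory becomes a separate plugin on the Connect plugin path.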

Let’s build the image:

docker build -t gkocur/kafka-connect-debezium-protobuf:1 .

and push it to the docker hub:

docker push gkocur/kafka-connect-debezium-protobuf:1

Although it’s possible (and sometimes useful) to build the docker image manually as described above, it’s now possible to automate this process using Strimzi. You can simply add a list of plugins to the KafkaConnect definition and it will download the needed artifacts and bake them into the docker image.

The KafkaConnect yaml file can look like this:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  build: # (1)
    output: # (2)
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins: # (3)
      - name: debezium-postgres-connector
        artifacts:
          - type: zip
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.zip
            sha512sum: ef1620547e6ddf5be010271849b6a87a19f6e6beee93b379c80883815b8f37ec5137095b2c99975d7704cbf957e6a33d76c61109582cad57c7cbbfae43adc86c
      - name: camel-timer
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-timer-kafka-connector/0.8.0/camel-timer-kafka-connector-0.8.0-package.tar.gz
            sha512sum: c0102700ae176b1ef078bdb86d640935a1333865e58b870f6b0ef3d9dad067221122aee6bf52852dad209a7c2051f359e0434fc7fbd783fb50e21707c7577ef9
      - name: echo-connector
        artifacts:
          - type: jar
            url: https://github.com/scholzj/echo-sink/releases/download/1.1.0/echo-sink-1.1.0.jar
            sha512sum: b7da48d5ecd1e4199886d169ced1bf702ffbdfd704d69e0da97e78ff63c1bcece2f59c2c6c751f9c20be73472b8cb6a31b6fd4f75558c1cb9d96daa9e9e603d2

You can find more details about it in this blog post on Strimzi webpage.


Schema Registry

I’ll use the helm chart provided as a part of the Confluent Platform. There is only the binary helm package with the whole platform, but I use Flux, so I’m able to install the helm chart directly from the source.

Let’s create the GitSource:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: confluent
  namespace: flux-system
spec:
  interval: 1m0s
  url: https://github.com/confluentinc/cp-helm-charts
  ref:
    branch: master

And the HelmRelease which is installed from this source:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: schema-registry
  namespace: kafka
spec:
  interval: 1m
  chart:
    spec:
      chart: ./charts/cp-schema-registry
      sourceRef:
        kind: GitRepository
        name: confluent
        namespace: flux-system
  releaseName: schema-registry
  values:
    prometheus:
      jmx:
        enabled: false
    kafka:
      bootstrapServers: kafka-cluster-kafka-bootstrap:9092
    fullnameOverride: sr

The GitSource and HelmRelease are CRDs provided by FluxCD. If you are not familiar with it — try it, it’s a great piece of software!

Creating KafkaConnect

Let’s prepare the kafkaconnect object which uses the custom docker image.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: connect-cluster
  namespace: kafka
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  image: gkocur/kafka-connect-debezium-protobuf:1
  replicas: 1
  bootstrapServers: kafka-cluster-kafka-bootstrap:9092
  config:
    config.storage.replication.factor: 1
    offset.storage.replication.factor: 1
    status.storage.replication.factor: 1
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      - name: connector-config
        secret:
          secretName: mysql-credentials

A few comments about it:

  1. Note the annotation strimzi.io/use-connector-resources: "true". It tells the operator to manage only connectors defined as kafkaconnector manifests! If you create a connector with the REST API while this annotation is present and set to true, such a connector will be deleted.
  2. The config.providers, config.providers.file.class, and externalConfiguration settings allow us to provide credentials in a native k8s way — as a secret which is mounted inside the Connect pods, so we can reference it inside the connector configuration. A common and secure way to pass secrets.
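The mysql-credentials secret referenced above could look like the sketch below — the properties file name and keys are assumptions here, and must match whatever the connector configuration references:

```
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
  namespace: kafka
stringData:
  mysql-credentials.properties: |
    username=debezium
    password=dbz
```

Strimzi mounts it under /opt/kafka/external-configuration/connector-config/ inside the Connect pods, where the FileConfigProvider can read it.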

After adding this manifest to the gitops repository we have the kafka connect cluster up and running:

$ kubectl get kafkaconnect -n kafka
NAME              DESIRED REPLICAS   READY
connect-cluster   1                  True

$ kubectl -n kafka get pods -l strimzi.io/cluster=connect-cluster
NAME                                      READY   STATUS    RESTARTS   AGE
connect-cluster-connect-66cb786d6-l7t7j   1/1     Running   0          3m4s

But Connect without a connector does… nothing. We have to add the connector and configure it.

Creating Kafka Connector

Fortunately — we can create a connector as a kubernetes manifest:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: "inventory-connector"
  namespace: kafka
  labels:
    strimzi.io/cluster: connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql.mysql
    database.port: "3306"
    # file and key names below are assumed — they must match the mounted secret
    database.user: "${file:/opt/kafka/external-configuration/connector-config/mysql-credentials.properties:username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/mysql-credentials.properties:password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "kafka-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
    key.converter: io.confluent.connect.protobuf.ProtobufConverter
    key.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    value.converter: io.confluent.connect.protobuf.ProtobufConverter
    value.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081

Let’s analyse it. The most important part is the config section — it’s the connector configuration in yaml format. The content of this part depends on the plugin we use; in our case — the Debezium MySQL connector — it’s documented here. The interesting part is the credentials — they are taken from the properties file mounted from the secret, so we don’t need to include credentials in the gitops repository, which is always a very bad practice.

Please note the strimzi.io/cluster label — it tells the operator on which Connect cluster this connector should run (yes, we can have multiple!).

Let’s commit this manifest to gitops repo and wait a while.
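One way to verify that the operator picked the connector up is to query its custom resource (the exact output columns may differ between Strimzi versions):

```
$ kubectl get kafkaconnector -n kafka
$ kubectl describe kafkaconnector inventory-connector -n kafka
```

The connector and task state reported by Connect shows up in the resource's status section of the describe output.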

If everything went well we should see the new kafka topics:

$ bin/kafka-topics.sh --bootstrap-server kafka-cluster-kafka-bootstrap.kafka:9092 --list

Let’s check the content of one of the topics using the protobuf console consumer (please note — you will need the Confluent Platform installed):

$ bin/kafka-protobuf-console-consumer  --bootstrap-server kafka-cluster-kafka-bootstrap.kafka:9092 --topic dbserver1.inventory.customers --from-beginning --property schema.registry.url=http://schema-registry-cp-schema-registry.kafka:8081


If something goes wrong, one can restart the connector task or the whole connector by adding an annotation to the connector manifest.

To restart the connector add:

  strimzi.io/restart: "true"

To restart the task with id=0 add:

  strimzi.io/restart-task: "0"

That’s all!

Using the Strimzi Kafka Operator we can easily run not only the Kafka cluster but also the Kafka Connect cluster. The whole Connect cluster is managed using k8s manifests, so we can describe it as code and store it in the gitops repository.
BTW — if you are interested in introducing Kafka, Kafka Connect, the GitOps model, the everything-as-code approach, or any other technology mentioned in this article — please contact us! We’ll be more than happy to help your organization.


Actually, you don’t have to build the connector image manually — the Strimzi operator can do it for you! It’s described in the excellent Strimzi documentation, and I’ll leave it as an exercise for the reader :)

