Starting Kafka from scratch

Camilo Chaves
4 min read · Apr 9, 2021

IT Infra 101 (Part 1 of 100) — The Kafka Ecosystem

Objective: run one Kafka broker with ZooKeeper in Docker containers, start a console producer, and receive the message in a console consumer

Pre-Requisites: Docker must be installed and running

If you don’t know what Kafka is, remember this: “Kafka is like a messaging system in that it lets you publish and subscribe to streams of messages”. It’s a simple definition with powerful implications: Kafka works as a distributed system that runs as a scalable cluster. Still doesn’t ring a bell? Don’t worry. It will.

This 101 series will be my learning path through the Kafka ecosystem. To be honest, I am not an expert in IT infra or Kafka, but seeing what it does and who uses it (Uber, LinkedIn, etc.), I’ve decided to learn it, simply because I’m tired of writing simple apps like “Sell your Cake online” that talk to a REST web API backed by MySQL. Today you have to prepare your IT infra the right way, which means preparing it for scalability and redundancy. For that you will use at least a messaging system like RabbitMQ, or a more powerful one like Kafka, but also containers, CI/CD (in the DevOps realm), GitHub Actions, Terraform, etc.

The learning path will be fully documented on GitHub; either clone the repository, or open VS Code and create a file called basic_env.yaml with the following code:
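(The original gist isn’t embedded here, so the snippet below is a minimal sketch of such a compose file, assuming the Confluent cp-zookeeper and cp-kafka images; the image tags are illustrative, but the container names, ports, and the two environment variables discussed next match the setup described in this article.)

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1   # image/tag assumed, not from the original gist
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181            # port the topic commands below connect to
  kafka:
    image: confluentinc/cp-kafka:6.1.1       # image/tag assumed, not from the original gist
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"                        # expose the broker to the Docker host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Two listeners: one for clients inside the Docker network, one for the host
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1   # required for a single-node cluster
```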

“The KAFKA_ADVERTISED_LISTENERS variable is set to localhost:29092. This makes Kafka accessible from outside the container by advertising its location on the Docker host. Also notice that KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR is set to 1. This is required when you are running with a single-node cluster.”

Open Windows Terminal in the directory of the compose file and type: docker-compose -f basic_env.yaml up -d

Running a Docker Compose file with a different name than docker-compose.yaml

Check if the containers are running: docker-compose -f basic_env.yaml ps -a

Checking if the containers are up and running

Create a topic called test inside the container:

  • docker exec -it kafka bash
  • In the container terminal: kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test
  • Check if it was created successfully: kafka-topics --describe --zookeeper zookeeper:2181 --topic test
Entering the container and creating a topic

Now, open Windows Terminal and split the PowerShell window in two with Alt+Shift+Plus. In the left pane, create the producer, and in the right pane the consumer (commands below).
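(The producer and consumer gists aren’t embedded here; below is a minimal sketch using the console clients that ship inside the broker container, assuming the compose file above.)

```bash
# Left pane: console producer (runs inside the kafka container)
docker exec -it kafka kafka-console-producer --broker-list localhost:9092 --topic test

# Right pane: console consumer (runs inside the kafka container)
docker exec -it kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning
```

Type a line in the producer pane and press Enter; it should show up in the consumer pane.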

Congratulations, you have successfully run a basic producer and consumer messaging system! Now, what if I told you that you don’t have to create topics by hand? Yes, there’s a GUI to manage Kafka topics 😅.

First: shut down the running containers with docker-compose -f basic_env.yaml down

Go back to Visual Studio Code and either edit the basic_env.yaml file or create a new file named docker-compose.yaml, and add the code below.
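(Again, the gist isn’t embedded here; one way to sketch it, assuming the obsidiandynamics/kafdrop image, whose web UI listens on port 9000 inside the container and is mapped here to 9101 on the host as in the screenshots below, is to add a service like this to the compose file.)

```yaml
  kafdrop:
    image: obsidiandynamics/kafdrop:latest   # image assumed, not from the original gist
    container_name: kafdrop
    depends_on:
      - kafka
    ports:
      - "9101:9000"                          # Kafdrop's UI listens on 9000 in the container
    environment:
      KAFKA_BROKERCONNECT: kafka:9092        # broker address on the Docker network
```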

Adding the Kafdrop open-source GUI for topic management
  • Run: docker-compose up -d 😃 (yes, you don’t need to type the filename anymore: Compose picks up docker-compose.yaml by default)
  • Check if it’s running: docker ps -a

Open your browser at localhost:9101 and click the New button.

Kafdrop running on port 9101
Create a topic

It wouldn’t be of much use to have a messaging system driven only by console commands, right? In the next article, I’ll create a producer and a consumer in .NET Core.

Cheers!


Camilo Chaves

.NET Software Developer, Electrical Engineer, Physicist