[Avg. reading time: 11 minutes]

Kafka Software

Confluent Cloud offers a 30-day free trial: https://www.confluent.io/get-started/

Using Docker/Podman

Install podman-compose (via pip, Podman Desktop, or Homebrew).

Windows/Linux

pip install podman-compose --break-system-packages

macOS

brew install podman-compose

podman-compose lets you define your entire multi-container environment declaratively in a single YAML file. Use it when you:

  • Manage multiple interconnected containers
  • Develop complex applications locally
  • Need reproducible environments
  • Work in a team
  • Want simplified service management (a short command sketch follows this list)
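
Once the YAML file is in place, the day-to-day workflow reduces to a few standard compose subcommands (the kafka service name comes from the file defined in Step 2):

podman-compose up -d        # create and start all services in the background
podman-compose ps           # list services and their status
podman-compose logs kafka   # view the logs of a single service
podman-compose down         # stop and remove the containers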

Use podman directly when you:

  • Run single containers (a minimal sketch follows this list)
  • Need fine-grained control
  • Debug specific containers
  • Write scripts for automation
  • Work with container orchestration platforms
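
For contrast, here is a minimal single-container sketch using podman alone. It reuses the bitnami image and the listener/KRaft settings from the compose file in Step 2, but omits the named volume and topic defaults, so the broker's data is lost when the container is removed:

podman run -d --name kafka \
  -p 9092:9092 -p 9093:9093 \
  -e KAFKA_KRAFT_MODE=true \
  -e KAFKA_CFG_NODE_ID=1 \
  -e KAFKA_CFG_PROCESS_ROLES=broker,controller \
  -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
  -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT \
  -e ALLOW_PLAINTEXT_LISTENER=yes \
  docker.io/bitnami/kafka:latest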

Step 1: Create a project directory

mkdir kafka-demo
cd kafka-demo

Step 2: Define the Compose file

Create a new file named docker-compose.yml:

version: '3'
services:
  kafka:
    image: docker.io/bitnami/kafka:latest
    container_name: kafka
    ports:
      - "9092:9092"    # client connections
      - "9093:9093"    # controller quorum communication
    environment:
      - KAFKA_KRAFT_MODE=true
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@localhost:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_CFG_NUM_PARTITIONS=3
      - KAFKA_CFG_DEFAULT_REPLICATION_FACTOR=1
    volumes:
      - kafka_data:/bitnami/kafka

volumes:
  kafka_data:
    driver: local

Step 3: Start Kafka

podman-compose up -d

Step 4: Verification

podman container ls

# Check the logs

podman logs kafka

Step 5: Create a new Kafka Topic

# Create a topic with 3 partitions
podman exec -it kafka kafka-topics.sh \
  --create \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --partitions 3 \
  --replication-factor 1
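
To confirm the topic exists and inspect its partition layout, describe it:

# Show partition count, replication factor, and partition leaders
podman exec -it kafka kafka-topics.sh \
  --describe \
  --topic gctopic \
  --bootstrap-server localhost:9092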

Step 6: Producer

podman exec -it kafka kafka-console-producer.sh \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --property "parse.key=true" \
  --property "key.separator=:"

Step 7: Consumer (Terminal 1)

Run the same consumer command in three separate terminals. Because all three consumers join group 123 and gctopic has 3 partitions, Kafka assigns one partition to each consumer, so each terminal sees only the messages from its partition.

podman exec -it kafka kafka-console-consumer.sh \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --group 123 \
  --property print.partition=true \
  --property print.key=true \
  --property print.timestamp=true \
  --property print.offset=true


Consumer (Terminal 2)

podman exec -it kafka kafka-console-consumer.sh \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --group 123 \
  --property print.partition=true \
  --property print.key=true \
  --property print.timestamp=true \
  --property print.offset=true


Consumer (Terminal 3)

podman exec -it kafka kafka-console-consumer.sh \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --group 123 \
  --property print.partition=true \
  --property print.key=true \
  --property print.timestamp=true \
  --property print.offset=true

Consumer (Terminal 4)

Because 456 is a new consumer group, it is assigned all partitions independently of group 123, so this consumer receives every message published to the topic.

podman exec -it kafka kafka-console-consumer.sh \
  --topic gctopic \
  --bootstrap-server localhost:9092 \
  --group 456 \
  --property print.partition=true \
  --property print.key=true \
  --property print.timestamp=true \
  --property print.offset=true
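
To check which partitions each consumer was assigned and how far each group has read, describe the consumer groups (kafka-consumer-groups.sh ships in the same container image):

# Show members, assigned partitions, current offsets, and lag
podman exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group 123

podman exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group 456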

Kafka messages can be produced and consumed in many ways:

  • Java
  • Python
  • Go
  • CLI
  • REST API
  • Spark

and so on.

Similar tools

Amazon Kinesis

A cloud-based service from AWS for real-time data processing over large, distributed data streams. Kinesis is often compared to Kafka but is fully managed, making it easier to set up and operate at scale. It’s tightly integrated with the AWS ecosystem.

Microsoft Azure Event Hubs

A highly scalable data streaming platform and event ingestion service, part of the Azure ecosystem. It can receive and process millions of events per second, making it suitable for big data scenarios.

Google Cloud Pub/Sub

A scalable, managed, real-time messaging service that allows messages to be exchanged between applications. Like Kinesis, it’s a cloud-native solution that offers durable message storage and real-time message delivery without the need to manage the underlying infrastructure.

RabbitMQ

A popular open-source message broker that supports multiple messaging protocols. It’s designed for scenarios requiring complex routing, message queuing, and delivery confirmations. It’s known for its simplicity and ease of use but is traditionally suited to message queuing rather than log streaming.
