
Apache Kafka Integration

Introduction

Apache Kafka is a high-throughput, distributed event streaming platform ideal for real-time data processing. In Joget DX 9, Kafka integration enables asynchronous background processing, improving scalability and responsiveness for enterprise-grade applications.

Why Use Kafka with Joget DX?

  • Asynchronous Processing: Offload long-running tasks to Kafka consumers, freeing up the main thread.
  • Scalability: Kafka handles high volumes of messages efficiently, making it suitable for large-scale deployments.
  • Resilience: Kafka’s distributed architecture ensures fault tolerance and message durability.
  • Performance Gains: Load tests show improved throughput and reduced response times when Kafka is used for background processing.

How Does It Work?

Joget DX with Kafka Cluster

  1. A client sends a transaction request (e.g., starting a process or completing an assignment).
  2. Minimal processing is performed up front, e.g., creating the process record.
  3. A message is sent to the Kafka topic.
  4. A response is returned to the client immediately.
  5. A listener consumes messages from the Kafka topic asynchronously.
  6. The listener performs the actual processing in the background.

Apache Kafka helps in the following ways:

  • Kafka is used as a highly scalable backlog, preventing processing and database bottlenecks.

  • Actual processing is performed asynchronously in the background.

Deploying Apache Kafka

Docker

  • Install using the official open-source Apache Kafka image from Docker Hub; this is the easiest option for local single-node development (see the sanity check after this list), e.g.
    docker run -d -p 9092:9092 --name kafka apache/kafka:latest

  • Install on Docker using the commercial Confluent Platform.
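
To verify the broker is responding, here is a quick sanity check; it assumes the container name kafka from the docker run command above and the /opt/kafka install path used by the official apache/kafka image:

    # Create a test topic inside the container started above
    docker exec -it kafka /opt/kafka/bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092
    # List topics to confirm the broker is reachable
    docker exec -it kafka /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092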

Kubernetes


Kafka Cluster on K8s using Strimzi

Deploy Kafka Cluster using Strimzi

  1. Create a namespace.

     kubectl create namespace kafka
  2. Install Strimzi.

     kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
  3. Follow the deployment.

     kubectl get pod -n kafka --watch
  4. Follow the logs.

     kubectl logs deployment/strimzi-cluster-operator -n kafka -f
  5. Create a multi-node cluster (see the readiness check after this list).

     kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-with-dual-role-nodes.yaml -n kafka
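
Before testing, wait for the cluster to become ready; the cluster name my-cluster comes from the Strimzi example YAML above:

     # Block until the Kafka custom resource reports Ready
     kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka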

Test Kafka Cluster

  1. Run producer.

     kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.45.0-kafka-3.9.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
  2. Run consumer.

     kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.45.0-kafka-3.9.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

Delete Kafka Cluster

  1. Delete the Kafka cluster and its persistent volume claims.

     kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)
     kubectl delete pvc -l strimzi.io/name=my-cluster-kafka -n kafka
  2. Delete the Strimzi operator.

     kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'

Enabling Apache Kafka on Joget DX

To enable Apache Kafka, set the -Dwflow.kafkaBroker=host:port flag in JAVA_OPTS.

The default port is 9092. On Kubernetes (k8s), the host is the <service name>.<namespace>; for example, my-cluster-kafka-bootstrap.kafka.
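
For a Joget deployment on Kubernetes, one way to pass the flag is to set JAVA_OPTS on the deployment. This is a sketch only: the deployment name joget and namespace joget are assumptions, and it presumes the container's startup script reads the JAVA_OPTS environment variable:

    # Deployment name "joget" and namespace "joget" are assumed; adjust to your setup
    kubectl set env deployment/joget -n joget JAVA_OPTS="-Dwflow.kafkaBroker=my-cluster-kafka-bootstrap.kafka:9092"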

If you want to add more JAVA_OPTS flags to your Joget setup, you can append them to the existing JAVA_OPTS line in the Joget DX startup script:

  1. Navigate to the Joget DX installation directory and open the joget-start.bat file in a text editor (for Linux or macOS, see the note after these steps).
  2. Find the line that starts with:
    set JAVA_OPTS=
  3. Add your new flags at the end, separated by spaces. For example, when adding the -Dwflow.kafkaBroker=host:port flag:
    • PLAINTEXT:
      set JAVA_OPTS=-Dwflow.kafkaBroker=host:port
    • SASL_PLAINTEXT:
      set JAVA_OPTS=-Dwflow.kafkaBroker=host:port -Dwflow.kafkaSecurity=SASL_PLAINTEXT -Dwflow.kafkaUsername=username -Dwflow.kafkaPassword=password
    • SASL_SSL:
      set JAVA_OPTS=-Dwflow.kafkaBroker=host:port -Dwflow.kafkaSecurity=SASL_SSL -Dwflow.kafkaUsername=username -Dwflow.kafkaPassword=password
    • Additional options (consumer concurrency):
      set JAVA_OPTS=-Dwflow.kafkaBroker=host:port -Dwflow.concurrency=5
    • Log configuration options:
      set JAVA_OPTS=-Dwflow.kafkaBroker=host:port -Dwflow.logger=org.joget.eventstream.kafka.KafkaLog
  4. Configure the Kafka plugin.
    1. In the App Center, navigate to Settings > Manage Plugins > Configurable Plugins.
    2. Double-click Kafka Event Stream Plugin in the list.
    3. Configure the fields and click Submit.
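
On Linux or macOS bundles, the same flags can be appended in the startup shell script instead; a minimal sketch, assuming the bundle ships a joget-start.sh that exports JAVA_OPTS:

    # joget-start.sh (script name assumed; adjust to your bundle)
    export JAVA_OPTS="$JAVA_OPTS -Dwflow.kafkaBroker=host:port"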

Configuration Settings

Kafka Plugin Configuration Properties

Fields to configure:

  • Broker: host:port of the broker (comma-delimited for multiple brokers)
  • Consumer Concurrency: Number of concurrent consumers for each message listener
  • Security:
    • No Authentication: Apache Kafka deployment without security.
    • SASL_PLAINTEXT: SASL without SSL.
    • SASL_SSL: SASL with SSL.

Kafka Plugin JVM System Properties 

Connect to Kafka without security

  • To enable the Kafka integration, set flag -Dwflow.kafkaBroker=host:port in JAVA_OPTS.

  • Default port is 9092

  • For k8s, the host is the <service name>.<namespace> e.g. my-cluster-kafka-bootstrap.kafka
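
For the Strimzi example above, the flag therefore resolves to:

    -Dwflow.kafkaBroker=my-cluster-kafka-bootstrap.kafka:9092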

Connect using SASL authentication

  • To use SASL without SSL, set the additional flags -Dwflow.kafkaSecurity=SASL_PLAINTEXT -Dwflow.kafkaUsername=username -Dwflow.kafkaPassword=password

  • To use SASL with SSL, set the additional flags -Dwflow.kafkaSecurity=SASL_SSL -Dwflow.kafkaUsername=username -Dwflow.kafkaPassword=password

Additional Configuration Options

  • To set multiple concurrent consumers for each message listener, set flag -Dwflow.concurrency=number.

Log Configuration Options

  • To use Kafka for logging, configure the wflow.logger system property e.g.
    -Dwflow.kafkaBroker=host:port -Dwflow.logger=org.joget.eventstream.kafka.KafkaLog

  • If enabled, the log will output:
    LogUtil Log implementation: org.joget.eventstream.kafka.KafkaLog
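
To confirm the logger is active, search the application server log for that line; a sketch assuming Joget runs in a Docker container named joget:

    # Container name "joget" is an assumption; adjust to your deployment
    docker logs joget 2>&1 | grep "LogUtil Log implementation"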

Security & Configuration

  • Authentication Options:
    • PLAINTEXT
    • SASL_PLAINTEXT (username/password)
    • SASL_SSL (secure username/password)
  • System Properties:
    • -Dwflow.kafkaBroker=host:port
    • -Dwflow.kafkaSecurity=SASL_SSL
    • -Dwflow.kafkaUsername=username
    • -Dwflow.kafkaPassword=password
