Joget Clustering using Tomcat Session Replication in Unicast Mode

This guide walks you through setting up non-sticky session clustering for Joget using Tomcat's built-in Unicast mode in a Docker-based environment.

Whether you're evaluating Joget for high availability or learning about clustering setups, this article gives you a complete, working example to build on.

Clustering Terminology and Core Concepts

Before we begin, here are the main components involved in setting up and maintaining a Joget cluster, including nodes, communication modes, and session management. Note that these terms will be used extensively throughout the article.

Cluster: A group of Joget nodes that collaborate to distribute workloads and replicate users' session data, ensuring Joget remains accessible even if one or more Joget nodes become unavailable.

Session Replication: The process of copying session data between Joget nodes in a cluster.

Failover: Automatic rerouting of a user request to another Joget node when the current Joget node becomes unavailable.

Load Balancing: Distribution of incoming workload across multiple Joget nodes to ensure performance and reliability.

Multicast: A communication method that allows Joget nodes to automatically discover each other by broadcasting messages to a shared multicast IP address.

Unicast: Direct communication between known IP addresses.

Node: A single Joget instance participating in a cluster.

Multicast IP Address: A reserved IP address that allows a single Joget node to broadcast messages to multiple Joget nodes at once, enabling automatic node discovery within a cluster.

Why choose Unicast?

In Tomcat clustering, Unicast mode means each node talks directly to the other nodes, with the IP address of every member explicitly defined in the cluster configuration. This differs from Multicast, where node discovery happens automatically. Unicast is typically the better choice when:

  • You're working in cloud or containerized environments like AWS, Azure, or GCP, where multicast is often blocked, disabled, or unsupported.
  • You want deterministic cluster membership with static IP assignments.
  • You need fine-grained control over session replication.

Feature              | Multicast                    | Unicast
Node Discovery       | Automatic                    | Manual
Communication Type   | Broadcast (UDP Multicast)    | Point-to-point (TCP)
Network Overhead     | Higher (due to broadcasting) | Lower
Cloud Compatibility  | Poor                         | Excellent
Scalability          | Easy for small clusters      | Better control for large clusters
Configuration Effort | Minimal                      | More manual effort

Context

Before diving in, here’s what you need to be aware of:

Network Connectivity

All Joget instances must be able to communicate with each other directly over the network. In this guide, the Joget nodes and the reverse proxy run on the same subnet and can reach each other directly.
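
Once the containers are up (later in this guide), you can verify this reachability yourself. This is a minimal sketch that assumes the container names and IP addresses defined in docker-compose.yml below; the slim base image does not ship ping, so it uses bash's built-in /dev/tcp instead:

# Check from inside joget-1 that joget-2 and the proxy answer on port 8080
docker exec joget-1 bash -c 'timeout 3 bash -c "echo > /dev/tcp/172.20.0.12/8080" && echo "joget-2 reachable"'
docker exec joget-1 bash -c 'timeout 3 bash -c "echo > /dev/tcp/172.20.0.10/8080" && echo "joget-proxy reachable"'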

Session Replication

Tomcat stores session data in memory. Each node replicates user sessions to and from the other nodes to support failover, which increases memory usage on every node.
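
Once the cluster built later in this guide is running, a rough way to observe this is to watch container memory while logging in from several browsers; both nodes grow because each holds a copy of every session:

# Snapshot of per-container CPU and memory usage
docker stats --no-stream joget-1 joget-2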

Cluster Architecture Overview

We’ll use the following setup:

Component   | Description
joget-1     | Joget DX instance (Node 1)
joget-2     | Joget DX instance (Node 2)
joget-db    | MySQL database
joget-proxy | Reverse proxy, also known as the load balancer
joget-data  | Shared volume consumed by joget-1 and joget-2 to host the shared-wflow directory; works similarly to NFS in a classic on-premise environment
joget-net   | Docker network; all containers reside in the same subnet

Cluster Architecture Diagram

Note: Familiarity with Docker, networking, Tomcat and Joget configuration is required.

Install Docker

The easiest way is to install Docker Desktop from https://www.docker.com/products/docker-desktop/.
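
After installation, confirm that both the Docker engine and the Compose v2 plugin are available:

# Verify the Docker engine and the Compose plugin
docker --version
docker compose version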

Docker Directory Structure

joget-cluster-demo/
├── docker-compose.yml
├── nginx.conf
└── joget/
    ├── Dockerfile
    ├── server.xml
    └── joget-enterprise-linux-x.x.x.tar.gz   <-- manually downloaded
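
The Joget tarball is not fetched during the build; download it into the joget/ folder beforehand. The URL pattern below mirrors the comment in the Dockerfile and assumes the enterprise edition, version 8.2.0 (adjust it to whatever you pick on https://download.joget.org):

# Assumption: enterprise edition, version 8.2.0
cd joget-cluster-demo/joget
curl -LO https://download.joget.org/enterprise/joget-enterprise-linux-8.2.0.tar.gz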

Docker Files

docker-compose.yml
# Warning:
# These configurations are meant for testing or learning clustering basics.
# When configuring for production usage, please refer to the respective documentation(s) from official sources for best practices and recommended settings.
services:
  joget-db:
    image: mysql:8.0-debian
    container_name: joget-db
    hostname: joget-db
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      joget-net:
        ipv4_address: 172.20.0.20
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot"]
      interval: 5s
      timeout: 3s
      retries: 10

  joget-1:
    image: joget:dx
    build:
      context: ./joget
      args:
      # Please consult https://download.joget.org for the edition and version
        EDITION: enterprise
        VERSION: 8.2.0
    container_name: joget-1
    hostname: joget-1
    volumes:
      - shared-data:/root/joget/shared-wflow
    networks:
      joget-net:
        ipv4_address: 172.20.0.11
    depends_on:
      - joget-db
    healthcheck:
      test: ["CMD", "test", "-f", "/root/joget/shared-wflow/.setup_completed"]
      interval: 5s
      timeout: 20s
      retries: 10
    extra_hosts:
      - "localhost:172.20.0.10"

  joget-2:
    image: joget:dx
    container_name: joget-2
    hostname: joget-2
    volumes:
      - shared-data:/root/joget/shared-wflow
    networks:
      joget-net:
        ipv4_address: 172.20.0.12
    depends_on:
      joget-1:
        condition: service_healthy
    extra_hosts:
      - "localhost:172.20.0.10"

  joget-proxy:
    image: nginx:bullseye
    container_name: joget-proxy
    hostname: joget-proxy
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - joget-1
      - joget-2
    networks:
      joget-net:
        ipv4_address: 172.20.0.10
    ports:
      - "8080:8080"

networks:
  joget-net:
    name: joget-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  shared-data:
    name: joget-data
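
Before moving on, it is worth confirming that the Compose file parses cleanly and that the static IP addresses resolve as expected:

# Render the fully-resolved configuration; any YAML or schema error is reported here
docker compose config
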
nginx.conf
upstream tomcat_cluster {
    server 172.20.0.11:8080;
    server 172.20.0.12:8080;
}

server {
    listen 8080;
    underscores_in_headers on;

    location = / {
        return 302 /jw/;
    }

    location /jw/ {
        proxy_pass http://tomcat_cluster/jw/;

        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for websocket connectivity
        proxy_http_version  1.1;
        proxy_set_header    Upgrade $http_upgrade;
        proxy_set_header    Connection "upgrade";

        proxy_set_header X-NginX-Proxy  true;
        proxy_redirect off;

        proxy_connect_timeout 120s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        send_timeout 120s;
    }

    location / {
        return 404;
    }
}
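
Once the proxy container is running, the mounted configuration can be syntax-checked in place:

# nginx validates the configuration mounted from ./nginx.conf
docker exec joget-proxy nginx -t
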
joget/Dockerfile
# Warning:
# These configurations are meant for testing or learning clustering basics.
# When configuring for production usage, please refer to the respective documentation(s) from official sources for best practices and recommended settings.
FROM openjdk:11-jre-slim-bullseye

ARG EDITION
ENV EDITION=$EDITION

ARG VERSION
ENV VERSION=$VERSION

# Do not build if EDITION and VERSION are not set
RUN test -n "$EDITION" && test -n "$VERSION" || (echo "ERROR: EDITION and VERSION build args are required!" && exit 1)

ENV JOGET="joget-$EDITION-linux-$VERSION"
RUN echo "Building $JOGET"

RUN apt-get update && \
    apt-get install -y \
      tar \
      mariadb-client \
      git && \
    rm -rf /var/lib/apt/lists/*

ENV HOME=/root
WORKDIR $HOME

# NOTE: Please refer to docker-compose.yml for the tarball version to download
# 1- Manually download https://download.joget.org/enterprise/$JOGET.tar.gz
# 2- Put downloaded tarball in the build context (joget folder) before building
COPY $JOGET.tar.gz $HOME

# Extract joget tarball
RUN tar -zxf $JOGET.tar.gz && \
    rm -f $JOGET.tar.gz && \
    mv $JOGET joget

# copy Tomcat clustered settings
COPY server.xml $HOME
RUN mv $HOME/server.xml $HOME/joget/apache-tomcat-*/conf/server.xml

# Remove existing deployed apps
RUN rm -rf $HOME/joget/apache-tomcat-*/webapps/*/

# Replace wflow with shared-wflow
RUN sed -i 's|\./wflow/|./shared-wflow/|g' $HOME/joget/*tomcat*.sh && \
    sed -i 's|wflow|shared-wflow|g' $HOME/joget/build.xml

# Create joget startup script
RUN cat <<\EOF > "/usr/local/bin/joget-entrypoint.sh" && chmod +x "/usr/local/bin/joget-entrypoint.sh"
#!/bin/bash

WFLOW="$HOME/joget/wflow"
SHARED_WFLOW="$HOME/joget/shared-wflow"
SETUP_STARTED="$SHARED_WFLOW/.setup_started"
SETUP_COMPLETED="$SHARED_WFLOW/.setup_completed"
DB_HOST="172.20.0.20"

move_contents_from_wflow_to_shared_wflow() {
  cd $HOME/joget
  mv $WFLOW/* $SHARED_WFLOW/
  rm -rf "$WFLOW"
}

initialize_shared_db() {
  cd $HOME/joget
  
  until mysqladmin ping -h $DB_HOST -u root -proot --silent; do
    echo "Waiting for Database.."
    sleep 2
  done

  ./apache-ant-*/bin/ant -d setup -Ddb.host=$DB_HOST -Ddb.port=3306 -Ddb.name=jwdb -Ddb.user=root -Ddb.password=root -Dprofile.name=jwdb
}

remove_unused_packages() {
  apt-get autoremove -y \
    mariadb-client \
    tar
}

# If setup not yet completed, wait for it
while [ ! -f "$SETUP_COMPLETED" ]; do
  if [ ! -f "$SETUP_STARTED" ]; then
    touch "$SETUP_STARTED"
    
    move_contents_from_wflow_to_shared_wflow
    initialize_shared_db

    touch "$SETUP_COMPLETED"
  fi
  sleep 2
done

remove_unused_packages

cd $HOME/joget

exec nohup ./*tomcat*.sh run
EOF

ENTRYPOINT ["joget-entrypoint.sh"]
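
docker compose build takes care of this image automatically, but it can also be built on its own when iterating on the Dockerfile. The build args mirror those in docker-compose.yml:

# Build the Joget image by itself (same build args as docker-compose.yml)
docker build \
  --build-arg EDITION=enterprise \
  --build-arg VERSION=8.2.0 \
  -t joget:dx ./joget
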
joget/server.xml
<?xml version="1.0" encoding="UTF-8"?>

<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               compression="on"
               useSendfile="false"
               redirectPort="8443" />

    <Engine name="Catalina" defaultHost="localhost">
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6">
            <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>

            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
                <Membership className="org.apache.catalina.tribes.membership.StaticMembershipService">
                    <!--
                        Define your joget nodes here.
                        host and uniqueId must be unique for each node.
                    -->
                    <Member className="org.apache.catalina.tribes.membership.StaticMember"
                            port="20000"
                            host="172.20.0.11"
                            uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11}"/>
                    <Member className="org.apache.catalina.tribes.membership.StaticMember"
                            port="20000"
                            host="172.20.0.12"
                            uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,12}"/>
                </Membership>

                <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                        address="auto"
                        port="20000"
                        autoBind="100"
                        selectorTimeout="5000"
                        maxThreads="6"
                        />

                <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                    <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
                </Sender>

                <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
                <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
                <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
            </Channel>

            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
            <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>

        <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
        </Realm>

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
            <!--
                To get original requestor ip address.
                Without this, tomcat will use the proxy ip address instead.
                httpServerPort and httpsServerPort should match with proxy's listening ports.
            -->
            <Valve className="org.apache.catalina.valves.RemoteIpValve"
                    internalProxies="\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"
                    remoteIpHeader="x-forwarded-for"
                    proxiesHeader="x-forwarded-by"
                    protocolHeader="x-forwarded-proto"
                    httpServerPort="8080"
                    httpsServerPort="8443" />
        </Host>
    </Engine>
  </Service>
</Server>
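
After both nodes are up, Tomcat writes membership events to its logs when the static members verify each other. The exact wording varies by Tomcat version, but a rough check like the following should surface them:

# Look for cluster membership messages from Tomcat (wording varies by version)
docker logs joget-1 2>&1 | grep -iE "member|cluster" | tail -n 20
docker logs joget-2 2>&1 | grep -iE "member|cluster" | tail -n 20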
For additional details, refer to the official Apache Tomcat clustering documentation (https://tomcat.apache.org/tomcat-9.0-doc/cluster-howto.html).

Build and Launch Joget Cluster

Open your terminal and run:

cd joget-cluster-demo
docker compose build
docker compose up -d
Note: Initial setup may take a few minutes.
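
To follow the first-time setup performed by joget-1 (copying wflow to the shared volume and creating the jwdb database), tail the logs and watch the container health state:

# Watch startup progress and container health
docker compose ps
docker compose logs -f joget-1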

On Windows, before executing any Docker commands, make sure Docker Engine is running. Check its status using the following command in a PowerShell terminal with Administrator privileges:

Get-Service com.docker.service
 

Troubleshooting joget-1 Unhealthy Error

If your Joget cluster fails to start and you encounter an error like the one below, don’t worry.

 docker compose up -d
[+] Running 6/6
 ✔ Network joget-net      Created                                                                        0.2s
 ✔ Volume "joget-data"    Created                                                                        0.1s
 ✔ Container joget-db     Started                                                                        2.9s
 ✘ Container joget-1      Error                                                                         56.5s
 ✔ Container joget-2      Created                                                                        0.2s
 ✔ Container joget-proxy  Created                                                                        0.3s
dependency failed to start: container joget-1 is unhealthy

You can still get your Joget cluster up and running by relaxing the joget-1 healthcheck timings to better suit your machine.
Follow the steps below to adjust them; a direct way to inspect the healthcheck output is shown after the steps.

  1. Shutdown the cluster:
    docker compose down --volumes --remove-orphans
  2. Edit docker-compose.yml, modify the following:
    joget-1:
        healthcheck:
          interval: 60s    # how often to check
          timeout: 5s      # how long to wait before considering a check failed
          retries: 10      # number of retries
    
    With these values, joget-1 is given interval * retries = 60s * 10 = 600s (10 minutes) to become healthy.
  3. Retry to start up the cluster:
    docker compose up -d
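
As mentioned above, you can also inspect the healthcheck state directly; Docker records the status and the output of the most recent probes:

# Show the current health status and recent probe results for joget-1
docker inspect --format '{{json .State.Health}}' joget-1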

Access the Joget Cluster

Open your browser and go to http://localhost:8080.

Note
On the first visit, it will take a few minutes for Joget to set things up.

Then you'll be redirected to the default landing page.

Next, log in using the default Administrator credentials:

Username: admin
Password: admin

Load Balancing Distribution

To check that the cluster is configured correctly:

  1. Click the pencil icon button at the bottom-right corner.
  2. Go to the Monitor button on the bottom navigation bar.
  3. Click on System Logs.
  4. At the Cluster Nodes section, you should see:
    • joget-1 and joget-2 nodes are available for selection.
    • Either one of the nodes is marked as Current.
  5. Optionally, you can upload a sample app and open the App Composer to edit forms, lists, UIs, and processes.
    This helps further verify that Joget is functioning as usual.

When System Logs (http://localhost:8080/jw/web/console/monitor/slogs) is refreshed multiple times,
the Current node switches between joget-1 and joget-2.

This is exactly what we want to see: it confirms the cluster is functioning correctly, with requests load-balanced across the nodes.
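
You can also sanity-check the setup from the command line. The sketch below assumes the standard Joget login path /jw/web/login; it simply confirms that the proxy answers and that the same session cookie keeps working across repeated requests, which is what non-sticky replication should provide:

# Assumption: /jw/web/login is reachable through the proxy
curl -s -c /tmp/joget-cookies.txt -o /dev/null -w "first request: %{http_code}\n" http://localhost:8080/jw/web/login
for i in 1 2 3 4; do
  curl -s -b /tmp/joget-cookies.txt -o /dev/null -w "request $i: %{http_code}\n" http://localhost:8080/jw/web/login
done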

Simulate Failover

A node failure scenario

Before we start, make sure your System Logs URL is http://localhost:8080/jw/web/console/monitor/slogs. Continuing from the System Logs view above, click the small green button to change the URL.

Now, let’s test a node failure by disconnecting it from the network:

  1. Check the current node in System Logs, for example joget-1 is the current node.
  2. Run this to disconnect joget-1 from network:
    docker network disconnect joget-net joget-1
  3. Back in System Logs, refresh it multiple times. You’ll notice that joget-2 consistently appears as the Current node, while joget-1 is never marked as Current.
  4. Navigate further to other pages to confirm you're still logged in.
  5. Try creating/editing form data or executing processes. You should not notice any disruption even though joget-1 is offline.

When any node in the cluster goes offline, another healthy node takes over, and users don’t notice a thing. This is exactly the kind of resilience you want in a production environment.
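
You can also confirm from the Docker side that joget-1 really has left the network; only joget-2, the database, and the proxy should still be attached:

# List containers still attached to joget-net together with their IP addresses
docker network inspect joget-net --format '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'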

A node recovery scenario

Now, let's test a scenario where a node has been recovered from failure and is joining back into the cluster.

  1. Reconnect joget-1:
    docker network connect --ip 172.20.0.11 joget-net joget-1
  2. After a moment, joget-1 should rejoin the cluster.
  3. Back in System Logs, refresh it multiple times until joget-1 is again marked as Current.
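
The surviving node normally logs the returning member as well; the exact message depends on the Tomcat version, but something along these lines should show it:

# Recent membership messages on joget-2 after joget-1 rejoins
docker logs --since 5m joget-2 2>&1 | grep -i "member"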

Clean Up

When you're done exploring, shut down all Docker components with:

docker compose down --volumes --remove-orphans

Optionally, remove all resources used in this demo:

docker image rm joget:dx mysql:8.0-debian nginx:bullseye
docker builder prune -f
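
To double-check that nothing from the demo is left behind:

# No joget containers, volumes, or networks should remain
docker ps -a --filter "name=joget"
docker volume ls --filter "name=joget-data"
docker network ls --filter "name=joget-net"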

Summary

You’ve now successfully:

  • Set up clustered Joget in Docker
  • Verified load balancing distribution
  • Simulated failover and recovery
  • Ensured uninterrupted user experience during node changes

This shows the power of high availability and non-sticky session handling in a clustered environment.

The setup in this demo is meant for testing or learning clustering basics. For production, ensure proper security, monitoring, and performance tuning.