Server Clustering

This guide provides comprehensive instructions for implementing server clustering in your Joget DX environment. By clustering servers, you can enhance your applications' performance, scalability, and availability. It outlines the requirements, architecture, and configuration steps for a robust server clustering infrastructure.

Requirements

With the integration of clustering features into Joget DX 8, setting up clustering is now more accessible and streamlined, simplifying the setup process and improving availability. To implement clustering effectively, the following components need to be prepared and configured:

  • Load Balancers: Essential for distributing user requests across servers to balance load and enhance availability.
  • Application Servers: These form the backbone of the cluster, handling the processing of applications.
  • Shared File Directory: A centralized location accessible by all servers in the cluster, used for storing shared data and resources.
  • Shared Database: A database setup that supports concurrent access from multiple application servers to ensure data consistency and integrity.

Architecture

Clustering architecture can vary based on specific needs and environments. However, the fundamental concepts remain consistent. The architecture outlined in this document includes the following key components:

  • Centralized Load Balancing: A load balancer to distribute incoming traffic and requests efficiently among the servers.
  • Replicated Application Servers: Multiple application servers configured to handle requests in a synchronized manner, ensuring no single point of failure.
  • Shared Resources: Both the file directory and the database are configured to be accessible by all application servers, ensuring that all nodes have access to the same data for processing and response.

Deployment and configuration

The following steps outline the process for setting up Joget clustering. The configuration steps may vary depending on the products used at each layer.

Note that Joget LEE requires minimal configuration, with most setup tasks performed on separate layers. Ensure you have sufficient expertise in the products you choose to use.

Pre-deployment requirements

Before you begin the clustering installation, ensure these prerequisites are met:

  1. Shared File Directory
    A common directory accessible by the application servers with read/write permissions. This directory stores shared configuration files, system-generated files, and uploaded files. Confirm that the shared directory is mounted on the application servers and that files can be accessed with the necessary permissions.

  2. Shared Database
    A common database accessible by the application servers with permissions to select, update, delete, create, and alter tables. Confirm connectivity and querying capabilities from the application servers.

  3. Application Servers
    Install a Java web application server on each cluster server. Confirm each server’s correct installation and accessibility via a web browser.

  4. Session Replication
    Configure session replication across the application servers and the network, and verify that it is set up correctly on each server.

  5. Load Balancer
    Install and configure a load balancer (hardware or software) to direct traffic to the application servers for requests starting with /jw. Ensure it directs web traffic correctly.

Joget clustering configuration

Once the pre-deployment requirements are verified:

Datasource configuration

Configure the data source properties in the shared directory.

  1. Copy all files and directories from the wflow directory of a Joget LEE bundle to the shared file directory.

  2. Edit app_datasource-default.properties to set the database connection settings for the shared database. For instance, for MySQL, modify the values as shown below:

    workflowDriver=com.mysql.jdbc.Driver
    workflowUrl=jdbc:mysql://host:port/database_name?characterEncoding=UTF-8
    workflowUser=username
    profileName=
    workflowPassword=password

Application deployment and configuration

Deploy the Joget WAR file to the application servers and configure startup properties to point to the shared directory.

  1. Deploy jw.war from the Joget installation bundle to each application server. For Apache Tomcat, copy the files into the webapps directory.

  2. For Joget DX 8, wflow-cluster.jar is pre-included and configured in the default startup script. Modify JAVA_OPTS in the startup script as necessary, particularly regarding the shared directory path:

    export JAVA_OPTS="-Xmx1024M -Dwflow.home=/shared_directory_path -javaagent:/shared_directory_path/wflow-cluster.jar -javaagent:/path_to/lib/aspectjweaver-1.9.7.jar -javaagent:/shared_directory_path/glowroot/glowroot.jar"
  3. Include -Dwflow.name= before -javaagent:/shared_directory_path/wflow-cluster.jar if your nodes are on the same server. Example: -Dwflow.name=node1
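
    As an optional convenience (a sketch, not part of the default bundle's setup), when deploying jw.war onto a standalone Apache Tomcat you can place the same JAVA_OPTS in Tomcat's bin/setenv.sh, which catalina.sh sources automatically at startup, instead of editing the startup script directly:

    # bin/setenv.sh - sourced by catalina.sh on startup; adjust paths to your shared directory
    export JAVA_OPTS="-Xmx1024M -Dwflow.home=/shared_directory_path -javaagent:/shared_directory_path/wflow-cluster.jar -javaagent:/path_to/lib/aspectjweaver-1.9.7.jar -javaagent:/shared_directory_path/glowroot/glowroot.jar"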

License activation

Activate the license for each server. Each server requires a separate license activation based on its unique system key. For more information, see the Set Up Your Joget DX Enterprise License guide.

  1. For each application server, use a web browser to access the Joget web console directly (bypassing the load balancer) via http://server1:8080/jw/web/console/home.
  2. Request and activate the license using the link in the web console footer.

Post-deployment testing

After configuring clustering, test the setup by accessing the load balancer with a web browser to ensure everything functions correctly.
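
A quick command-line alternative (a sketch; replace load_balancer_host with the address of your load balancer) is to request the web console through the load balancer and confirm that an HTTP response is returned, typically 200 or a redirect to the login page:

    curl -I http://load_balancer_host/jw/web/console/home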

Sample installation and configuration

This guide outlines the installation process using the following products:

  • Joget DX 8 EE
  • Load Balancer: Apache HTTP Web Server 2.4 with mod_proxy and mod_balancer running on Ubuntu 18.04
  • Application Servers: Apache Tomcat 8.5 running on Ubuntu 18.04
  • Shared File Directory: NFS on Ubuntu 18.04
  • Shared Database: MySQL 5.7 on Ubuntu 18.04

Important
This document is not a comprehensive installation guide and omits production-level details such as user permissions, network, and database security. Ensure your system, network, and database administrators manage these aspects.

Create a shared file directory

This step involves creating a shared directory that serves as a centralized repository for data and resources shared among multiple application servers. This is essential to ensure consistency and availability of shared resources across cluster nodes.

Follow these steps to create a shared file directory:

  1. Install NFS on your file server:

    sudo apt-get install portmap nfs-kernel-server
  2.  Create and set permissions for the shared directory:
    sudo mkdir -p /export/wflow
    sudo chown nobody:nogroup /export/wflow
  3. Configure NFS to export the directory:

    • Edit /etc/exports to include:
      /export/wflow 192.168.1.0/255.255.255.0(rw,no_subtree_check,async)
    • Execute the following commands to export the shares and restart NFS service:
      sudo exportfs -ra
      sudo service nfs-kernel-server restart
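    • To confirm the export is active (a quick check, run on the file server), list the current exports and verify that /export/wflow appears with the expected options:
      sudo exportfs -v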

Mount the shared directory on application servers

Once the shared file directory is created, it must be mounted on all application servers that will be part of the cluster. This allows all servers to access the same data and resources, facilitating synchronization and collaboration among cluster nodes.

Here are the steps to mount the shared directory on application servers:

  1. Install the NFS client on application servers:

    apt-get install nfs-common
  2.  Prepare the mount point and set permissions:
    sudo mkdir -p /opt/joget/shared/wflow
    sudo chmod 777 /opt/joget/shared/wflow
  3. Mount the shared directory and test permissions:
    sudo mount -t nfs wflow:/export/wflow /opt/joget/shared/wflow
    echo test123 > /opt/joget/shared/wflow/test.txt
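  4. Optionally, make the mount persistent across reboots (a sketch; assumes the NFS server is reachable under the hostname wflow used above) by adding this entry to /etc/fstab on each application server:
    wflow:/export/wflow /opt/joget/shared/wflow nfs defaults 0 0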

Create a shared database

In addition to the shared file directory, a shared database must be configured to allow concurrent access from multiple application servers. This ensures data consistency and integrity across cluster nodes, which is crucial for the efficient operation of distributed applications.

To create a shared database, perform the following steps:

  1. Install MySQL:

    Refer to the detailed MySQL server installation guide if needed.

    sudo apt-get install mysql-server
  2. Create the database:

    • Open MySQL as root:
      mysql -u root
    • Create a new database named jwedb:
      create database jwedb;
      quit
  3. Populate the database:
    Load the Joget database schema into the newly created database.
    mysql -uroot jwedb < /path/to/jwdb-mysql.sql
  4. Set database permissions:

    • Re-enter MySQL as root:
      mysql -u root
    • Grant full permissions to the user joget and password joget:
      grant all privileges on jwedb.* to 'joget'@'%' identified by 'joget';
      flush privileges;
      quit
  5. Configure MySQL for remote connections:

    • Edit the MySQL configuration file to allow connections from remote hosts:
      sudo vim /etc/mysql/my.cnf
    • Comment out the bind-address line:
      #bind-address = 127.0.0.1
    • Restart the MySQL service to apply changes:
      sudo service mysql restart
  6. Test remote database connectivity:
    Verify that the application server can connect to the database:
    mysql -h database_host -u joget -p
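    You can also confirm that the schema was loaded by listing the tables in jwedb (the tables created from jwdb-mysql.sql should appear):
    mysql -h database_host -u joget -p jwedb -e "SHOW TABLES;"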

Deploy application servers

This step covers the deployment of application servers in the cluster environment, which involves installing and configuring the application servers on each cluster node so they are ready to execute and manage distributed applications.

Follow these steps to deploy application servers:

  1. Install Apache Tomcat:

    • Create a directory for Tomcat on each application server:
      sudo mkdir -p /opt/joget/
    • Extract the Tomcat package:
      sudo tar xvfz apache-tomcat-8.5.41.tar.gz -C /opt/joget/
  2. Start Apache Tomcat:

    • Change to the Tomcat installation directory:
      cd /opt/joget/apache-tomcat-8.5.41
    • Launch Tomcat:
      sudo ./bin/catalina.sh start
  3. Verify Server Accessibility:
    Confirm that each server is accessible via a web browser at http://server:8080/jw.
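    From the command line, the same check can be scripted (a sketch; substitute each application server's hostname) and should return an HTTP status rather than a connection error:
    curl -I http://server1:8080/jw/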

Configure application server session replication

This process ensures that user sessions are shared and synchronized across cluster nodes, providing service continuity and a consistent user experience across all application instances.

Configure application server session replication by following these steps:

  1. Edit Tomcat Configuration for Clustering:

    • Navigate to the Tomcat config directory and modify server.xml:
      sudo vim /opt/joget/apache-tomcat-8.5.41/conf/server.xml
    • Add jvmRoute="node01" to the Engine tag and uncomment the Cluster tag:
      <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
  2. Configure Local Domain IP:

    • Ensure your server's hostname is correctly mapped to the corresponding IP address. To do this, edit the /etc/hosts file and add the following line, where 'server1' is the server name and '192.168.1.10' is the IP address:
      192.168.1.10 server1
  3. Verify Multicast Configuration:
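    • The default SimpleTcpCluster membership service discovers cluster nodes over multicast (Tomcat's defaults are group 228.0.0.4 on UDP port 45564, with session data exchanged over a TCP receiver port in the 4000 range). As a quick check (a sketch; interface names and firewall tooling vary by environment), confirm that each server has joined the multicast group once Tomcat is running and that no firewall blocks this traffic between the nodes:
      ip maddr show
      sudo ufw status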

  4. Restart Tomcat and Verify Replication:

    • Go to the Tomcat directory and restart:
      cd /opt/joget/apache-tomcat-8.5.41
      sudo ./bin/catalina.sh stop
      sudo ./bin/catalina.sh start
    • Verify that session replication is working between the application servers. The catalina.out log file in apache-tomcat-8.5.41/logs should show something similar to:
      INFO: Starting clustering manager at localhost#/jw
      Jan 17, 2016 11:21:32 AM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
      INFO: Manager [localhost#/jw], requesting session state from org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4001,{127, 0, 0, 1},4001, alive=55733886, securePort=-1, UDP Port=-1, id={-57 118 -98 -98 110 -38 64 -68 -74 -25 -29 101 46 103 5 -48 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
      Jan 17, 2016 11:21:32 AM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
      INFO: Manager [localhost#/jw]; session state send at 1/17/16 11:21 AM received in 104 ms.

Configure load balancer

This step configures the load balancer to distribute network traffic evenly among the cluster's application servers. Spreading the workload across cluster nodes improves the scalability and availability of the application.

Here's how to configure the load balancer:

  1. Install Apache HTTP Server and Modules:

    sudo apt-get install apache2
    sudo a2enmod headers proxy proxy_balancer proxy_http

    If you are running Apache 2.4, you will also need to enable the following module:

    sudo a2enmod lbmethod_byrequests
  2. Create and Configure a New Virtual Host:

    • Create a configuration file for your site:
      sudo vim /etc/apache2/sites-available/jwsite.conf
    • Insert the following configuration to set up the proxy and load balancer:
      <VirtualHost *>
          DocumentRoot "/var/www/jwsite"
          ServerName localhost
          ServerAdmin support@example.com
          ErrorLog /var/log/apache2/jwsite-error.log
          CustomLog /var/log/apache2/jwsite-access.log combined
          DirectoryIndex index.html index.htm
          <Proxy balancer://wscluster>
              BalancerMember ws://server1:8080 route=node01
              BalancerMember ws://server2:8080 route=node02
              Allow from all
          </Proxy>
          ProxyPass /jw/web/applog balancer://wscluster stickysession=JSESSIONID
          ProxyPassReverse /jw/web/applog balancer://wscluster
          <Proxy balancer://cluster>
              BalancerMember http://server1:8080 route=node01
              BalancerMember http://server2:8080 route=node02
              Allow from all
          </Proxy>
          ProxyPass /jw balancer://cluster/jw stickysession=JSESSIONID
          ProxyPassReverse /jw balancer://cluster/jw
          ProxyPreserveHost On
      </VirtualHost>
    • Enable the new site and reload Apache:
      sudo a2ensite jwsite
      sudo service apache2 reload
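  3. Verify Sticky Sessions:
    As a quick check (a sketch; assumes Tomcat's default session cookie behavior), request the application through the load balancer and inspect the Set-Cookie response header. The JSESSIONID value should end with the jvmRoute suffix of the node that served the request (for example .node01 or .node02), which is what the stickysession directive uses to keep a user on the same node:
    curl -sI http://load_balancer_host/jw/web/console/home | grep -i set-cookie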

Deploy and configure Joget LEE

Finally, Joget LEE (Large Enterprise Edition) is deployed and configured in the cluster environment. This involves installing and configuring the enterprise edition of Joget to leverage the platform's development and application management capabilities fully.

Follow the steps outlined in the section Joget Clustering Configuration to deploy and configure Joget LEE. Ensure all application servers are properly clustered and the database is configured to handle multiple connections.
