Joget on Azure Kubernetes Service

You can deploy, run, and scale Joget on Azure Kubernetes Service (AKS), a fully managed Kubernetes service offered by Microsoft Azure. AKS simplifies the complex tasks of container orchestration and provides scalable solutions for managing containerized applications across a cluster of machines. By leveraging AKS, you benefit from automatic updates, integrated monitoring, and scaling without needing to deeply understand the intricacies of Kubernetes management. With AKS handling your infrastructure, you can focus entirely on improving application development and operations.

If you are unfamiliar with Kubernetes, refer to the Joget on Kubernetes guide for a quick introduction.

Create a Kubernetes cluster in AKS

Using the Azure Portal:

  1. Go to Kubernetes services and select Create a Kubernetes cluster.

  2. On the Basics tab, set your Subscription, Resource Group, and enter a Kubernetes cluster name. Adjust other configurations as needed.

  3. In the Node pools tab, configure the node pools. For more information, see Create node pools for a cluster in Azure Kubernetes Service (AKS).

    This guide assumes a single node configuration.

  4. Leave default settings or make adjustments in the Access, Networking, Integrations, Advanced, and Tags tabs.

  5. Click Review + create to deploy your Kubernetes cluster.

  6. After deployment, connect to your cluster using Azure CLI or Azure Cloud Shell, as shown in the example below. Learn how to connect.
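
    For example, using Azure CLI (replace the resource group and cluster name placeholders with your own values):

    # Fetch the cluster credentials so kubectl can connect
    az aks get-credentials --resource-group <your-resource-group> --name <your-cluster-name>

    # Verify the connection by listing the cluster nodes
    kubectl get nodes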

Deploy MySQL database

Set up a MySQL database to support the Joget platform:

Apply the example YAML files below to deploy the PersistentVolume (PV), PersistentVolumeClaim (PVC), and MySQL database to the Kubernetes cluster:

  • Create persistent storage using PersistentVolume and PersistentVolumeClaim in mysql-pv.yaml:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  • Deploy the mysql-pv.yaml file:
    kubectl apply -f mysql-pv.yaml
  • Create the mysql-deployment.yaml file:
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      ports:
      - port: 3306
      selector:
        app: mysql
      clusterIP: None
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - image: mysql:8.0
            name: mysql
            env:
              # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: mysql-pv-claim
  • Deploy the MySQL image:
    kubectl apply -f mysql-deployment.yaml
  • Inspect the deployment:
    kubectl describe deployment mysql
    kubectl get pods -l app=mysql
    kubectl describe pvc mysql-pv-claim
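  • Optionally, verify that the database is reachable from inside the cluster by starting a temporary MySQL client pod (the -ppassword value assumes the example root password above):
    kubectl run -it --rm --restart=Never --image=mysql:8.0 mysql-client -- mysql -h mysql -ppassword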

For production use, modify the example YAML files, for instance by pinning a specific MySQL image version and storing credentials in Kubernetes Secrets instead of plaintext passwords, as sketched below.
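
For example, a minimal sketch of moving the root password into a Kubernetes Secret (the mysql-secret name and key are illustrative):

    apiVersion: v1
    kind: Secret
    metadata:
      name: mysql-secret
    type: Opaque
    stringData:
      mysql-root-password: password   # replace with a strong password

Then reference the Secret from the Deployment instead of a literal value:

    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-secret
          key: mysql-root-password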

Deploy shared storage in AKS

For multi-node Kubernetes clusters, allocate shared persistent storage accessible by multiple nodes:

  1. Create an Azure Ubuntu VM in the same virtual network as the AKS cluster.

  2. Set up an NFS server on the VM using this script, modifying variables as necessary:

    #!/bin/bash
    # This script should be executed on Linux Ubuntu Virtual Machine
     
    EXPORT_DIRECTORY=${1:-/export/data}
    DATA_DIRECTORY=${2:-/data}
    AKS_SUBNET=${3:-*}
     
    echo "Updating packages"
    apt-get -y update
     
    echo "Installing NFS kernel server"
     
    apt-get -y install nfs-kernel-server
     
    echo "Making data directory ${DATA_DIRECTORY}"
    mkdir -p ${DATA_DIRECTORY}
     
    echo "Making new directory to be exported and linked to data directory: ${EXPORT_DIRECTORY}"
    mkdir -p ${EXPORT_DIRECTORY}
     
    echo "Mount binding ${DATA_DIRECTORY} to ${EXPORT_DIRECTORY}"
    mount --bind ${DATA_DIRECTORY} ${EXPORT_DIRECTORY}
     
    echo "Giving 777 permissions to ${EXPORT_DIRECTORY} directory"
    chmod 777 ${EXPORT_DIRECTORY}
     
    parentdir="$(dirname "$EXPORT_DIRECTORY")"
    echo "Giving 777 permissions to parent: ${parentdir} directory"
    chmod 777 $parentdir
     
    echo "Appending bound directories into fstab"
    echo "${DATA_DIRECTORY}    ${EXPORT_DIRECTORY}   none    bind  0  0" >> /etc/fstab
     
    echo "Appending localhost and Kubernetes subnet address ${AKS_SUBNET} to exports configuration file"
    echo "/export        ${AKS_SUBNET}(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports
    echo "/export        localhost(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports
     
    nohup service nfs-kernel-server restart
  3. Configure the PersistentVolume and PersistentVolumeClaim in the azurenfsstorage.yaml file, adjusting the NFS settings as needed:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: aks-nfs
      labels:
        type: nfs
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: NFS_INTERNAL_IP
        path: NFS_EXPORT_FILE_PATH
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: aks-nfs
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:
          type: nfs

Update NFS Server Settings

Before applying the azurenfsstorage.yaml file, replace the NFS_INTERNAL_IP and NFS_EXPORT_FILE_PATH placeholders with the actual settings from your NFS server:

kubectl apply -f azurenfsstorage.yaml
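
To confirm that the volume is available and the claim binds successfully, check their status:

kubectl get pv aks-nfs
kubectl get pvc aks-nfs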

Deploy Joget DX

  1. With the necessary database and persistent storage configured, proceed to deploy Joget DX. Apply the joget-dx8-tomcat9-aks.yaml file to start the deployment process:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: joget-dx8-tomcat9
      labels:
        app: joget-dx8-tomcat9
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: joget-dx8-tomcat9
      template:
        metadata:
          labels:
            app: joget-dx8-tomcat9
        spec:
          initContainers:
            - name: init-volume
              image: busybox:1.28
              command: ['sh', '-c', 'chmod -f -R g+w /opt/joget/wflow; exit 0']
              volumeMounts:
                - name: joget-dx8-tomcat9-volume
                  mountPath: "/opt/joget/wflow"
          volumes:
            - name: joget-dx8-tomcat9-volume
              persistentVolumeClaim:
                claimName: aks-nfs
          securityContext:
            runAsUser: 1000
            fsGroup: 0
          containers:
            - name: joget-dx8-tomcat9
              image: jogetworkflow/joget-dx8-tomcat9:latest
              ports:
                - containerPort: 8080
                - containerPort: 9080
              volumeMounts:
                - name: joget-dx8-tomcat9-volume
                  mountPath: /opt/joget/wflow
              env:
                - name: KUBERNETES_NAMESPACE
                  valueFrom:
                    fieldRef:
                        fieldPath: metadata.namespace
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: joget-dx8-tomcat9
      labels:
        app: joget-dx8-tomcat9
    spec:
      ports:
      - name: http
        port: 8080
        targetPort: 8080
      - name: https
        port: 9080
        targetPort: 9080  
      selector:
        app: joget-dx8-tomcat9
      type: ClusterIP
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: joget-dx8-tomcat9-clusterrolebinding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: default
  2. You can monitor the deployment progress either from the Azure portal or by using Kubernetes commands, such as:
    kubectl get deployment joget-dx8-tomcat9
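    You can also check the pods and follow the startup logs (the app label matches the deployment above):
    kubectl get pods -l app=joget-dx8-tomcat9
    kubectl logs -f -l app=joget-dx8-tomcat9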


Deploy ingress for external connections

  1. Configure the Nginx Ingress Controller to enable external access to the Joget application. You can read more about Ingress in Kubernetes here. You can deploy the controller using Helm or from the Nginx Ingress Controller GitHub repository.
    • Deploy the Nginx Ingress Controller to the AKS cluster:
      Refer to the AKS documentation on creating an ingress-nginx controller, as well as the nginx-ingress documentation.
    • Install using Helm:
      Using Azure CLI or Azure Cloud Shell, add the Helm repository for Nginx Ingress and install the controller:

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm repo update
       
      helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace nginx-ingress --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz --set controller.service.externalTrafficPolicy=Local

  2. After deploying the Ingress Controller, apply the joget-ingress.yaml file to enable external access:
    Example joget-ingress.yaml:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: joget-dx8-tomcat9-ingress
      annotations:
        nginx.ingress.kubernetes.io/affinity: cookie
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
    spec:
      ingressClassName: nginx
      rules:
        - http:
            paths:
              - path: /jw
                pathType: Prefix
                backend:
                  service:
                    name: joget-dx8-tomcat9
                    port:
                      number: 8080
  3. Obtain the public IP address from the Kubernetes resources (see the command below) and access the application externally:
    http://<external-ip>/jw
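    One way to obtain the external IP is to query the ingress controller's LoadBalancer service (assuming the nginx-ingress namespace from the Helm command above):
    kubectl get services --namespace nginx-ingress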

Database setup for Joget deployment

  1. Complete the database setup for Joget DX by entering the MySQL service name, database username, and password (see the example values below).
  2. Click Save.

  3. Once the setup is complete, click Done and you will be brought to the Joget App Center.
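
With the example MySQL deployment above, the values would resemble the following (the field names and the jwdb database name are indicative; use the schema you created on the MySQL server):

    Database Host: mysql (the MySQL Service name)
    Database Port: 3306
    Database Name: jwdb
    Database User: root
    Database Password: password (the MYSQL_ROOT_PASSWORD value above)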

Set up cert-manager for TLS termination

Implementing TLS termination ensures secure communication by encrypting data in transit. To set up cert-manager for TLS termination in your Kubernetes cluster, follow these steps:

  1. Modify the Ingress configuration to support underscores in headers and enable snippet annotations. Create or update a ConfigMap (ingress-configmap.yaml) with the following settings, using the namespace where the ingress controller was installed (nginx-ingress in the Helm command above):
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: nginx-ingress
    data:
      enable-underscores-in-headers: "true"
      allow-snippet-annotations: "true"

    Apply the updated configuration:

    kubectl apply -f ingress-configmap.yaml
Before proceeding with these steps, ensure that a DNS record points to the public IP of the ingress created earlier.

Install cert-manager into the cluster

  1. Install cert-manager in your cluster to manage the lifecycle of TLS certificates. You can install it using a YAML file from the official repository:
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml
    
  2. Configure an Issuer to use Let's Encrypt for generating TLS certificates. Update the stagingissuer.yaml file with your details:
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # The ACME server URL
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        # Email address used for ACME registration
        email: [Update email here]
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-staging
        # Enable the HTTP-01 challenge provider
        solvers:
        - http01:
            ingress:
              ingressClassName: nginx
  3. Apply the issuer configuration:
    kubectl apply -f stagingissuer.yaml
  4. After deploying the issuer, check its status to ensure it is configured correctly:
    kubectl describe issuer letsencrypt-staging
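
You can also confirm that the cert-manager components themselves are running:

kubectl get pods --namespace cert-manager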

Deploy/Update the Ingress with TLS Configuration

  1. Once the Ingress has been deployed without TLS configuration, update the Ingress YAML file to include the TLS configuration.
    Example Ingress yaml with TLS:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: joget-dx8-tomcat9-ingress
      annotations:
        nginx.ingress.kubernetes.io/affinity: cookie
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        cert-manager.io/issuer: "letsencrypt-staging"
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - exampledomain.com
        secretName: aks-jogetworkflow
      rules:
        - host: exampledomain.com
          http:
            paths:
              - path: /jw
                pathType: Prefix
                backend:
                  service:
                    name: joget-dx8-tomcat9
                    port:
                      number: 9080
  2. This staging procedure ensures the certificate is generated correctly before setting up the Issuer with Let’s Encrypt production.
  3. Run the following command to get the current status of the certificates:
    kubectl get certificate
    
  4. The output of the command would be something similar to the following:
    [ ~/jogetaks ]$ kubectl get certificate
    NAME                READY   SECRET              AGE
    aks-jogetworkflow   True    aks-jogetworkflow   30s
  5. Run the following command to get more details about the specific certificate:
    kubectl describe certificate aks-jogetworkflow
    
  6. Once the certificate is generated correctly, set up the production ClusterIssuer. Below is an example productionissuer.yaml file:
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
        # Email address used for ACME registration
        email: [update email here]
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
        # Enable the HTTP-01 challenge provider
        solvers:
        - http01:
            ingress:
              ingressClassName: nginx
  7. Update the ingress YAML file with the production annotation:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: joget-dx8-tomcat9-ingress
      annotations:
        nginx.ingress.kubernetes.io/affinity: cookie
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - exampledomain.com
        secretName: aks-jogetworkflow
      rules:
        - host: exampledomain.com
          http:
            paths:
              - path: /jw
                pathType: Prefix
                backend:
                  service:
                    name: joget-dx8-tomcat9
                    port:
                      number: 9080
  8. After updating the ingress YAML file, delete the previous secret to allow a new certificate to be generated for production:
    kubectl delete secret aks-jogetworkflow
  9. Run the describe command to check on the certificate status:
    kubectl describe certificate aks-jogetworkflow
  10. Once the new certificate has been issued, access the Joget domain over HTTPS to ensure that it is working as intended, as shown below.
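    For example, you can verify the certificate from the command line with curl (exampledomain.com is the placeholder domain used above):
    curl -v https://exampledomain.com/jw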


Scale deployment

Scaling your deployment lets you respond effectively to changes in demand or capacity. Azure Kubernetes Service (AKS) supports both automatic and manual scaling methods.

Scale pods automatically

To scale pods automatically in AKS, read here. A minimal example is sketched below.
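
For example, a minimal sketch using kubectl autoscale to create a HorizontalPodAutoscaler (the CPU target and replica bounds are illustrative, and HPA requires CPU resource requests to be set on the deployment):

kubectl autoscale deployment joget-dx8-tomcat9 --cpu-percent=80 --min=1 --max=5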

Scale pods manually

To manually increase or decrease the number of pods running the Joget application, use the kubectl scale command. For example, to scale to three replicas:

kubectl scale --replicas=3 deployment/joget-dx8-tomcat9

Set the --replicas value to your desired number of pods, and Kubernetes will create or terminate pods accordingly. You can watch the new pods start up as shown below.
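
To watch the pods as they start (the app label matches the deployment above):

kubectl get pods -l app=joget-dx8-tomcat9 -w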

Scale nodes manually

Modify Node Count in the Azure Portal to adjust the number of nodes in your cluster:

  1. Go to the Kubernetes service in the Azure portal.
  2. Go to Settings > Node pools.
  3. Select the node pool you want to scale.
  4. Click Scale node pool.
  5. Choose Manual as the scale method.
  6. Enter the desired number of nodes. The maximum available resources will depend on the VM size you have selected.