
Setting up AI Designer API Server

Pulling the AI Designer API container image

  1. The AI Designer API container image is hosted on https://quay.io at quay.io/joget/ai-designer-api:8.1-BETA.
  2. Pulling the image requires authentication to quay.io; credentials can be requested from https://intelligence.int.joget.cloud.
  3. With the credentials, use docker login for Docker or an imagePullSecrets reference for Kubernetes to pull the container image, as shown below.
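
For example, a minimal sketch of both approaches (the <username> and <password> placeholders and the quay-pull-secret name are illustrative):

# Docker: log in to quay.io with the provided credentials, then pull the image.
docker login quay.io -u <username>
docker pull quay.io/joget/ai-designer-api:8.1-BETA

# Kubernetes: create a registry pull secret in the target namespace, then
# reference it under spec.imagePullSecrets in the pod template.
kubectl create secret docker-registry quay-pull-secret \
  --docker-server=quay.io \
  --docker-username=<username> \
  --docker-password=<password>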

The secret for accessing the API

The secret is used to authenticate calls to the APIs.

The secret, which is set in the .env file, acts as a master key that can be used to:

  1. Access the openapi.json through the /docs endpoint, with admin as the username and the SECRET value from the .env file as the password.
  2. Access the master API endpoint for revoking specific API keys.
  3. Generate API keys by encoding a JWT payload with the SECRET using the HS256 algorithm (see the sketch below).
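
For illustration, the /docs endpoint accepts HTTP basic authentication, and an API key can be produced by HMAC-signing a base64url-encoded header and payload with the SECRET. This is a minimal sketch; the host, port, and payload claims are assumptions, so substitute whatever your deployment expects:

# Open the API docs with basic authentication (username admin, password SECRET).
curl -u "admin:${SECRET}" http://localhost:8000/docs

# Minimal HS256 JWT sketch using OpenSSL; the {"sub":"api-client"} payload is illustrative.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"sub":"api-client"}' | b64url)
signature=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "${SECRET}" -binary | b64url)
echo "${header}.${payload}.${signature}"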

ENV File

Save the following as a .env file:

SECRET=Please request from https://intelligence.int.joget.cloud/ or set your own

MODEL_TEMPERATURE=0.2

LOG_LEVEL=info

FASTAPI_APPLICATION_PORT=8000
APPLICATION_WORKERS=1
APPLICATION_THREADS=1

MAX_ATTEMPTS=2

HUGGING_FACE_TOKEN=Please add your Hugging Face token if using custom open-source models

BACKEND_CORS_ORIGINS=*

ENVIRONMENT=server

BUILD_CELERY=false
BUILD_MONGO=false
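
If you set your own SECRET rather than requesting one, a high-entropy random value is advisable; for example (a minimal sketch using OpenSSL):

# Generate a 64-character hex string suitable as the SECRET value.
openssl rand -hex 32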

Deployment using docker-compose

Save this file as docker-compose.yml or download it from the release package:

version: "3.3"
services:
  llm_workflow_api:
    image: quay.io/joget/ai-designer-api:8.1-BETA
    command: gunicorn -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 --workers ${APPLICATION_WORKERS} --threads ${APPLICATION_THREADS} llm_workflow.fastapi_application --timeout 400 --log-level=${LOG_LEVEL}
    env_file:
      - .env
    container_name: ai_designer_api
    restart: unless-stopped
    ports:
      - ${FASTAPI_APPLICATION_PORT}:8000
    environment:
      LOG_LEVEL: ${LOG_LEVEL}
      ENVIRONMENT: server
    volumes:
      - ./datafiles:/LLMWorkflow/llm_workflow/datafiles
    networks:
      - llm_workflow
    healthcheck:
      test: ["CMD", "curl", "-f", "-X", "POST", "http://localhost:8000/health-check"]
      interval: 30s
      timeout: 10s
      retries: 5
networks:
  llm_workflow:

Command

sudo docker compose -f ./docker-compose.yml --env-file .env up --build -d
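
Once the stack is up, you can verify that the API is reachable on the mapped port; a quick check, assuming the default FASTAPI_APPLICATION_PORT of 8000:

docker compose -f ./docker-compose.yml ps
curl -f -X POST http://localhost:${FASTAPI_APPLICATION_PORT:-8000}/health-check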

Deployment on Kubernetes

Use the following deployment YAML and save it inside the kubernetes directory (the deployment script below expects it at ./kubernetes/wflow-api-deployment.yaml):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-designer-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-designer-api
  template:
    metadata:
      labels:
        app: ai-designer-api
    spec:
      containers:
      - name: ai-designer-api
        image: ${DOCKER_IMAGE}
        imagePullPolicy: Always
        workingDir: /LLMWorkflow
        command: ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000", "--workers", "$(APPLICATION_WORKERS)", "--threads", "$(APPLICATION_THREADS)", "llm_workflow.fastapi_application", "--timeout", "400", "--log-level=$(LOG_LEVEL)"]
        ports:
        - containerPort: 8000
        envFrom:
        - configMapRef:
            name: llm-wflow-config
        resources:
          requests:
            # cpu: 4000m
            memory: 2Gi
          limits:
            # cpu: 8000m
            memory: 6Gi
        livenessProbe:
          exec:
            command:
            - curl
            - -f
            - -X
            - POST
            - http://localhost:8000/health-check
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 6
        volumeMounts:
        - name: datafiles
          mountPath: /LLMWorkflow/llm_workflow/datafiles
      volumes:
      - name: datafiles
        emptyDir: {}  # For demonstration. Consider using a PersistentVolume in production.
---
apiVersion: v1
kind: Service
metadata:
  name: ai-designer-api-service
spec:
  selector:
    app: ai-designer-api
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: NodePort  # NodePort is used here instead of LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ai-designer-api-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: "10M"
    cert-manager.io/cluster-issuer: letsencrypt-issuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - ${INGRESS_HOST}
    secretName: joget-cloud-tls
  rules:
    - host: ${INGRESS_HOST}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ai-designer-api-service
                port:
                  number: 8000

The following deployment script creates the namespace and ConfigMap, substitutes the ${DOCKER_IMAGE} and ${INGRESS_HOST} placeholders, and applies the deployment:

#!/bin/bash

TAG=${TAG:-"8.1-BETA"}
DOCKER_IMAGE=${DOCKER_IMAGE:-"quay.io/joget/ai-designer-api:$TAG"}
K8S_NAMESPACE=${K8S_NAMESPACE:-"ai-designer"}
INGRESS_HOST=${INGRESS_HOST:-"YOUR HOSTNAME"}

# Load environment variables from the .env file
if [ -f llm_workflow/.env ]; then
  set -a # Automatically export all variables
  source llm_workflow/.env
  set +a # Stop exporting variables automatically
else
  echo "Environment file llm_workflow/.env not found!"
  exit 1
fi

echo "Create Namespace $K8S_NAMESPACE"
kubectl get namespace $K8S_NAMESPACE ||
  kubectl create namespace $K8S_NAMESPACE

echo "Create ConfigMap"
kubectl -n $K8S_NAMESPACE create configmap llm-wflow-config --from-env-file=./llm_workflow/.env.k8s --dry-run=client -o yaml > ./kubernetes/llm-config-configmap.yaml
kubectl -n $K8S_NAMESPACE apply -f ./kubernetes/llm-config-configmap.yaml

echo "Deploy API Server"
envsubst < ./kubernetes/wflow-api-deployment.yaml | kubectl -n $K8S_NAMESPACE apply -f -

echo "Wait for Deployment and Display Pods and Ingress"
kubectl -n $K8S_NAMESPACE rollout status deploy ai-designer-api
kubectl -n $K8S_NAMESPACE get po

kubectl -n $K8S_NAMESPACE get ingress
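
After the rollout completes, one minimal way to verify the service (assuming DNS for the ingress host is not yet in place) is to port-forward the Service and call the health-check endpoint:

# Forward local port 8000 to the Service, then probe the API.
kubectl -n $K8S_NAMESPACE port-forward svc/ai-designer-api-service 8000:8000 &
curl -f -X POST http://localhost:8000/health-check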

Deployment using Docker

  1. Create a Docker network.
    docker network create llm_workflow
  2. Source the ENV variables.
    set -a
    source .env
    set +a
  3. Start the API server.
    docker run -d \
      --name llm_workflow_api \
      --env-file .env \
      -e LOG_LEVEL=${LOG_LEVEL} \
      -e ENVIRONMENT=server \
      -v $(pwd)/datafiles:/LLMWorkflow/llm_workflow/datafiles \
      -p ${FASTAPI_APPLICATION_PORT}:8000 \
      --network llm_workflow \
      --restart always \
      quay.io/joget/ai-designer-api:8.1-BETA \
      gunicorn -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 --workers ${APPLICATION_WORKERS} --threads ${APPLICATION_THREADS} llm_workflow.fastapi_application --timeout 400 --log-level=${LOG_LEVEL}
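
As with the compose deployment, a quick sanity check that the container started cleanly (assuming the default port of 8000):

docker logs llm_workflow_api
curl -f -X POST http://localhost:${FASTAPI_APPLICATION_PORT:-8000}/health-check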