Automation Suite on Linux installation guide
Optional: Configuring the L7 Load Balancer
Overview
Layer 7 (L7) load balancer support is available as an optional alternative to the standard Layer 4 (L4) configuration for multi-node, HA-ready production deployments. Unlike L4 load balancers, which operate at the network/transport layer, L7 load balancers provide application-layer intelligence with advanced traffic management capabilities.
Key benefits of using an L7 load balancer include:
- Web Application Firewall (WAF) protection against common web vulnerabilities
- SSL/TLS termination at the load balancer level, reducing backend server load
- Content and path-based routing for intelligent traffic distribution
- Advanced monitoring and analytics with application-layer visibility
Keep in mind the following:
- RKE2 does not support TLS termination at the load balancer level; services such as the Kubernetes API and the RKE2 supervisor require direct TCP/TLS passthrough.
- The L4 configuration is recommended; however, the L7 configuration is also supported.
- L7 load balancers preserve the client IP for tracking through the X-Forwarded-For header, as L4 load balancers do.
The following diagram shows how HTTPS traffic is processed by the L7 load balancer.

HTTPS listener certificate
You must configure a client-validated TLS certificate (custom SAN certificate) on the load balancer's HTTPS listener. This ensures that all browser and API clients trust the Automation Suite endpoint. The certificate is required for the client-to-load balancer SSL connection.
Backend TLS certificates
If your server-side TLS certificates are self-signed or not issued by a public CA, upload the corresponding root certificate into the load balancer's backend configuration. This allows the load balancer to validate and re-encrypt traffic to the Automation Suite servers. The certificate is required for the load balancer-to-server SSL connection.
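As a quick sanity check before configuring the load balancer, you can inspect both certificates with openssl. This is a minimal sketch; the file names are placeholders for your own certificate files.

```bash
# Confirm that the custom SAN certificate bound to the HTTPS listener
# covers the Automation Suite FQDN (file names are placeholders).
openssl x509 -in custom-san-cert.pem -noout -text | grep -A1 "Subject Alternative Name"

# Confirm that the server-side TLS certificate chains to the root
# certificate you plan to upload to the backend settings.
openssl verify -CAfile rootCA.pem server-tls-cert.pem
```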
Automation Suite supports two L7 load balancer configurations:
- L7 load balancer with L4 capabilities configuration (recommended)
- L7-only load balancer configuration (alternative)
Recommended configuration: L7 load balancer with L4 capabilities
This configuration provides full L7 capabilities while maintaining RKE2 compatibility.
Requirements
To use this configuration, your load balancer must meet the following requirements:
- Support both L7 and L4 modes (for example, Azure Application Gateway)
- Allow TCP/TLS passthrough for Kubernetes control plane services (ports 6443 and 9345), which require direct TCP/TLS connectivity
Configuring the backend pool
As part of the configuration, you need to set up the following backend pools on the load balancer:
- Server pool: All server nodes only (no agent nodes)
- Node pool: All server nodes plus nonspecialized agent nodes (no GPU or attended robots)
- Temporary registry pool: The server node where the temporary registry is installed
- This pool is used only during installation, node joining, and upgrade. After completing those procedures, you can close it.
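For illustration only, the following sketch shows how the three backend pools could be created on an Azure Application Gateway using the Azure CLI. The resource group, gateway name, and IP addresses are placeholders, and your load balancer may expose a different interface; treat this as an assumption-based example rather than a prescribed procedure.

```bash
# Backend pool sketch for an Azure Application Gateway (placeholder names and IPs).

# Server pool: server nodes only
az network application-gateway address-pool create \
  --resource-group my-rg --gateway-name my-appgw \
  --name ServerPool --servers 10.0.1.4 10.0.1.5 10.0.1.6

# Node pool: server nodes plus nonspecialized agent nodes
az network application-gateway address-pool create \
  --resource-group my-rg --gateway-name my-appgw \
  --name NodePool --servers 10.0.1.4 10.0.1.5 10.0.1.6 10.0.1.7

# Temporary registry pool: the server node hosting the temporary registry
az network application-gateway address-pool create \
  --resource-group my-rg --gateway-name my-appgw \
  --name TempRegistryPool --servers 10.0.1.4
```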
Enabling ports on the load balancer
You must enable the following ports on the load balancer. The following table lists the required ports and their traffic handling.
| Port | Protocol | Layer | Purpose | Traffic handling |
|---|---|---|---|---|
| 443 | HTTPS | L7 | Automation Suite web access | SSL termination at load balancer (node pool) |
| 30070¹ | HTTP | L7 | Temporary registry access | No TLS termination (temporary registry pool) |
| 6443 | TCP | L4 | Kubernetes API access (node joining) | TCP/TLS passthrough (server pool) |
| 9345 | TCP | L4 | Kubernetes API access (node joining) | TCP/TLS passthrough (server pool) |

¹ If you do not have an external OCI-compliant registry, you must open port 30070 on the load balancer and on the server node where the temporary Docker registry is installed.
Also configure listeners, health probes, routing rules, and backend settings in the load balancer according to the definitions above.
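Once the listeners and rules are in place, you can spot-check each port from a machine that can reach the load balancer. This is a minimal sketch assuming a placeholder FQDN (automationsuite.example.com) and the standard registry API path; substitute your own FQDN.

```bash
# 443: L7 listener, TLS terminated at the load balancer (expects an HTTP status code)
curl -sk -o /dev/null -w '%{http_code}\n' https://automationsuite.example.com/

# 30070: temporary registry listener (offline installations only)
curl -s -o /dev/null -w '%{http_code}\n' http://automationsuite.example.com:30070/v2/

# 6443 and 9345: L4 TCP/TLS passthrough to the server pool
openssl s_client -connect automationsuite.example.com:6443 </dev/null 2>/dev/null | head -n 1
openssl s_client -connect automationsuite.example.com:9345 </dev/null 2>/dev/null | head -n 1
```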
The following diagram describes how ports are enabled and mapped in an L7 load balancer with L4 capabilities.

Alternative configuration: L7-only load balancer
This configuration is intended for environments where L4 capabilities are not available. In this case, node joining bypasses the L7 load balancer and connects directly to the server nodes for control plane traffic.
Keep in mind the following limitations:
- This configuration does not provide resilience if nodes fail during installation.
- If the primary server is down or deleted, you must update the cluster configuration. If you add a new server node after the deletion, configure it appropriately based on your requirements before adding it to the server node pool.
- The FQDN of the primary server must be remapped to another available machine in the cluster.
Configuring the backend pool
In this configuration, you need to create the following backend pools on the load balancer:
- Node pool: Contains all server nodes and nonspecialized agent nodes
- Temporary registry pool (if required): Contains the server node where the temporary registry is installed
Enabling ports on the load balancer
You must enable the following ports on the load balancer. The following table lists the required ports and their traffic handling.
| Port | Protocol | Purpose | Traffic handling |
|---|---|---|---|
| 443 | HTTPS | Automation Suite web access | Forward traffic to the node pool |
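Because node joining bypasses the load balancer in this configuration, any node you plan to join must reach the control plane ports on an existing server node directly. A minimal spot-check, assuming a placeholder server IP address:

```bash
# From a node that will join the cluster, confirm direct reachability of the
# control plane ports on an existing server node (10.0.1.4 is a placeholder).
nc -zv 10.0.1.4 6443
nc -zv 10.0.1.4 9345
```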
The following diagram shows the port configuration for an L7-only load balancer.
