Automation Suite on EKS/AKS installation guide
Disaster recovery: Active/Passive configurations
Active/Passive mode is currently available only for EKS.
Some Automation Suite products are not supported in a Disaster Recovery - Active/Passive deployment. You can install these products only on the primary cluster. For details, see Disaster recovery - Active/Passive.
The disaster recovery configuration requires you to install the two Automation Suite clusters separately. To install both the primary and secondary clusters in an Active/Passive deployment, you must configure the following input.json parameters:
For Active/Passive deployments, configure the parameters listed in the following table.
| Parameter | Description |
|---|---|
| `fqdn` | The FQDN that, at the time of installation, points to the load balancer of the primary cluster. For details, refer to DNS routing logic. |
| `cluster_fqdn` | The cluster-specific FQDN (DNS) that points to the load balancer of the cluster you set up using the `input.json` file. For details, refer to DNS routing logic. |
| `multisite.enabled` | Indicates that Automation Suite must be configured for multi-site operation. Must be set to `true`. |
| `multisite.primary` | Indicates that this cluster is the primary cluster; must be set to `true`. Defaults to `false`, which denotes the secondary cluster. |
| `multisite.other_kube_config` | The base64-encoded kubeconfig file of the other cluster. While installing the primary Automation Suite cluster, this value is unavailable and can be left as is. However, you must provide the value when rebuilding the primary Automation Suite cluster later during recovery. |
| `multisite.type` | The deployment type. Must be set to `active-passive`. |
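Taken together, these parameters form the disaster recovery portion of `input.json`. The following fragment is a sketch of how they might look for a primary cluster; the FQDN values are placeholders, not required names, and `other_kube_config` is left empty because it is unavailable while installing the primary cluster:

```json
{
  "fqdn": "automationsuite.example.com",
  "cluster_fqdn": "primary.automationsuite.example.com",
  "multisite": {
    "enabled": true,
    "primary": true,
    "other_kube_config": "",
    "type": "active-passive"
  }
}
```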
Optional: Proxy configuration
If you choose to configure a proxy, in addition to the standard proxy configuration described in Configuring the proxy, make sure the no_proxy variable includes the following values:
- `<traffic-manager-fqdn>` - the same value as `fqdn` in the disaster recovery configuration
- `<primary-cluster-fqdn>` - the same value as `cluster_fqdn` set in the primary cluster
- `<secondary-cluster-fqdn>` - the same value as `cluster_fqdn` set in the secondary cluster
The following example shows a valid proxy configuration:

```json
"proxy": {
  "enabled": true,
  "http_proxy": "http://20.110.210.6:3128",
  "https_proxy": "http://20.110.210.6:3128",
  "no_proxy": "<secondary-cluster-fqdn>,<primary-cluster-fqdn>,<traffic-manager-fqdn>"
}
```
Multi-site configuration
This page describes how to set up a multi-site configuration with a primary and secondary cluster. The primary cluster is active and the secondary cluster is passive.
- In the configuration for the primary cluster, the `enabled` option must be set to `true`:

  ```json
  "multisite": {
    "enabled": true,
    "primary": true,
    "type": "active-passive"
  }
  ```

- In the configuration for the secondary cluster, the `primary` option must be set to `false`:

  ```json
  "multisite": {
    "enabled": true,
    "primary": false,
    "other_kube_config": "[base64 encoded kubeconfig]",
    "type": "active-passive"
  }
  ```

  You must supply the primary cluster's kubeconfig as a base64-encoded string.

- Services that are not compatible with being in a passive state must be disabled. For details on services that do not support Active/Passive mode, refer to the Disaster recovery - Active/Passive page.

- Ensure the certificates are consistent across the primary and secondary clusters, as this is not automatically checked or enforced.
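The base64-encoded kubeconfig required for `other_kube_config` must be a single line. A minimal sketch of producing it, assuming a GNU `base64` binary (the `-w 0` flag disables line wrapping); the path and stand-in file content below are illustrative assumptions, not part of the product:

```shell
# Point this at your real primary-cluster kubeconfig; the path and the
# stand-in content written below are for demonstration only.
KUBECONFIG_PATH="${KUBECONFIG_PATH:-/tmp/primary-kubeconfig.yaml}"
printf 'apiVersion: v1' > "$KUBECONFIG_PATH"

# Produce the single-line base64 string for multisite.other_kube_config.
base64 -w 0 "$KUBECONFIG_PATH"
```

Paste the command's output into the `other_kube_config` field of the secondary cluster's `input.json`, or into the primary cluster's configuration when rebuilding it during recovery.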