Automation Suite on Linux installation guide
Executing the upgrade
To perform an Automation Suite upgrade, you must put the cluster in maintenance mode. Maintenance mode causes downtime for the entire upgrade process, during which your business automations are suspended. We strongly recommend creating a backup of both the cluster and the SQL databases before the upgrade, so that you can restore the cluster if something goes wrong during the upgrade operation.
To execute the upgrade, you must take the following steps:
- Run the prerequisite checks.
- Configure the backup.
- Disable the backup.
- Put the cluster in maintenance mode.
- Migrate Longhorn workloads, MongoDB data, and Ceph to Helm-based deployment.
- Update Kubernetes and other infrastructure components.
- Upgrade the shared components and UiPath® services.
Running the prerequisite checks
You must verify that all the upgrade requirements are met before you put the cluster in maintenance mode.
- Run the infrastructure prerequisite checks, using the following commands:
  cd /opt/UiPathAutomationSuite/latest/installer
  ./bin/uipathctl rke2 prereq run cluster_config.json --versions versions/helm-charts.json
- Run the shared component and services prerequisite checks, using the following commands:
  cd /opt/UiPathAutomationSuite/latest/installer
  ./bin/uipathctl prereq run cluster_config.json --versions versions/helm-charts.json
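Both checks can be chained so that the upgrade preparation stops at the first failure. The following is a minimal sketch, not part of the official tooling; the `INSTALLER_DIR` and `UIPATHCTL` variables are assumptions added so the script can be pointed at a non-default installer location:

```shell
#!/usr/bin/env bash
# Sketch: run both prerequisite checks and stop at the first failure.
# INSTALLER_DIR and UIPATHCTL are assumptions (overridable defaults),
# not official installer variables.
set -euo pipefail

INSTALLER_DIR="${INSTALLER_DIR:-/opt/UiPathAutomationSuite/latest/installer}"
UIPATHCTL="${UIPATHCTL:-./bin/uipathctl}"

run_prereq_checks() {
  cd "$INSTALLER_DIR"
  # Infrastructure-level checks first ...
  "$UIPATHCTL" rke2 prereq run cluster_config.json \
    --versions versions/helm-charts.json || return 1
  # ... then the shared component and services checks.
  "$UIPATHCTL" prereq run cluster_config.json \
    --versions versions/helm-charts.json || return 1
  echo "All prerequisite checks passed"
}
```

Run `run_prereq_checks` on the node where the installer is located; a non-zero exit code means at least one check failed and the upgrade should not proceed.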
Configuring the backup
To configure the backup, take the following steps:
- Make sure you enabled the backup on the cluster. You must create the backup using the same version of the installer as the one you used for the current deployment. For instructions, see the backup and restore documentation corresponding to the Automation Suite version from which you plan to upgrade. For instance, if you plan to upgrade from Automation Suite 2023.4, follow the instructions in the 2023.4 guide.
- Connect to one of the server nodes via SSH.
- Verify that all desired volumes have backups in the cluster:
- If you upgrade from 2022.4 or older, run the following command:
  /path/to/old-installer/configureUiPathAS.sh verify-volumes-backup
- If you upgrade from 2022.10 or newer, run the following command:
  ./configureUiPathAS.sh snapshot list
- If you upgrade from 2024.10 or newer, run the following command:
  ./bin/uipathctl snapshot list
The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again. Once the backup is created, continue with the following steps.
Note: When you upgrade from 23.10.0 or later versions, make sure you use the uipathctl command from the target version directory.
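Instead of waiting a fixed 15-20 minutes, the verification can be polled in a loop. The sketch below assumes the 2024.10+ command; the "Completed" status string it greps for is an assumption about the snapshot list output, so adjust it to what your version actually prints:

```shell
# Sketch: poll the snapshot list until a completed backup appears.
# UIPATHCTL, the "Completed" status string, and the retry limits are
# assumptions; adjust them to your installer version's output.
set -u

UIPATHCTL="${UIPATHCTL:-./bin/uipathctl}"
MAX_ATTEMPTS="${MAX_ATTEMPTS:-40}"    # 40 attempts x 30 s = 20 minutes
SLEEP_SECONDS="${SLEEP_SECONDS:-30}"

wait_for_backup() {
  attempt=1
  while [ "$attempt" -le "$MAX_ATTEMPTS" ]; do
    # Assumed output format: one line per snapshot with a status column.
    if "$UIPATHCTL" snapshot list 2>/dev/null | grep -q "Completed"; then
      echo "Backup snapshot found"
      return 0
    fi
    sleep "$SLEEP_SECONDS"
    attempt=$((attempt + 1))
  done
  echo "No completed backup after $MAX_ATTEMPTS attempts" >&2
  return 1
}
```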
Disabling the backup
Before you place the cluster in the maintenance mode, you must disable the backup to avoid backing up the cluster in a suboptimal state. For details, see Disabling the snapshot backup.
Putting the cluster in maintenance mode
Putting the cluster in maintenance mode shuts down the ingress controller and all the UiPath® services, blocking all the incoming traffic to the Automation Suite cluster.
- To put the cluster in maintenance mode, run:
  ./bin/uipathctl cluster maintenance enable
- To verify that the cluster is in maintenance mode, run:
  ./bin/uipathctl cluster maintenance is-enabled
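The two steps can be combined so that the upgrade aborts early if the cluster did not actually enter maintenance mode. This sketch relies only on the exit code of the verify command, since its exact output may vary by version; `UIPATHCTL` is an assumption added for flexibility:

```shell
# Sketch: enable maintenance mode, then confirm it before proceeding.
# Only the exit code of "maintenance is-enabled" is checked, because the
# command's printed output may differ between versions. UIPATHCTL is an
# assumption, not an official variable.
set -euo pipefail

UIPATHCTL="${UIPATHCTL:-./bin/uipathctl}"

enter_maintenance() {
  "$UIPATHCTL" cluster maintenance enable
  if "$UIPATHCTL" cluster maintenance is-enabled; then
    echo "Cluster is in maintenance mode"
  else
    echo "Cluster did not enter maintenance mode; aborting" >&2
    return 1
  fi
}
```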
Performing pre-upgrade migration operations
Run the following mandatory pre-upgrade command:
./bin/uipathctl cluster pre-upgrade cluster_config.json --versions-dir ./versions
This command selectively migrates Longhorn workloads to local persistent volumes, MongoDB data to SQL, and the Ceph deployment from ArgoCD to a Helm-based deployment.
Updating Kubernetes and other infrastructure components
To upgrade Kubernetes and the other infrastructure components, run the following commands on the primary server node. Make sure you updated the generated cluster_config.json file as described in Updating the cluster configuration.
cd /opt/UiPathAutomationSuite/latest/installer
./bin/uipathctl rke2 upgrade cluster_config.json --versions versions/helm-charts.json
- Running the previous command on the primary server node copies the installer and cluster_config.json to the /opt/UiPathAutomationSuite/<version>/installer default location and upgrades the infrastructure on all the machines.
- The /opt/UiPathAutomationSuite/<version> default location must have at least 5 GB available across all nodes.
- To change the default location, update the following environment variable with the desired location. Make sure that the location is available on all the nodes and has the required permissions to run the upgrade.
  export INSTALLER_DIRECTORY=/path/to/copy/installer
- After you run the upgrade with this variable set, the installer is copied to the /path/to/copy/installer/<version>/installer location, where <version> is replaced with the version of the installer that you execute.
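The 5 GB free-space requirement can be verified up front on each node. The following is a minimal sketch (not part of the installer); the `check_free_space` helper and `REQUIRED_KB` variable are names introduced here for illustration:

```shell
# Sketch: verify that the installer location has at least 5 GB free
# before starting the infrastructure upgrade. Run on every node.
# check_free_space and REQUIRED_KB are hypothetical helper names.
set -u

REQUIRED_KB=$((5 * 1024 * 1024))   # 5 GB expressed in 1 KB blocks
TARGET="${INSTALLER_DIRECTORY:-/opt/UiPathAutomationSuite}"

check_free_space() {
  dir="$1"
  # Walk up to the nearest existing parent so df has a valid path,
  # in case the installer directory has not been created yet.
  while [ ! -d "$dir" ]; do
    dir=$(dirname "$dir")
  done
  # POSIX df -P guarantees a stable column layout; column 4 is
  # the available space in 1 KB blocks.
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -ge "$REQUIRED_KB" ]; then
    echo "OK: ${avail_kb} KB available at $dir"
  else
    echo "ERROR: only ${avail_kb} KB available at $dir, need ${REQUIRED_KB} KB" >&2
    return 1
  fi
}
```

Call it as `check_free_space "$TARGET"` on each node before running the rke2 upgrade command.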
Troubleshooting
- The upgrade logs on the primary server are available in the /opt/UiPathAutomationSuite/latest/installer/upgrade-logs default location, unless you explicitly used a custom location for the installer.
- On all the other nodes, the logs are available in the /opt/UiPathAutomationSuite/<version>/installer/upgrade-logs default location, unless you explicitly changed this location via the INSTALLER_DIRECTORY variable.
Upgrading the shared components and UiPath® services
- If you have Insights enabled, you must run the following command to ensure that the Insights data persists after the upgrade:
  kubectl -n uipath create cm migration-lock --from-literal=migration=pending --dry-run=client -o yaml | kubectl apply -f -
  Note: Running this command has no adverse impact if Insights is not enabled.
- To upgrade the shared components and the UiPath® product services, run the following commands on the primary server node:
  cd /opt/UiPathAutomationSuite/latest/installer
  ./bin/uipathctl manifest apply cluster_config.json --versions versions/helm-charts.json
  Important: After completing the upgrade, maintenance mode is disabled automatically.
- To verify that Automation Suite is healthy, run the following commands:
  cd /opt/UiPathAutomationSuite/latest/installer
  ./bin/uipathctl health check
  Note: If you cannot find helm-charts.json, you can use versions.json instead. To download versions.json, see Downloading the installation packages.
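The apply and verify steps above can be chained so that a failed health check surfaces immediately after the upgrade. This is a sketch rather than official tooling; `INSTALLER_DIR` and `UIPATHCTL` are assumed overridable defaults:

```shell
# Sketch: apply the new manifests, then immediately verify cluster
# health, failing loudly if the check does not pass. INSTALLER_DIR and
# UIPATHCTL are assumptions matching the commands above.
set -euo pipefail

INSTALLER_DIR="${INSTALLER_DIR:-/opt/UiPathAutomationSuite/latest/installer}"
UIPATHCTL="${UIPATHCTL:-./bin/uipathctl}"

upgrade_services() {
  cd "$INSTALLER_DIR"
  "$UIPATHCTL" manifest apply cluster_config.json \
    --versions versions/helm-charts.json
  # Maintenance mode is lifted automatically after the apply; confirm
  # the cluster is healthy before resuming business automations.
  "$UIPATHCTL" health check && echo "Upgrade completed and cluster healthy"
}
```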
After completing the upgrade, perform the cleanup and migration activity applicable to you.