Automation Suite on Linux installation guide
Additional configuration
Repaving or updating the NFS server
UiPath® does not provide any specific steps to repave or update the NFS server.
Before updating the NFS server, make sure the backup is disabled. For instructions, see Backing up the cluster.
Guidelines
- It is good practice to back up the disk attached to the NFS server where you create the backup. You can find this information defined as the `nfs.mountpoint` key in the `backup.json` file, as illustrated in the sketch after this list.
- To repave the NFS server, take the following steps:
  1. Mount the backup disk on the newly repaved machine on the same mount point as defined by the `nfs.mountpoint` key in the `backup.json` file.
  2. Update the NFS endpoint of the new NFS server, defined as `nfs.endpoint` in `backup.json`.
  3. After repaving the NFS server, make sure you have followed the Setting up the external NFS server guidance.
- Once the update to the NFS server is complete, we recommend that you reboot the machine. Also make sure to re-enable the backup on the cluster. For instructions, see Backing up the cluster.
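The guidelines above reference the `nfs.mountpoint` and `nfs.endpoint` keys in `backup.json`. The following is a minimal sketch of how you might inspect and update these keys with `jq`; the file path and structure shown are assumptions, so adjust them to match your actual `backup.json`.

```bash
# Assumed location of the backup configuration file; adjust to your environment.
BACKUP_JSON="/opt/UiPathAutomationSuite/backup.json"

# Inspect the mount point and NFS endpoint currently used by the backup.
jq '.nfs.mountpoint, .nfs.endpoint' "$BACKUP_JSON"

# After repaving, point the backup at the new NFS server endpoint
# while keeping the same mount point.
jq '.nfs.endpoint = "new-nfs-server.example.com"' "$BACKUP_JSON" > backup.json.tmp \
  && mv backup.json.tmp "$BACKUP_JSON"
```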
Adding a new node to the cluster
To add a new node to the cluster, re-run the following steps:
- Add the FQDN or the IP address of the new node to the allowlist of the NFS server, as illustrated in the sketch after this list. For instructions, see Allowing NFS mount path access to all backup and restore nodes.
- Enable the backup on the new node post-installation. For instructions, see Backing up the cluster.
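On the NFS server, allowlisting typically means adding an export entry for the new node. The following sketch assumes a `/backup` export path and common export options; use the same path and options you configured when setting up the external NFS server, and replace the placeholder FQDN with the new node's FQDN or IP address.

```bash
# Add an export entry for the new node (placeholder FQDN and options).
echo "/backup newnode.example.com(rw,sync,no_all_squash,root_squash)" >> /etc/exports

# Re-export all shares so the new entry takes effect.
exportfs -arv
```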
Using a local disk backup
When backups are stored on a locally attached disk in a single server node of the backup cluster, you must manually detach and reattach that disk to the restore cluster before you can restore data.
You need to complete this process carefully to avoid disk conflicts and ensure a clean transition between clusters.
1. On the backup cluster, take the following steps:

   1. Connect to the backup cluster server node:

      ```bash
      ssh <backup-cluster-node>
      ```

   2. Unmount the backup disk:

      ```bash
      umount /backup
      ```

   3. Detach the backup disk from the virtual machine. The following is an Azure example:

      ```bash
      az vm disk detach \
        --resource-group "${resource_group_name}" \
        --vm-name "${node_name}" \
        --name "server0-CephBackupDisk"
      ```

2. On the restore cluster, take the following steps:

   1. Connect to the restore cluster server node:

      ```bash
      ssh <restore-cluster-node>
      ```

   2. Attach the backup disk. The following is an Azure example:

      ```bash
      az vm disk attach \
        --resource-group "${resource_group_name}" \
        --vm-name "${node_name}" \
        --name "server0-CephBackupDisk"
      ```

   3. If `/backup` is currently mounted, unmount it:

      ```bash
      umount /backup
      ```

   4. Remove any existing `/backup` entry from `/etc/fstab`:

      ```bash
      sed -i.bak '/\/backup/d' /etc/fstab
      ```

   5. Identify the newly attached disk:

      ```bash
      lsblk
      ```

      Locate the unmounted disk name (for example, `/dev/sdX`).

   6. Rescan physical volumes:

      ```bash
      pvscan
      ```

   7. Rename the volume group (VG) on the attached disk:

      ```bash
      vg_uuid=$(pvs --noheadings -o vg_uuid /dev/sdX | awk '{print $1}')
      vgrename "$vg_uuid" backupvg_restored
      ```

   8. Activate the renamed volume group:

      ```bash
      vgchange -ay backupvg_restored
      ```

   9. Ensure the mount directory exists:

      ```bash
      mkdir -p /backup
      ```

   10. Mount the logical volume:

       ```bash
       mount /dev/backupvg_restored/backuplv /backup
       ```

   11. Add a persistent entry in `/etc/fstab` (you can optionally validate it as sketched after this procedure):

       ```bash
       echo '/dev/backupvg_restored/backuplv /backup xfs defaults 0 0' >> /etc/fstab
       ```
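Before moving on, you may want to sanity-check the new `/etc/fstab` entry. This is an optional, minimal sketch and not part of the procedure above.

```bash
# Check /etc/fstab for syntax and usability problems,
# including the newly added /backup entry.
findmnt --verify

# Show how /backup is currently mounted (source, filesystem type, options).
findmnt /backup
```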
After you complete the procedure, you need to verify that the backup disk is correctly mounted and that the backup data is accessible on the restore cluster.
Take the following steps to confirm that the configuration is valid:
1. Verify that the `/backup` mount point is active:

   ```bash
   df -h /backup
   ```

2. Confirm that backup data is accessible under `/backup`, as shown in the example after this list.

3. Proceed with restore operations as needed.
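A quick way to run both checks from a shell is shown below; the contents you see under `/backup` depend on your backup configuration, so treat the listing as illustrative.

```bash
# Fail fast if /backup is not an active mount point.
mountpoint -q /backup || echo "/backup is not mounted"

# List the top-level contents so you can confirm that the expected
# backup data is present (directory names depend on your configuration).
ls -lah /backup
```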
- Always make sure that no active write operations are occurring on `/backup` before you detach the disk.
- If you are using a non-Azure environment, replace the disk attach and detach commands with the equivalent commands for your platform (for example, AWS `aws ec2 detach-volume`), as sketched after this list.
- Renaming the volume group (`vgrename`) prevents name conflicts between clusters.
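For example, on AWS the Azure commands in the procedure roughly correspond to the following AWS CLI calls. The volume ID, instance IDs, and device name below are placeholders and must be replaced with your own values.

```bash
# Detach the backup volume from the backup cluster node (placeholder IDs).
aws ec2 detach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0aaaabbbbccccdddd

# Attach the same volume to the restore cluster node (placeholder IDs).
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0eeeeffff00001111 \
  --device /dev/sdf
```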