UiPath Documentation · automation-suite · 2.2510

Automation Suite on Linux installation guide

Last updated Mar 26, 2026

Additional configuration

Repaving or updating the NFS server

Important:

UiPath® does not provide any specific steps to repave or update the NFS server.

Before updating the NFS server, make sure the backup is disabled. For instructions, see Backing up the cluster.

Guidelines

  1. It is good practice to back up the disk attached to the NFS server where you create the backup. The mount point of this disk is defined as the nfs.mountpoint key in the backup.json file.

  2. To repave the NFS server, take the following steps:

    1. Mount the backup disk on the newly repaved machine on the same mount point as defined by the nfs.mountpoint key in the backup.json file.
    2. Update the NFS endpoint of the new NFS server defined as nfs.endpoint in backup.json.
    3. After repaving the NFS server, make sure you have followed the Setting up the external NFS server guidance.
  3. Once the update to the NFS server is complete, we recommend rebooting the machine. Also make sure to enable the backup on the cluster. For instructions, see Backing up the cluster.
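The nfs.mountpoint and nfs.endpoint values referenced in these guidelines can be read from backup.json, for example with jq. The nested key layout, the sample file, and its values in this sketch are illustrative assumptions; check your actual backup.json.

```shell
#!/usr/bin/env bash
# Sketch: read the NFS settings referenced above from backup.json.
# Assumptions: the keys are nested under "nfs" and jq is installed;
# the sample file and its values below are placeholders.
set -eu

sample=$(mktemp)
cat > "$sample" <<'EOF'
{ "nfs": { "mountpoint": "/nfs-data", "endpoint": "20.0.0.1" } }
EOF

mountpoint=$(jq -r '.nfs.mountpoint' "$sample")  # where to mount the backup disk
endpoint=$(jq -r '.nfs.endpoint' "$sample")      # NFS server to update after repaving

echo "mount the backup disk at: $mountpoint"
echo "NFS endpoint to update:   $endpoint"
```

Mount the backup disk of the repaved server at the path reported as the mount point, then update the endpoint value in backup.json to point at the new server.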

Adding a new node to the cluster

To add a new node to the cluster, repeat the following steps:

  1. Add the FQDN or the IP address of the new node to the allowlist of the NFS server. For instructions, see Allowing NFS mount path access to all backup and restore nodes.
  2. Enable the backup on the new node post-installation. For instructions, see Backing up the cluster.
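The allowlist step above amounts to adding an export entry for the new node on the NFS server. The export path and mount options below are illustrative assumptions; follow the Allowing NFS mount path access to all backup and restore nodes page for the exact entry your deployment requires.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: build the /etc/exports entry that allowlists a new node.
# NEW_NODE, the export path, and the options are placeholders, not real settings.
set -eu

NEW_NODE="node3.example.com"   # FQDN or IP address of the new node (placeholder)
EXPORT_LINE="/nfs-data ${NEW_NODE}(rw,sync,no_all_squash,root_squash)"
echo "$EXPORT_LINE"

# On the NFS server, append the entry and re-export (requires root):
#   echo "$EXPORT_LINE" | sudo tee -a /etc/exports
#   sudo exportfs -ra
```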

Using a local disk backup

When backups are stored on a locally attached disk in a single server node of the Backup Cluster, you must manually detach and reattach that disk to the Restore Cluster before you can restore data.

You need to complete this process carefully to avoid disk conflicts and ensure a clean transition between clusters.

  1. On the backup cluster, take the following steps:
    1. Connect to the backup cluster server node.

      ssh <backup-cluster-node>
      
    2. Unmount the backup disk.

      umount /backup
      
    3. Detach the backup disk from the virtual machine. The following is an Azure example:

      az vm disk detach \
        --resource-group "${resource_group_name}" \
        --vm-name "${node_name}" \
        --name "server0-CephBackupDisk"
      
  2. On the restore cluster, take the following steps:
    1. Connect to the restore cluster server node.

      ssh <restore-cluster-node>
      
    2. Attach and mount the backup disk. The following is an Azure example:

      az vm disk attach \
        --resource-group "${resource_group_name}" \
        --vm-name "${node_name}" \
        --name "server0-CephBackupDisk"
      
    3. If /backup is currently mounted, unmount it.

      umount /backup
      
    4. Remove any existing /backup entry from /etc/fstab.

      sed -i.bak '/\/backup/d' /etc/fstab
      
    5. Identify the newly attached disk.

      lsblk
      

      Locate the unmounted disk name (for example, /dev/sdX).

    6. Rescan physical volumes.

      pvscan
      
    7. Rename the volume group (VG) on the attached disk.

      vg_uuid=$(pvs --noheadings -o vg_uuid /dev/sdX | awk '{print $1}')
      vgrename "$vg_uuid" backupvg_restored
      
    8. Activate the renamed volume group.

      vgchange -ay backupvg_restored
      
    9. Ensure the mount directory exists.

      mkdir -p /backup
      
    10. Mount the logical volume.

      mount /dev/backupvg_restored/backuplv /backup
      
    11. Add a persistent entry in /etc/fstab.

      echo '/dev/backupvg_restored/backuplv /backup xfs defaults 0 0' >> /etc/fstab
      

After you complete the procedure, you need to verify that the backup disk is correctly mounted and that the backup data is accessible on the restore cluster.

Take the following steps to confirm that the configuration is valid:

  1. Verify that the /backup mount point is active.

    df -h /backup
    
  2. Confirm that backup data is accessible under /backup.

  3. Proceed with restore operations as needed.
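The verification steps above can be combined into a small sketch. The mountpoint(8) utility from util-linux is assumed to be available; /backup is the path used throughout this guide.

```shell
#!/usr/bin/env bash
# Sketch: check whether /backup is an active mount point and, if so,
# show its usage and list the top-level backup data.

check_backup_mount() {
  # Prints "mounted" if the given path is an active mount point.
  if mountpoint -q "$1" 2>/dev/null; then
    echo "mounted"
  else
    echo "not-mounted"
  fi
}

status=$(check_backup_mount /backup)
echo "/backup is $status"

if [ "$status" = "mounted" ]; then
  df -h /backup          # size and usage of the backup volume
  ls -A /backup | head   # confirm backup data is visible
fi
```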

Note:
  • Always make sure that no active write operations are occurring on /backup before you detach the disk.
  • If you are using a non-Azure environment, replace the disk attach and detach commands with the equivalent commands for your platform (for example, aws ec2 detach-volume on AWS).
  • Renaming the volume group (vgrename) prevents name conflicts between clusters.
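For reference, a hedged sketch of the non-Azure equivalents mentioned in the note, using the AWS CLI. The volume ID, instance ID, and device name are placeholders; adjust them and the device naming convention to your platform.

```shell
# On the backup cluster side: detach the backup volume from the backup node.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# On the restore cluster side: attach the same volume to the restore node.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```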
