
Automation Suite on Linux installation guide

Last updated Mar 26, 2026

How to schedule Ceph backup and restore data

For configurations using a single-node RKE2 setup with in-cluster storage, a prerequisite check validates that you have provisioned a minimum 512 GB additional disk to store Ceph data backups.
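You can approximate this prerequisite check yourself before installation. A minimal sketch, assuming the disk is visible to lsblk (the device path is a placeholder):

```shell
# Hedged pre-check sketch: confirm the additional disk is at least 512 GB.
# Replace /dev/<disk> with your device, for example /dev/sdb.
REQUIRED_BYTES=$((512 * 1024 * 1024 * 1024))
DISK_BYTES=$(lsblk -b -dn -o SIZE "/dev/<disk>" 2>/dev/null || echo 0)
if [ "$DISK_BYTES" -ge "$REQUIRED_BYTES" ]; then
  echo "disk meets the 512 GB requirement"
else
  echo "disk is smaller than 512 GB or was not found"
fi
```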

Important:

The Ceph backup CronJob must be configured only after the cluster is fully installed and running.

To partition the disk for Ceph backup, run the following command, replacing <disk-name> with the name of your disk:

uipathctl rke2 disk --backup-disk-name <disk-name>

The following procedures ensure that you have a reliable backup and restore process for your Ceph storage in a single-node RKE2 cluster setup.

Scheduling hourly Ceph backups

The required Helm chart and configuration files for the backup CronJob are available in the official UiPath support tools GitHub repository.

After partitioning the disk, take the following steps to configure a CronJob that backs up data to the specified disk every hour.

  1. Grant permissions on the backup directory so that the backup pod can write data to it, using the following command:

    chown 65534:65534 /backup
    
  2. Navigate to the backup directory using the following command:

    cd Scripts/24.10/ceph-singlenode-backup/backup
    
  3. Edit the values.yaml file to configure the container registry according to your setup.

  4. Install the backup Helm chart using the following command:

    helm install ceph-backup . -n rook-ceph
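For step 3, the exact keys depend on the chart in the UiPath support tools repository; a registry override in values.yaml typically looks something like the following sketch (key names and registry URL are illustrative, not taken from the chart):

```yaml
# Illustrative values.yaml excerpt; confirm the actual key names against
# the chart in the UiPath support tools repository
image:
  registry: registry.example.com/uipath
```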
    

This deploys the backup CronJob, which periodically backs up your Ceph data to the specified disk.
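To confirm the deployment, you can list the CronJob and any jobs it has already spawned. A non-fatal check sketch (the fallback message keeps the commands from failing when the cluster is unreachable):

```shell
# Sanity check: confirm the backup CronJob exists and see its spawned jobs
CRONJOBS=$(kubectl get cronjob -n rook-ceph 2>/dev/null || echo "cluster not reachable")
JOBS=$(kubectl get jobs -n rook-ceph 2>/dev/null || echo "cluster not reachable")
echo "$CRONJOBS"
echo "$JOBS"
```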

If you encounter an error message indicating that helm cannot be found, use the full path to the bundled binary:

/opt/UiPathAutomationSuite/UipathInstaller/bin/helm install ceph-backup . -n rook-ceph
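Alternatively, you can add the installer's bin directory to PATH for the current shell session so that plain helm commands resolve:

```shell
# Put the bundled Helm binary on PATH for this shell session
export PATH="$PATH:/opt/UiPathAutomationSuite/UipathInstaller/bin"
# Confirm which helm binary will be used (non-fatal if none is found)
command -v helm || echo "helm not found on PATH"
```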

Restoring Ceph backup data

To restore data from the Ceph backup, take the following steps:

  1. Navigate to the restore directory using the following command:

    cd Scripts/24.10/ceph-singlenode-backup/restore
    
  2. Edit the image attribute in the objectstore-restore-jobs.yaml file to use the correct container registry according to your setup.

  3. Apply the restore job using the following command:

    kubectl apply -f objectstore-restore-jobs.yaml
    
  4. Monitor the restore operation by checking the logs of the restore job, using the following command:

    kubectl logs job/restore-objectstore-job -n rook-ceph
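If you prefer to block until the restore finishes rather than tailing logs, kubectl wait can do that. A sketch (the timeout is an arbitrary example, and the fallback keeps the command non-fatal outside the cluster node):

```shell
# Wait for the restore job to reach the Complete condition
RESULT=$(kubectl wait --for=condition=complete job/restore-objectstore-job \
  -n rook-ceph --timeout=30m 2>/dev/null || echo "job not complete or cluster unreachable")
echo "$RESULT"
```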
    
