Automation Suite on OpenShift installation guide
Storage
In addition to Microsoft SQL Server, the Automation Suite cluster requires a storage component to store files. Automation Suite requires objectstore and block/file storage, depending on the services you choose to enable.
Storage estimate for each Automation Suite component
UiPath® platform services
The following services require a storage component. Storage for these services is necessary only if you opted to enable them as part of the Automation Suite installation or later.
| Service | Storage type | Purpose and estimate |
|---|---|---|
| Orchestrator | Objectstore | Typically, a package is 5 MB, and buckets, if any, are less than 1 MB. A mature enterprise deploys around 10 GB of packages and 12 GB of queues. |
| Action Center | Objectstore | Typically, a document takes 0.15 MB, and the forms to fill take an additional 0.15 KB. In a mature enterprise, this can total 4 GB. |
| Test Manager | Objectstore | Typically, all files and attachments add up to approximately 5 GB. |
| Insights | Blockstore | 2 GB is required for enablement, with the storage footprint growing with the number of dashboards. A well-established enterprise-scale deployment requires another few GB for all the dashboards. Approximately 10 GB of storage should be sufficient. |
| Integration Service | Objectstore | Connectors vary in size, but installing all the available connectors should consume less than 100 MB. Trigger events vary in number based on usage, but 5 GB should be sufficient. |
| Studio Web | Filestore | |
| Apps | Objectstore | Typically, the database takes approximately 5 GB, and a typical complex app consumes about 15 MB. |
| AI Center | Objectstore / Filestore | A typical, established installation consumes 8 GB for five packages and an additional 1 GB for the datasets. A pipeline may consume an additional 50 GB of block storage, but only when actively running. |
| Document Understanding | Objectstore | In a mature deployment, 12 GB go to the ML model, 17 GB to the OCR, and 50 GB to all documents stored. |
| Automation Suite Robots | Filestore | Typically, a mature enterprise deploys around 10 GB of packages. |
| Process Mining | Objectstore | The minimal footprint is only used to store the SQL files. Approximately 1 GB of storage should be enough in the beginning. |
| Context Grounding | Objectstore / Filestore | |
| LLM Observability | Objectstore | |
| Solutions | Filestore | |
Objectstore
Automation Suite supports the following objectstores:
- Azure blob storage
- AWS S3 storage
- S3-compatible objectstore. OpenShift provides OpenShift Data Foundation, a Ceph-based, S3-compatible objectstore. To install OpenShift Data Foundation, see Introduction to OpenShift Data Foundation.
Configuring OpenShift Data Foundation
- To create an objectstore bucket on OpenShift Data Foundation (ODF), you must create an ObjectBucketClaim for each bucket, corresponding to each product that you plan to install.

  Important: When using ODF as the objectstore on OpenShift cluster versions earlier than 4.19, CORS cannot be configured. This limitation can prevent services from working correctly with ODF buckets. To ensure compatibility, set "disable_presigned_url": true in your input.json file. If you encounter an error while applying this setting, refer to the Troubleshooting section.

  The following sample shows a valid ObjectBucketClaim. This configuration is required only if you create the buckets in OpenShift Data Foundation.
```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: BUCKET_NAME
  namespace: <uipath>
spec:
  bucketName: BUCKET_NAME
  storageClassName: openshift-storage.noobaa.io
```
- Applying the manifest creates a secret named BUCKET_NAME in the <uipath> namespace. The secret contains the access_key and the secret_key for that bucket. To query the access_key and the secret_key, run the following commands:

```bash
oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_ACCESS_KEY_ID} | base64 -d; echo
oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_SECRET_ACCESS_KEY} | base64 -d; echo
```

- To find the host or FQDN to access the bucket, run the following command:

```bash
oc get routes s3 -o jsonpath={.spec.host} -n openshift-storage; echo
```
ODF includes a NooBaa deployment with pre-defined CPU and memory limits. These limits can become a bottleneck for the Automation Suite and result in S3 request limit issues. To mitigate this risk, you must configure ODF with higher CPU and memory limits for the NooBaa deployment. For details, refer to the Troubleshooting section.
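For reference, the disable_presigned_url setting mentioned above is a key in the input.json file; a minimal fragment might look like the following (its placement among the other keys is illustrative only, so check it against your own input.json):

```json
{
  "disable_presigned_url": true
}
```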
Configuring the CORS policy
Additionally, you may have to enable the following CORS policy at the storage account or bucket level if you encounter any CORS-related errors during S3 connections while using the Automation Suite cluster.
Make sure to replace {{fqdn}} with the FQDN of the Automation Suite cluster in the following CORS policy.
The following sample shows the CORS policy in JSON format:
```json
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "HEAD",
            "DELETE",
            "PUT"
        ],
        "AllowedOrigins": [
            "https://{{fqdn}}"
        ],
        "ExposeHeaders": [
            "etag",
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]
```
The following sample shows the CORS policy in XML format:
```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>{{fqdn}}</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
    <ExposeHeader>x-amz-request-id</ExposeHeader>
    <ExposeHeader>x-amz-id-2</ExposeHeader>
    <ExposeHeader>etag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
```
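As a sketch for S3-compatible stores such as ODF, the JSON policy can be written to a file and applied with the AWS CLI. Note that `aws s3api put-bucket-cors` expects the rule list wrapped in a CORSRules key; the FQDN, bucket name, and endpoint below are placeholders to substitute with your own values.

```shell
# Placeholder FQDN -- replace with your Automation Suite cluster FQDN.
FQDN="automationsuite.example.com"

# Write the CORS rules in the shape the AWS CLI expects (CORSRules wrapper).
cat > cors.json <<EOF
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["POST", "GET", "HEAD", "DELETE", "PUT"],
      "AllowedOrigins": ["https://${FQDN}"],
      "ExposeHeaders": ["etag", "x-amz-server-side-encryption", "x-amz-request-id", "x-amz-id-2"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

# Sanity-check the generated JSON before applying it.
python3 -m json.tool cors.json > /dev/null && echo "cors.json is valid"

# Apply against the ODF S3 route using the bucket credentials retrieved
# earlier (uncomment to run against a live cluster):
# aws s3api put-bucket-cors --bucket BUCKET_NAME \
#   --endpoint-url "https://$(oc get routes s3 -o jsonpath={.spec.host} -n openshift-storage)" \
#   --cors-configuration file://cors.json
```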
Configuration
To configure the objectstore, see External Objectstore Configuration.
The Automation Suite installer can create the containers/buckets for you if you grant it the required permissions. Alternatively, you can provision the required containers/buckets before the installation and provide their information to the installer.
Storage requirements
| Storage | Requirement |
|---|---|
| Objectstore | 500 GB |
The size of the objectstore depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate objectstore estimate initially during the installation. You can start with an objectstore size of 350 GB to 500 GB. To understand the usage of the objectstore, see Storage estimate for each Automation Suite component.
- As your automation scales, you may need to account for the increase in your objectstore size.
- If you use buckets created in OpenShift Data Foundation, then you must explicitly provision the buckets and provide the details for each product in the input.json file. For more information on how to provide bucket information explicitly in the input.json file, see the Product-specific configuration section.
Block storage
Block storage must have CSI drivers configured with the Kubernetes storage classes.
The following table provides details of the block storage, storage class, and provisioner:
| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EBS Volumes | ebs-sc | ebs.csi.aws.com |
| Azure | Azure Managed Disk | managed-premium (Premium LRS) | disk.csi.azure.com |
| OpenShift | OpenShift Data Foundation | ocs-storagecluster-ceph-rbd | openshift-storage.rbd.csi.ceph.com |
It is not mandatory that you use the storage solutions mentioned in this section. If you use a different storage solution, then you must use the corresponding StorageClass that your storage vendor provides.
Configuration
You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.
You must pass the name of the storage class you created for your cluster to the storage_class parameter in the input.json file.
- In OpenShift, CSI drivers are installed automatically, and the storage classes are created while installing OpenShift Data Foundation. If these storage classes are not configured, you must configure them before the Automation Suite installation.
- You must make the storage class for the block storage the default one, as shown in the following example.
Example
The following example shows how to provide the storage class to the input.json file during installation, using the storage class names from the table above:

| Configuration | input.json |
|---|---|
| Azure | "storage_class": "managed-premium" |
| AWS | "storage_class": "ebs-sc" |
| OpenShift | "storage_class": "ocs-storagecluster-ceph-rbd" |
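To make the block storage class the cluster default, you can set the standard Kubernetes is-default-class annotation on it. The fragment below is an illustrative sketch, assuming the ODF RBD class; the provisioner and any parameters must match the class as created by your storage installation.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-ceph-rbd
  annotations:
    # Standard Kubernetes annotation marking this class as the default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openshift-storage.rbd.csi.ceph.com
# ...parameters, reclaimPolicy, and volumeBindingMode as created by ODF...
```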
Storage requirements
| Configuration | Requirement |
|---|---|
| Block storage | 50 GB |
The size of the block store depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate estimate initially during the installation. You can start with a block storage size of 50 GB. To understand the usage of the block store, see Storage estimate for each Automation Suite component.
As your automation scales, you may need to account for the increase in your block storage size.
File storage
File storage must have CSI drivers configured with the Kubernetes storage classes.
File storage is required for the components that do not require replication. However, if you do not have a file storage solution, you can replace file storage with block storage.
| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EFS | efs-sc | efs.csi.aws.com |
| Azure | Azure Files | azurefile-csi-premium | file.csi.azure.com |
| OpenShift | OpenShift Data Foundation | ocs-storagecluster-cephfs* | openshift-storage.cephfs.csi.ceph.com |
It is recommended to configure ZRS (or replication) for Studio Web storage to ensure high availability. Worker node disks should have at least 2300 IOPS, and the StorageCluster should be configured using the Performance Profile and SKU Profile with a minimum of 5000 IOPS.
It is not mandatory that you use the storage solutions mentioned in this section. If you use a different storage solution, then you must use the corresponding StorageClass that your storage vendor provides.
Configuration
You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.
You must pass the name of the storage class you created for your cluster to the storage_class_single_replica parameter in the input.json file.
In OpenShift, CSI drivers are installed automatically and the storage class is created while installing OpenShift Data Foundation. If the storage class is not configured, you must configure it before the Automation Suite installation.
Example
The following example shows how to provide the storage class to the input.json file during the installation, using the storage class names from the table above:

| Configuration | input.json |
|---|---|
| Azure | "storage_class_single_replica": "azurefile-csi-premium" |
| AWS | "storage_class_single_replica": "efs-sc" |
| OpenShift | "storage_class_single_replica": "ocs-storagecluster-cephfs" |
The storage class for the file share must set permissions of 700 on directories and files. Additionally, UID and GID must be set to 1000 in Azure, and gidRangeStart and gidRangeEnd to 1000 and 2000, respectively, in AWS.
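As an illustrative sketch of these requirements on Azure (not the verbatim class created by the CSI driver), an Azure Files storage class could carry the UID/GID and permission settings as standard azurefile mount options:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-premium
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS
mountOptions:
  - uid=1000        # required UID on Azure
  - gid=1000        # required GID on Azure
  - dir_mode=0700   # 700 permissions on directories
  - file_mode=0700  # 700 permissions on files
reclaimPolicy: Delete
volumeBindingMode: Immediate
```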
Storage requirements
| Storage | Requirement |
|---|---|
| File storage | 510 GB |
The size of the file store depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate estimate initially during the installation. However, approximately 510 GB of storage should be enough to run ten concurrent training pipelines and for Automation Suite Robots. To understand the usage of the filestore, see Storage estimate for each Automation Suite component.
As your automation scales, you may need to account for an increase in the size of your file storage.