- Overview
- Requirements
- Pre-installation
- Preparing the installation
- Installing and configuring the service mesh
- Downloading the installation packages
- Configuring the OCI-compliant registry
- Granting installation permissions
- Installing and configuring the GitOps tool
- Deploying Redis through OperatorHub
- Applying miscellaneous configurations
- Running uipathctl
- Installation
- Post-installation
- Migration and upgrade
- Upgrading Automation Suite
- Migrating standalone products to Automation Suite
- Step 1: Restoring the standalone product database
- Step 2: Updating the schema of the restored product database
- Step 3: Moving the Identity organization data from standalone to Automation Suite
- Step 4: Backing up the platform database in Automation Suite
- Step 5: Merging organizations in Automation Suite
- Step 6: Updating the migrated product connection strings
- Step 7: Migrating standalone Orchestrator
- Step 8: Migrating standalone Insights
- Step 9: Migrating standalone Test Manager
- Step 10: Deleting the default tenant
- Performing a single tenant migration
- Migrating between Automation Suite clusters
- Monitoring and alerting
- Cluster administration
- Product-specific configuration
- Orchestrator advanced configuration
- Configuring Orchestrator parameters
- Configuring appSettings
- Configuring the maximum request size
- Overriding cluster-level storage configuration
- Configuring NLog
- Saving robot logs to Elasticsearch
- Configuring credential stores
- Configuring encryption key per tenant
- Cleaning up the Orchestrator database
- Skipping host library creation
- Troubleshooting
- Log streaming does not work in proxy setups
- 500 errors and rate limiting on S3 requests in ODF
- Configuring CSI driver tolerations for ODF 4.20

Automation Suite on OpenShift installation guide
500 errors and rate limiting on S3 requests in ODF
Description
Services that send S3 requests through OpenShift Data Foundation (ODF) can encounter rate limiting or 500 Internal Server Error responses. In ODF, storage management is handled by NooBaa. When the number of requests surges beyond a threshold, NooBaa allocates additional memory. If that allocation exceeds the CPU or memory limits configured on the NooBaa deployment, the pod can be terminated by the out-of-memory (OOM) killer. This termination causes service interruptions, request throttling, and error responses.
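To confirm that the symptoms above are caused by OOM kills rather than another failure mode, you can inspect the last termination reason of the NooBaa pods. The following is a minimal sketch; the label selector `app=noobaa` and the helper name `check_noobaa_oom` are assumptions for illustration, so verify the labels used on your cluster.

```shell
# Sketch: list NooBaa pods in the openshift-storage namespace together with
# the last termination reason of their containers. An "OOMKilled" entry
# confirms the out-of-memory terminations described above.
# Assumption: NooBaa pods carry the label "app=noobaa" -- adjust if needed.
check_noobaa_oom() {
  oc get pods -n openshift-storage -l app=noobaa \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'
}
```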
Solution
To address the issue, increase the CPU and memory limits and requests for the NooBaa deployment so that it can absorb workload spikes without being terminated. Raising the limits is the main fix; raising the requests helps the scheduler reserve adequate resources for the pod.
Take the following steps:
- Retrieve the relevant BackingStore by running the following command:
oc get backingstores.noobaa.io -n openshift-storage
- Patch the BackingStore to increase CPU and memory resource limits by running the following command:
oc patch BackingStore -n openshift-storage <backing-store-name> --type='merge' -p '{ "spec": { "pvPool": { "resources": { "limits": { "cpu": "1000m", "memory": "4000Mi" }, "requests": { "cpu": "500m", "memory": "500Mi" } } } } }'
The BackingStore may take several minutes to return to an active state after applying the patch.
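Since the BackingStore can take several minutes to recover, it may help to poll its status until it reports a healthy phase. The sketch below assumes the BackingStore custom resource exposes a `status.phase` field that reaches `Ready` when healthy; the function name and timeout are illustrative, not part of the product documentation.

```shell
# Sketch: poll the BackingStore until its status.phase reports "Ready",
# checking every 10 seconds for up to 10 minutes.
# Assumption: a healthy BackingStore reports status.phase == "Ready".
wait_for_backingstore() {
  local name="$1" ns="openshift-storage" phase
  for _ in $(seq 1 60); do
    phase=$(oc get backingstore "$name" -n "$ns" -o jsonpath='{.status.phase}')
    if [ "$phase" = "Ready" ]; then
      echo "BackingStore $name is Ready"
      return 0
    fi
    sleep 10
  done
  echo "BackingStore $name did not become Ready in time" >&2
  return 1
}
```

Invoke it with the backing store name retrieved earlier, for example `wait_for_backingstore <backing-store-name>`.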