
Automation Suite on OpenShift installation guide
Installing and configuring the GitOps tool
Before proceeding with the OpenShift GitOps Operator installation and configuration, you must install OpenShift Service Mesh and provide all the required permissions to the uipathadmin service account.
- For the service mesh installation and configuration instructions, see Installing and configuring the service mesh.
- For the installation permissions, see Granting installation permissions.
You can deploy Automation Suite using either an OpenShift GitOps Operator instance dedicated to the UiPath® applications or a shared OpenShift GitOps Operator instance, if it is already installed and available on your cluster.
We recommend using a dedicated OpenShift GitOps Operator instance to install the Automation Suite applications. This method requires only minimal permissions on other namespaces and cluster resources.
For installation and access instructions, see the following sections:
Provisioning a dedicated GitOps instance
We recommend using a namespace that is different from <uipath> for ArgoCD.
If you use OpenShift GitOps version 1.15 or higher and install a dedicated instance of ArgoCD within the <uipath> namespace, the ArgoCD UI will not be accessible due to the network policies added to the <uipath> namespace by the Service Mesh Control Plane. To address this, you must add a network policy, as shown in the following example, to allow the ingress pods to reach the argocd-server pods in the <uipath> namespace.
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: argocd-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
  policyTypes:
    - Ingress
```
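To illustrate what this policy does, the following toy sketch (not a real policy engine) mimics the selector logic: traffic is admitted to argocd-server pods only from namespaces carrying the OpenShift ingress policy-group label, and pods the policy does not select are left unrestricted by it.

```python
# Toy illustration of the NetworkPolicy above; label keys mirror the manifest.
POD_SELECTOR = {"app.kubernetes.io/name": "argocd-server"}
FROM_NAMESPACE_SELECTOR = {"network.openshift.io/policy-group": "ingress"}

def selector_matches(selector, labels):
    """A matchLabels selector matches when every key/value pair is present."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(target_pod_labels, source_namespace_labels):
    """Return whether this policy lets traffic reach the target pod."""
    if not selector_matches(POD_SELECTOR, target_pod_labels):
        return True  # policy does not select this pod, so it imposes no restriction
    return selector_matches(FROM_NAMESPACE_SELECTOR, source_namespace_labels)
```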
To provision a dedicated OpenShift GitOps Operator instance, take the following steps:
- If the <argocd> namespace does not already exist, run the following commands to create it:

  ```bash
  oc get namespace <argocd> || oc new-project <argocd>
  oc project <argocd>
  ```

- Install the OpenShift GitOps Operator by following the instructions in Installing OpenShift GitOps.

- Create a new ArgoCD instance by following the instructions in Setting up a new ArgoCD instance.

  Note: In the spec section described in Enabling replicas for Argo CD server and repo server, you must add the following line:

  ```yaml
  server.route.enabled: true
  ```

- Patch the ArgoCD deployment:

  ```bash
  oc -n <argocd> patch deployment argocd-server \
    -p '{"spec":{"template":{"metadata":{"labels":{"maistra.io/expose-route":"true"}}}}}'
  ```

- Create a role so that ArgoCD can manage limit ranges. To create the role, take the following steps:

  - Save the following role configuration as a YAML file:

    ```yaml
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: limit-range-manager
      namespace: <uipath>
    rules:
      - apiGroups: ["*"]
        resources: ["limitranges"]
        verbs: ["get", "watch", "list", "patch", "update", "create"]
    ```

  - Apply the configuration by running the following command. Make sure to replace the <file_name.yaml> placeholder with the actual name of the YAML file:

    ```bash
    oc apply -f <file_name.yaml>
    ```

- Bind the limit-range-manager role to the argocd-argocd-application-controller service account:

  ```bash
  oc -n <uipath> create rolebinding limit-range-manager-binding \
    --role=limit-range-manager \
    --serviceaccount=<argocd>:argocd-argocd-application-controller
  ```

- If you enabled either Process Mining - Dapr or Automation Suite Robots, you must enable cluster-wide mode for ArgoCD by taking the following steps:

  - In <openshift-gitops>, edit the openshift-gitops-operator subscription resource to include the following environment variable:

    ```yaml
    ARGOCD_CLUSTER_CONFIG_NAMESPACES: <argocd>
    ```

  - Follow the instructions in Using an Argo CD instance to manage cluster-scoped resources.

- If the <uipath> namespace is not the same as the <argocd> namespace, make sure that the ArgoCD instance can manage the <uipath> namespace:

  ```bash
  oc label namespace <uipath> argocd.argoproj.io/managed-by=<argocd>
  ```

  After you apply the configuration, restart the ArgoCD application-controller (statefulset) and server (deployment).

- Perform the following steps only if the <uipath> namespace is not the same as the <argocd> namespace. Create a role to manage the applications in the <argocd> namespace:

  - Save the following role configuration as a YAML file:

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: uipath-application-manager
      namespace: <argocd>
    rules:
      - apiGroups:
          - argoproj.io
        resources:
          - applications
        verbs:
          - "*"
    ```

  - Apply the configuration by running the following command. Make sure to replace the <file_name.yaml> placeholder with the actual name of the YAML file:

    ```bash
    oc apply -f <file_name.yaml>
    ```

- Bind the uipath-application-manager role to the uipathadmin service account:

  ```bash
  oc project <argocd>
  oc create rolebinding uipath-application-manager \
    --role=uipath-application-manager --serviceaccount=<uipath>:uipathadmin
  ```

- Create a role so that the uipathadmin service account can create and edit the secret in the <argocd> namespace. The ArgoCD application requires this role to update the Helm secret. To create the role, take the following steps:

  - Save the following role configuration as a YAML file:

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: argo-secret-role
      namespace: <argocd>
    rules:
      - apiGroups: ["rbac.authorization.k8s.io"]
        resources: ["roles", "rolebindings"]
        verbs: ["*"]
      - apiGroups: ["*"]
        resources: ["secrets"]
        verbs: ["get", "watch", "list", "patch", "update", "create"]
    ```

  - Apply the configuration by running the following command. Make sure to replace the <file_name.yaml> placeholder with the actual name of the YAML file:

    ```bash
    oc apply -f <file_name.yaml>
    ```

- Bind the argo-secret-role role to the uipathadmin service account:

  ```bash
  oc project <argocd>
  oc create rolebinding secret-binding \
    --role=argo-secret-role --serviceaccount=<uipath>:uipathadmin
  ```

- Bind the namespace-reader role in the <argocd> namespace to the uipathadmin service account:

  ```bash
  oc project <argocd>
  oc create rolebinding namespace-reader-rolebinding \
    --clusterrole=namespace-reader-clusterrole --serviceaccount=<uipath>:uipathadmin
  ```
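Most commands in the steps above contain <argocd>, <uipath>, or <file_name.yaml> placeholders that must be substituted before you run them. The following sketch is a small hypothetical helper (not part of the product tooling) that fills in placeholders and fails loudly if any were missed, so a command is never run half-substituted:

```python
import re

def render(template, values):
    """Replace each <name> placeholder with its value; raise if any remain."""
    for name, value in values.items():
        template = template.replace(f"<{name}>", value)
    leftover = re.findall(r"<[a-z_.-]+>", template)
    if leftover:
        raise ValueError(f"unsubstituted placeholders: {leftover}")
    return template

# Example: substituting real namespace names into one of the commands above.
cmd = render(
    "oc label namespace <uipath> argocd.argoproj.io/managed-by=<argocd>",
    {"uipath": "uipath", "argocd": "argocd"},
)
```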
Accessing the dedicated ArgoCD instance
To access ArgoCD, take the following steps:
- Get the host URL:

  ```bash
  oc get routes argocd-server -n <argocd> -o jsonpath={.spec.host}; echo
  ```

- To log in, use admin as the username and run the following command to get the password:

  ```bash
  oc -n <argocd> get secrets argocd-cluster \
    -o "jsonpath={.data['admin\.password']}" | base64 -d; echo
  ```
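The command above reads the admin.password key from the secret's .data map, where Kubernetes stores values base64-encoded; the trailing base64 -d performs the decoding. A minimal illustration of that decoding step, with a made-up password value:

```python
import base64

# Kubernetes secrets store values base64-encoded in .data; `base64 -d` reverses this.
encoded = base64.b64encode(b"s3cret-admin-pw").decode()  # made-up example value
password = base64.b64decode(encoded).decode()
assert password == "s3cret-admin-pw"
```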
Configuring the private Helm repository and certificates in ArgoCD
To configure the Helm repository in ArgoCD, take the following steps:
- Log in to ArgoCD.
- Navigate to Settings > Repositories > +CONNECT REPO.
- Use VIA HTTPS for the connection method.
- Select Helm as the type.
- Provide a name.
- Choose default as the project.
- Provide the repository URL, username, password, and certificate info.
Important:
When adding the TLS client certificate on the +CONNECT REPO page, the TLS client certificate key becomes a mandatory field. To configure the registry certificate without the TLS client certificate key, take the following steps:
- Navigate to Settings > Repository certificates and known hosts > +ADD TLS CERTIFICATE.
- Add the repository name and TLS certificate in PEM format.
- Enable the OCI checkbox.
- Select Connect.
- Make sure that the connection status is Successful.
Configuring a shared GitOps instance
If your platform team has not already provisioned the shared OpenShift GitOps Operator instance, take the following installation and configuration steps:
- Create the <uipath> namespace:

  ```bash
  oc get namespace <uipath> || oc new-project <uipath>
  oc project <uipath>
  ```

- Install the OpenShift GitOps Operator by following the instructions in Installing OpenShift GitOps. This installation comes with the default ArgoCD instance, named openshift-gitops, in the <openshift-gitops> namespace.

- Enable cluster-wide mode for ArgoCD by taking the following steps:

  - In <openshift-gitops>, edit the openshift-gitops-operator subscription resource to include the following environment variable:

    ```yaml
    ARGOCD_CLUSTER_CONFIG_NAMESPACES: <openshift-gitops>
    ```

  - Follow the instructions in Using an ArgoCD instance to manage cluster-scoped resources.

- Make sure that the openshift-gitops ArgoCD instance can manage the <uipath> namespace:

  ```bash
  oc label namespace <uipath> argocd.argoproj.io/managed-by=openshift-gitops
  ```

  After you apply the configuration, restart the ArgoCD openshift-gitops-application-controller (statefulset) and openshift-gitops-server (deployment).

- Patch the ArgoCD deployment:

  ```bash
  oc -n <uipath> patch deployment argocd-server \
    -p '{"spec":{"template":{"metadata":{"labels":{"maistra.io/expose-route":"true"}}}}}'
  ```

- Create an ArgoCD project for the UiPath® application:

  ```yaml
  apiVersion: argoproj.io/v1alpha1
  kind: AppProject
  metadata:
    name: uipath
    namespace: <openshift-gitops>
  spec:
    description: AppProject to manage and deploy UiPath applications
    clusterResourceWhitelist:
      - group: '*'
        kind: '*'
    destinations:
      - namespace: <uipath>
        server: https://kubernetes.default.svc
      - namespace: <istio-system>
        server: https://kubernetes.default.svc
    sourceNamespaces:
      - <openshift-gitops>
    sourceRepos:
      - '*'
  ```

- Create a role so that ArgoCD can manage limit ranges. To create the role, take the following steps:

  - Save the following role configuration as a YAML file:

    ```yaml
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: limit-range-manager
      namespace: <uipath>
    rules:
      - apiGroups: ["*"]
        resources: ["limitranges"]
        verbs: ["get", "watch", "list", "patch", "update", "create"]
    ```

  - Apply the configuration by running the following command. Make sure to replace the <file_name.yaml> placeholder with the actual name of the YAML file:

    ```bash
    oc apply -f <file_name.yaml>
    ```

- Bind the limit-range-manager role to the openshift-gitops-argocd-application-controller service account:

  ```bash
  oc -n <uipath> create rolebinding limit-range-manager-binding \
    --role=limit-range-manager \
    --serviceaccount=<openshift-gitops>:openshift-gitops-argocd-application-controller
  ```

- Create a role to manage the applications in the <openshift-gitops> namespace:

  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: uipath-application-manager
    namespace: <openshift-gitops>
  rules:
    - apiGroups:
        - argoproj.io
      resources:
        - applications
      verbs:
        - "*"
  ```

- Bind the uipath-application-manager role to the uipathadmin service account:

  ```bash
  oc project <openshift-gitops>
  oc create rolebinding uipath-application-manager \
    --role=uipath-application-manager --serviceaccount=<uipath>:uipathadmin
  ```

- Create a role so that ArgoCD can create and edit the secret in the <openshift-gitops> namespace. The ArgoCD application requires this role to update the Helm secret. The following sample shows a valid configuration for the role:

  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: argo-secret-role
    namespace: <openshift-gitops>
  rules:
    - apiGroups: ["rbac.authorization.k8s.io"]
      resources: ["roles", "rolebindings"]
      verbs: ["*"]
    - apiGroups: ["*"]
      resources: ["secrets"]
      verbs: ["get", "watch", "list", "patch", "update", "create"]
  ```

- Bind the argo-secret-role role to the uipathadmin service account:

  ```bash
  oc project <openshift-gitops>
  oc create rolebinding secret-binding \
    --role=argo-secret-role --serviceaccount=<uipath>:uipathadmin
  ```

- Bind the namespace-reader role in the <openshift-gitops> namespace to the uipathadmin service account:

  ```bash
  oc project <openshift-gitops>
  oc create rolebinding namespace-reader-rolebinding \
    --clusterrole=namespace-reader-clusterrole --serviceaccount=<uipath>:uipathadmin
  ```
In addition to completing the steps to configure the shared ArgoCD instance for the Automation Suite installation, you must add the following parameters to the input.json file:
```
"argocd": {
  "project": "<uipath>"
},
```
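If you prefer to script this change, the fragment can be merged into the file with the standard json module. This is a minimal sketch: the fqdn key only stands in for whatever your input.json already contains, and <uipath> stays a placeholder to be replaced with your actual project name:

```python
import json

# Stand-in for the existing input.json content (example keys, not a real file).
config = {"fqdn": "automationsuite.example.com"}

# Add the argocd block required for a shared ArgoCD instance.
config["argocd"] = {"project": "<uipath>"}  # replace <uipath> with your project name

print(json.dumps(config, indent=2))
```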
Accessing the shared ArgoCD instance
To access ArgoCD, take the following steps:
- Get the host URL by running the following command:

  ```bash
  oc get routes openshift-gitops-server -n <openshift-gitops> -o jsonpath={.spec.host}; echo
  ```

- To log in, use admin as the username and run the following command to get the password:

  ```bash
  oc -n <openshift-gitops> get secrets openshift-gitops-cluster \
    -o "jsonpath={.data['admin\.password']}" | base64 -d; echo
  ```
Configuring the private Helm repository and certificates in ArgoCD
To configure the Helm repository in ArgoCD, take the following steps:
- Log in to ArgoCD.
- Navigate to Settings > Repositories > +CONNECT REPO.
- Use VIA HTTPS for the connection method.
- Select Helm as the type.
- Provide a name.
- Choose uipath as the project. uipath is the name of the ArgoCD project you created for the UiPath® application.
- Provide the repository URL, username, password, and certificate info.
Important:
When adding the TLS client certificate on the +CONNECT REPO page, the TLS client certificate key becomes a mandatory field. To configure the registry certificate without the TLS client certificate key, take the following steps:
- Navigate to Settings > Repository certificates and known hosts > +ADD TLS CERTIFICATE.
- Add the repository name and TLS certificate in PEM format.
- Enable the OCI checkbox.
- Select Connect.
- Make sure that the connection status is Successful.
Configuring ArgoCD for multiple installations in a single cluster
To configure ArgoCD for multiple Automation Suite installations in a single OpenShift cluster, follow these steps:
- Check that all ArgoCD services are up and running. You can monitor all the pods by running the following command:

  ```bash
  oc get pods -n <argocd>
  ```

- Once all services are up and running, run the following commands in order to patch ArgoCD's permissions. This allows ArgoCD to manage the different application namespaces where Automation Suite is installed:

  ```bash
  oc patch appprojects.argoproj.io default -n <argocd> --type='merge' -p '{"spec": {"sourceNamespaces": ["*"]}}'
  oc patch configmaps argocd-cmd-params-cm -n <argocd> --type='merge' -p '{"data": {"application.namespaces": "*"}}'
  oc rollout restart -n <argocd> deployment argocd-server
  oc rollout restart -n <argocd> statefulset argocd-application-controller
  ```
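The two oc patch commands above use merge patches (--type='merge'), in which keys present in the patch replace the corresponding keys on the live object. The following toy sketch (not the real API server logic) illustrates those semantics for the AppProject patch:

```python
def merge_patch(target, patch):
    """Minimal merge-patch semantics for dicts: patch keys overwrite target
    keys; nested dicts merge recursively. (Null-deletion is not handled.)"""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            merge_patch(target[key], value)
        else:
            target[key] = value
    return target

# The AppProject patch replaces spec.sourceNamespaces wholesale:
appproject = {"spec": {"sourceNamespaces": ["<argocd>"]}}
merge_patch(appproject, {"spec": {"sourceNamespaces": ["*"]}})
```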