- Overview
- Requirements
- Pre-installation
- Installation
- Post-installation
- Migration and upgrade
- Upgrading Automation Suite
- Migrating standalone products to Automation Suite
- Step 1: Restoring the standalone product database
- Step 2: Updating the schema of the restored product database
- Step 3: Moving the Identity organization data from standalone to Automation Suite
- Step 4: Backing up the platform database in Automation Suite
- Step 5: Merging organizations in Automation Suite
- Step 6: Updating the migrated product connection strings
- Step 7: Migrating standalone Orchestrator
- Step 8: Migrating standalone Insights
- Step 9: Migrating standalone Test Manager
- Step 10: Deleting the default tenant
- Performing a single tenant migration
- Migrating between Automation Suite clusters
- Migrating from Automation Suite on EKS/AKS to Automation Suite on OpenShift
- Monitoring and alerting
- Cluster administration
- Product-specific configuration
- Orchestrator advanced configuration
- Configuring Orchestrator parameters
- Configuring appSettings
- Configuring the maximum request size
- Overriding cluster-level storage configuration
- Configuring NLog
- Saving robot logs to Elasticsearch
- Configuring credential stores
- Configuring encryption key per tenant
- Cleaning up the Orchestrator database
- Skipping host library creation
- Troubleshooting
- The backup setup does not work due to a failure to connect to Azure Government
- Pods in the uipath namespace stuck when enabling custom node taints
- Unable to launch Automation Hub and Apps with proxy setup
- Robot cannot connect to an Automation Suite Orchestrator instance
- Log streaming does not work in proxy setups
- Velero backup fails with FailedValidation error
- Accessing FQDN returns RBAC: access denied error

Automation Suite on EKS/AKS installation guide
Last updated Mar 31, 2026
Log streaming does not work in proxy setups
Description
Log forwarding does not work in proxy setups because the proxy environment variables are not set on the logging pods.
Solution
- Set the http_proxy, https_proxy, and no_proxy environment variables (a sanity check for these exports follows the procedure). Example:

```bash
export http_proxy=http://<proxy>:3128
export https_proxy=http://<proxy>:3128
export no_proxy=<fqdn>,.<fqdn>,10.0.0.0/8,kerberossql.autosuitead.local,kerberospostgres.AUTOSUITEAD.LOCAL,rook-ceph-rgw-rook-ceph.rook-ceph.svc.cluster.local,localhost,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,.svc.cluster.local.,.uipath.svc.cluster.local,argocd-repo-server,istiod.istio-system.svc,logging-operator-logging-fluentd.logging.svc.cluster.local,.local,.cluster,ai-helper-svc,ai-pkgmanager-svc,ai-deployer-svc,ai-appmanager-svc,ai-trainer-svc,studioweb-backend,studioweb-frontend,studioweb-typecache
```
- Run the following script, which injects the proxy environment variables into the logging pods and restarts them. A verification sketch follows the script.
```bash
#!/bin/bash
set -euo pipefail

APP_NAME="logging"
NAMESPACE="argocd"
HTTP_PROXY="${http_proxy:-}"
HTTPS_PROXY="${https_proxy:-}"
NO_PROXY="${no_proxy:-}"

# Create a temporary JSON patch
PATCH_FILE=$(mktemp)

cat > "$PATCH_FILE" <<EOF
{
  "spec": {
    "source": {
      "helm": {
        "parameters": [
          { "name": "logging-operator.env[0].name", "value": "http_proxy" },
          { "name": "logging-operator.env[0].value", "value": "${HTTP_PROXY}" },
          { "name": "logging-operator.env[1].name", "value": "https_proxy" },
          { "name": "logging-operator.env[1].value", "value": "${HTTPS_PROXY}" },
          { "name": "logging-operator.env[2].name", "value": "no_proxy" },
          { "name": "logging-operator.env[2].value", "value": "${NO_PROXY}" },
          { "name": "logging-operator.logging.fluentd.envVars[0].name", "value": "http_proxy" },
          { "name": "logging-operator.logging.fluentd.envVars[0].value", "value": "${HTTP_PROXY}" },
          { "name": "logging-operator.logging.fluentd.envVars[1].name", "value": "https_proxy" },
          { "name": "logging-operator.logging.fluentd.envVars[1].value", "value": "${HTTPS_PROXY}" },
          { "name": "logging-operator.logging.fluentd.envVars[2].name", "value": "no_proxy" },
          { "name": "logging-operator.logging.fluentd.envVars[2].value", "value": "${NO_PROXY}" },
          { "name": "logging-operator.logging.fluentbit.envVars[0].name", "value": "http_proxy" },
          { "name": "logging-operator.logging.fluentbit.envVars[0].value", "value": "${HTTP_PROXY}" },
          { "name": "logging-operator.logging.fluentbit.envVars[1].name", "value": "https_proxy" },
          { "name": "logging-operator.logging.fluentbit.envVars[1].value", "value": "${HTTPS_PROXY}" },
          { "name": "logging-operator.logging.fluentbit.envVars[2].name", "value": "no_proxy" },
          { "name": "logging-operator.logging.fluentbit.envVars[2].value", "value": "${NO_PROXY}" }
        ]
      }
    }
  }
}
EOF

# Patch the Argo CD Application
kubectl patch application "$APP_NAME" -n "$NAMESPACE" --type merge --patch-file "$PATCH_FILE"

# Cleanup
rm -f "$PATCH_FILE"

echo "Patched Argo CD Application '$APP_NAME' with proxy parameters."
echo "Restarting logging pods..."

kubectl rollout restart deploy/logging-logging-operator -n logging
kubectl rollout restart sts/logging-fluentd -n logging
kubectl rollout restart ds/logging-fluentbit -n logging

echo "Rollout restart completed"
```
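The script reads http_proxy, https_proxy, and no_proxy from the current shell and falls back to empty strings when they are unset, so it is worth confirming the exports before running it. A minimal sanity check:

```bash
# Empty output here means the patch above would set empty proxy values.
env | grep -Ei '^(http_proxy|https_proxy|no_proxy)='
```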
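After the restarts, you can confirm that the Helm parameters were recorded and that the variables reached the pods. A verification sketch, assuming the resource names used by the script above and that jq is available on the machine running kubectl:

```bash
# Wait for the restarted workloads to become ready
kubectl -n logging rollout status deploy/logging-logging-operator --timeout=300s
kubectl -n logging rollout status sts/logging-fluentd --timeout=300s
kubectl -n logging rollout status ds/logging-fluentbit --timeout=300s

# Inspect the Helm parameters recorded on the Argo CD Application (requires jq)
kubectl -n argocd get application logging -o json | jq '.spec.source.helm.parameters'

# Confirm the proxy variables are present in a running fluentd pod
kubectl -n logging exec sts/logging-fluentd -- env | grep -Ei '^(http_proxy|https_proxy|no_proxy)='
```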