UiPath CLI user guide
This page shows how to run a UiPath Test Manager test set from the CLI as part of a CI pipeline — and, just as importantly, how to read the verdict correctly. The uip tm tool splits a test run into three verbs for a reason; using only one of them can produce a passing build even when tests have failed.
The golden rule: launch → wait → verify
No single uip tm verb exits non-zero because tests failed. Getting a pipeline to fail on a red test run takes three commands:
- Launch — uip tm testset execute. Queues the run and exits 0 as soon as Orchestrator accepts the request. Status: "Running" in the response reflects the state at launch, not the final outcome.
- Block — uip tm wait. Polls until the execution reaches a terminal state (Passed, Failed, Cancelled). Exits 0 when it reaches any terminal state — because "finished" is the success signal for wait. Exits 2 on --timeout (a domain-specific reuse of the authentication-error slot, so scripts can branch on "took too long" without parsing text).
- Verify — uip tm report get. Reads Data.Failed (and Passed, Skipped, PassRate). Exits 0 whenever it successfully fetches the summary, regardless of pass/fail — the script is responsible for turning Data.Failed > 0 into a non-zero exit.
Shortcut patterns exist (uip or jobs start --wait-for-completion, for example, does wait+verify on a single job), but not for test sets. Always write the three steps.
Minimum viable snippet
#!/usr/bin/env bash
set -euo pipefail
# 1. Launch
EXECUTION_ID=$(uip tm testset execute \
--test-set-key DEMO:10 \
--output-filter "Data.ExecutionId" \
--output plain)
# 2. Block (default timeout: 30 min; adjust with --timeout)
if uip tm wait \
  --execution-id "$EXECUTION_ID" \
  --project-key DEMO \
  --timeout 1800; then
  : # reached a terminal state in time
else
  code=$?  # note: no ! in the condition; "if ! cmd" would make $? report 0 here and hide wait's real exit code
  if [ "$code" -eq 2 ]; then
    echo "test run did not finish within 30 minutes" >&2
    exit 2
  fi
  echo "wait failed (exit $code)" >&2
  exit "$code"
fi
# 3. Verify
FAILED=$(uip tm report get \
--execution-id "$EXECUTION_ID" \
--project-key DEMO \
--output-filter "Data.Failed" \
--output plain)
if [ "$FAILED" -gt 0 ]; then
echo "$FAILED test case(s) failed" >&2
exit 1
fi
echo "all tests passed"
--output plain returns the bare value for a scalar filter result; this is the safest way to capture a field into a shell variable without parsing JSON. See Scripting patterns — reading values out of the envelope.
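To see the contrast concretely, compare the two calls below; the envelope shown in the comment is illustrative, not an exact schema:
# Default output: the full JSON envelope, e.g. {"Data": {"Passed": 12, "Failed": 0, ...}}
uip tm report get --execution-id "$EXECUTION_ID" --project-key DEMO
# Scalar filter + plain output: just the bare number, ready for a shell variable
FAILED=$(uip tm report get --execution-id "$EXECUTION_ID" --project-key DEMO \
  --output-filter "Data.Failed" --output plain)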
Picking inputs to the commands
uip tm testset execute needs the test set key, not its UUID. The key has the form <ProjectKey>:<Number> — for example, DEMO:10. Two ways to find it:
# From the Test Manager UI — it is the "Key" column in the test set list.
# Or from the CLI:
uip tm testset list --project-key DEMO --filter smoke \
--output-filter "Data[*].{key:TestSetKey, name:Name}"
--project-key is mandatory on list; execute derives the project from the test set key's prefix (DEMO:10 → project DEMO).
If you need the test set's internal ID for uip tm execution list --test-set-id, use the Id field on the same list output — not the TestSetKey.
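One listing can surface both identifiers side by side. The filter shape mirrors the earlier example; whether execution list also takes --project-key is an assumption here:
uip tm testset list --project-key DEMO \
  --output-filter "Data[*].{id:Id, key:TestSetKey}"
# then, with the Id value (say 42):
uip tm execution list --test-set-id 42 --project-key DEMO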
Picking which cases run
uip tm testset execute --execution-type controls which test cases inside the set run:
- automated (default) — only automated test cases.
- manual — only manual test cases.
- mixed — both.
- none — no type filter.
For CI, keep the default (automated) — manual cases require a human to mark pass/fail and will sit in InProgress until --timeout.
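For the rare supervised run that should also dispatch manual cases, the flag rides along at launch (values as listed above):
uip tm testset execute --test-set-key DEMO:10 --execution-type mixed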
Passing parameter overrides
If the test set exposes parameters (for example, a target URL or an account ID), override them at launch with --input-path:
cat > ./params.json <<'EOF'
[
{"name": "TargetUrl", "type": "String", "value": "https://staging.example.com"},
{"name": "AccountId", "type": "String", "value": "acme-staging"}
]
EOF
uip tm testset execute --test-set-key DEMO:10 --input-path ./params.json
Overrides are matched against the test set's current parameter definitions by name (case-insensitive) and, when present, type. If the server reports no parameter definitions, the inputs are sent as-is.
Reading the verdict in detail
uip tm report get is the summary reader. Default output gives you the scorecard; pass --query (a jq-style filter) to extract a subset in one call:
# Pretty scorecard
uip tm report get --execution-id "$EXECUTION_ID" --project-key DEMO
# Script-friendly — JSON with just the counts
uip tm report get --execution-id "$EXECUTION_ID" --project-key DEMO \
--query '{total: .TotalTests, passed: .Passed, failed: .Failed}'
The Data envelope has these fields (full schema in the report get reference):
- TotalTests, Passed, Failed, Skipped, PassRate ("80%"), Duration (HH:MM:SS).
- FailedTests — one entry per failed test case, with TestCaseName and an Error string (either the log's info field, or a concatenated list of failed assertion messages).
When you need a JUnit-XML file that your CI's test dashboard can ingest, use uip tm result download instead of (or alongside) report get.
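A sketch of that export: this page does not document result download's flags, so the pair below simply mirrors the other tm verbs, and the redirect assumes the XML is written to stdout. Check uip tm result download --help for the real interface.
uip tm result download \
  --execution-id "$EXECUTION_ID" \
  --project-key DEMO > junit-results.xml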
Surfacing failed cases in CI logs
Turn the FailedTests array into a human-readable line per failure before exiting the pipeline:
if [ "$FAILED" -gt 0 ]; then
uip tm report get \
--execution-id "$EXECUTION_ID" \
--project-key DEMO \
--output-filter "Data.FailedTests[*].[TestCaseName, Error]" \
--output plain >&2
echo "$FAILED test case(s) failed" >&2
exit 1
fi
if [ "$FAILED" -gt 0 ]; then
uip tm report get \
--execution-id "$EXECUTION_ID" \
--project-key DEMO \
--output-filter "Data.FailedTests[*].[TestCaseName, Error]" \
--output plain >&2
echo "$FAILED test case(s) failed" >&2
exit 1
fi
For deeper failure analysis — assertion-by-assertion detail on a single failed case — use uip tm testcaselog list-assertions on the log IDs returned by uip tm execution list-testcaselogs --only-failed.
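Stitched together, that drill-down might look like the sketch below; the Data[*].Id filter and the --log-id flag name are assumptions to verify against each command's --help:
for LOG_ID in $(uip tm execution list-testcaselogs \
    --execution-id "$EXECUTION_ID" --project-key DEMO --only-failed \
    --output-filter "Data[*].Id" --output plain); do
  uip tm testcaselog list-assertions --log-id "$LOG_ID" --project-key DEMO
done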
Retrying flaky cases
If a run has failures that might be flaky (timing, environment), retry only the failed cases in place:
uip tm execution retry \
--execution-id "$EXECUTION_ID" \
--project-key DEMO
uip tm execution retry reuses the same execution ID — no new one is created. If there are no failures to retry, it prints an informational message and exits 0 (intentionally not an error).
After retry, loop back through wait + report get on the same execution ID.
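In script form, one retry pass reuses the verbs already shown; whether to loop more than once is a pipeline policy choice:
# Retry only the failed cases in place (same execution ID), then re-verify
uip tm execution retry --execution-id "$EXECUTION_ID" --project-key DEMO
uip tm wait --execution-id "$EXECUTION_ID" --project-key DEMO --timeout 1800
FAILED=$(uip tm report get --execution-id "$EXECUTION_ID" --project-key DEMO \
  --output-filter "Data.Failed" --output plain)
if [ "$FAILED" -gt 0 ]; then
  echo "$FAILED test case(s) still failing after retry" >&2
  exit 1
fi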
Common pitfalls
- Parsing testset execute output to decide pass/fail. The Status field at launch is typically Running — it means the run was queued, not that every test passed. Always call wait and then report get.
- Forgetting the --timeout on wait. Default is 1800 seconds (30 minutes). Pass 0 to wait indefinitely (rarely what you want in CI); pass a larger number for long suites.
- Treating exit 2 from wait as an auth failure. On wait specifically, 2 means timeout, not AuthenticationError — see uip tm wait exit codes.
- Using testset verbs to create tests. Test cases and their links to Orchestrator automations come from uip tm testcase; testset only groups them.
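The first pitfall in shell form, as a sketch (Data.Status is assumed to be the envelope path for the Status field shown in the launch response):
# WRONG: exits 0 and prints "Running" even if every case later fails
STATUS=$(uip tm testset execute --test-set-key DEMO:10 \
  --output-filter "Data.Status" --output plain)
# RIGHT: launch, then wait, then read Data.Failed; see the full example below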
Full CI-ready example
A bash snippet for a pipeline step — fails the build when any test fails, times out, or errors:
#!/usr/bin/env bash
set -euo pipefail
TEST_SET_KEY="${TEST_SET_KEY:?TEST_SET_KEY is required}"
PROJECT_KEY="${PROJECT_KEY:?PROJECT_KEY is required}"
TIMEOUT="${TEST_TIMEOUT:-1800}"
EXECUTION_ID=$(uip tm testset execute \
--test-set-key "$TEST_SET_KEY" \
--output-filter "Data.ExecutionId" \
--output plain)
echo "started execution $EXECUTION_ID" >&2
if uip tm wait \
  --execution-id "$EXECUTION_ID" \
  --project-key "$PROJECT_KEY" \
  --timeout "$TIMEOUT"; then
  : # reached a terminal state in time
else
  code=$?  # note: no ! in the condition; "if ! cmd" would make $? report 0 here and hide wait's real exit code
  case "$code" in
    2) echo "execution $EXECUTION_ID did not finish within ${TIMEOUT}s" >&2; exit 2 ;;
    *) echo "wait failed with exit $code" >&2; exit "$code" ;;
  esac
fi
# Get the scorecard and the names of any failed cases in one shot
uip tm report get \
--execution-id "$EXECUTION_ID" \
--project-key "$PROJECT_KEY"
FAILED=$(uip tm report get \
--execution-id "$EXECUTION_ID" \
--project-key "$PROJECT_KEY" \
--output-filter "Data.Failed" \
--output plain)
if [ "$FAILED" -gt 0 ]; then
echo "$FAILED test case(s) failed" >&2
exit 1
fi
echo "all tests passed"
Run this script after your Solution deploy step in any of the CI recipes. The only env-specific config is TEST_SET_KEY / PROJECT_KEY — set them per environment in the pipeline, and the same script promotes cleanly across dev/stage/prod.
See also
- uip tm overview — the full command surface.
- uip tm testset execute, uip tm wait, uip tm report get — the three verbs used above.
- uip tm result download — JUnit XML export for CI dashboards.
- Scripting patterns — polling, JSON filtering, idempotent pipelines.
- Exit codes — the shared contract and wait's domain-specific 2.