Testing the CLI
How to write and run tests for splent_cli commands.
Overview
Testing a CLI is different from testing a library. The challenge is that commands:
- read environment variables (SPLENT_APP, WORKING_DIR)
- touch the filesystem (pyproject.toml, cache, symlinks)
- call external processes (docker, git, splent)
The solution is click.testing.CliRunner — Click’s built-in test runner — combined with pytest’s tmp_path fixture and unittest.mock.patch.
The key insight: because all commands read context through the service layer (context.workspace(), context.require_app()), a single monkeypatch.setenv("WORKING_DIR", ...) redirects an entire command to a temporary directory. No Docker needed.
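A minimal sketch of why this works. Here workspace() is a hypothetical stand-in for context.workspace(), assuming it resolves the root from the WORKING_DIR environment variable (the real service-layer code may differ):

```python
import os
import tempfile
from pathlib import Path

# Hypothetical stand-in for context.workspace(): resolve the workspace
# root from the WORKING_DIR environment variable.
def workspace() -> Path:
    return Path(os.environ.get("WORKING_DIR", "."))

with tempfile.TemporaryDirectory() as tmp:
    os.environ["WORKING_DIR"] = tmp  # what monkeypatch.setenv(...) does in a test
    assert workspace() == Path(tmp)  # code reading the context now sees only tmp
```

In a real test, monkeypatch.setenv does the same thing and automatically restores the environment when the test finishes.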
Running the tests
From inside the Docker container:
cd /workspace/splent_cli
# Run all tests
pytest tests/ -v
# Run a specific file
pytest tests/unit/commands/product/test_product_status.py -v
# Run a specific test class or case
pytest tests/unit/commands/env/test_env_list.py::TestFilter -v
pytest tests/unit/commands/env/test_env_list.py::TestFilter::test_filter_by_prefix -v
# With coverage
pytest tests/ -v --cov=splent_cli --cov-report=term-missing
Test structure
splent_cli/
  tests/
    conftest.py                  # shared fixtures (runner, workspace, product_workspace)
    unit/
      services/
        test_context.py          # context.workspace(), require_app(), resolve_env()
        test_compose.py          # project_name(), resolve_file(), normalize_feature_ref()
      commands/
        cache/
          test_cache_status.py
        env/
          test_env_list.py
        product/
          test_product_status.py
          test_product_up.py
        feature/
          test_feature_add.py
The three testing patterns
Every CLI command falls into one of three categories, each with its own testing approach.
Pattern 1 — Pure logic (no mocking)
Commands that are just functions with no side effects (service helpers, string transformations).
from splent_cli.services.compose import project_name, normalize_feature_ref

def test_project_name_replaces_special_chars():
    assert project_name("splent_io/auth@v1.0", "prod") == "splent_io_auth_v1_0_prod"

def test_normalize_bare_name():
    assert normalize_feature_ref("splent_feature_auth") == "splent_io/splent_feature_auth"
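For orientation, plausible implementations of the two helpers being tested. These are illustrative sketches only; the real versions live in splent_cli.services.compose and may differ:

```python
import re

def project_name(feature_ref: str, env: str) -> str:
    # Replace every non-alphanumeric character with "_" and append the env name.
    return re.sub(r"[^A-Za-z0-9]", "_", feature_ref) + f"_{env}"

def normalize_feature_ref(ref: str) -> str:
    # Bare feature names get the default "splent_io/" org prefix;
    # fully qualified refs pass through unchanged.
    return ref if "/" in ref else f"splent_io/{ref}"

assert project_name("splent_io/auth@v1.0", "prod") == "splent_io_auth_v1_0_prod"
assert normalize_feature_ref("splent_feature_auth") == "splent_io/splent_feature_auth"
```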
Pattern 2 — Filesystem commands
Commands that read or write files (pyproject.toml, .env, cache directories).
Use tmp_path + monkeypatch.setenv("WORKING_DIR", ...). No Docker needed.
from click.testing import CliRunner
from splent_cli.commands.cache.cache_status import cache_status
def test_cache_shows_versioned_feature(tmp_path, monkeypatch):
    monkeypatch.setenv("WORKING_DIR", str(tmp_path))
    # Build a fake cache entry
    cache = tmp_path / ".splent_cache" / "features" / "splent_io" / "splent_feature_auth@v1.0.0"
    cache.mkdir(parents=True)
    result = CliRunner().invoke(cache_status, [])
    assert result.exit_code == 0
    assert "splent_feature_auth" in result.output
    assert "v1.0.0" in result.output
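The directory layout the test builds mirrors what the command scans. A sketch of such scanning logic, assuming cache entries live at <workspace>/.splent_cache/features/<org>/<name>[@<version>] as in the test above (list_cache_entries is a hypothetical helper, not part of splent_cli):

```python
from pathlib import Path

def list_cache_entries(root):
    """Return (org, feature, version) tuples; version is None for editable entries."""
    features = root / ".splent_cache" / "features"
    entries = []
    for org_dir in sorted(features.iterdir()):
        for entry in sorted(org_dir.iterdir()):
            # "name@version" directories are versioned; bare names are editable.
            name, _, version = entry.name.partition("@")
            entries.append((org_dir.name, name, version or None))
    return entries
```

Given the fake cache entry built in the test, this would return [("splent_io", "splent_feature_auth", "v1.0.0")].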
Pattern 3 — Subprocess commands
Commands that call docker, git, or other processes.
Mock subprocess.run to avoid needing Docker. Return a MagicMock with the right shape.
import json
from unittest.mock import patch, MagicMock
from click.testing import CliRunner
from splent_cli.commands.product.product_status import product_status
def test_shows_running_containers(product_workspace):
    containers = [{"Service": "web", "State": "running", "Publishers": []}]
    mock_output = json.dumps(containers[0])
    with patch("subprocess.run", return_value=MagicMock(returncode=0, stdout=mock_output, stderr="")):
        result = CliRunner().invoke(product_status, ["--dev"])
    assert result.exit_code == 0
    assert "web" in result.output
    assert "running" in result.output
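Beyond checking output, you can capture the patched mock and assert on how the subprocess was invoked. A self-contained sketch (docker_ps here is a stand-in for code under test, not a splent_cli function):

```python
import subprocess
from unittest.mock import MagicMock, patch

def docker_ps() -> str:
    # Stand-in for command code that shells out to docker.
    proc = subprocess.run(["docker", "compose", "ps"], capture_output=True, text=True)
    return proc.stdout

with patch("subprocess.run", return_value=MagicMock(returncode=0, stdout="web running\n", stderr="")) as run_mock:
    out = docker_ps()

assert out == "web running\n"
run_mock.assert_called_once()
# The exact argv the code built is visible on the mock.
assert run_mock.call_args.args[0][:2] == ["docker", "compose"]
```

This catches bugs where the command prints the right thing but builds the wrong docker invocation.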
Shared fixtures
Defined in tests/conftest.py. Request them by name in the test signature — pytest injects them automatically.
runner
A CliRunner with stderr separated from stdout.
def test_something(runner):
    result = runner.invoke(my_command, ["--flag"])
    assert result.exit_code == 0
workspace
A tmp_path directory wired as the workspace root. Sets WORKING_DIR, clears SPLENT_APP and SPLENT_ENV.
def test_something(workspace):
    # workspace == Path("/tmp/pytest-.../test_something0")
    # WORKING_DIR is set to str(workspace)
    (workspace / ".env").write_text("FOO=bar\n")
    ...
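For reference, a plausible conftest.py definition of this fixture, assuming it is built from tmp_path and monkeypatch as described above (check tests/conftest.py for the real code):

```python
import pytest

@pytest.fixture
def workspace(tmp_path, monkeypatch):
    # Redirect the service layer to the temp dir and clear any product context.
    monkeypatch.setenv("WORKING_DIR", str(tmp_path))
    monkeypatch.delenv("SPLENT_APP", raising=False)
    monkeypatch.delenv("SPLENT_ENV", raising=False)
    return tmp_path
```

Because monkeypatch undoes every setenv/delenv at teardown, tests cannot leak environment state into each other.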
product_workspace
A complete workspace with a test_app product. Sets WORKING_DIR, SPLENT_APP=test_app, SPLENT_ENV=dev. Creates pyproject.toml and docker-compose files.
def test_something(runner, product_workspace):
    # Ready to test any product:* command
    with patch("subprocess.run", ...):
        result = runner.invoke(product_up, ["--dev"])
    assert result.exit_code == 0
Helper functions
from tests.conftest import make_env_file, make_cache_entry
# Write a .env file into a workspace
make_env_file(workspace, "SPLENT_APP=my_app\nGITHUB_TOKEN=secret\n")
# Create a cache directory entry
make_cache_entry(workspace, "splent_io", "splent_feature_auth", "v1.0.0") # versioned
make_cache_entry(workspace, "splent_io", "splent_feature_notes") # editable
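Plausible implementations of these helpers, consistent with the cache layout used above (sketches; the real code is in tests/conftest.py):

```python
from pathlib import Path

def make_env_file(workspace, content):
    # Write a .env file at the workspace root and return its path.
    env_file = workspace / ".env"
    env_file.write_text(content)
    return env_file

def make_cache_entry(workspace, org, name, version=None):
    # Versioned entries are named "name@version"; editable entries use the bare name.
    dirname = f"{name}@{version}" if version else name
    entry = workspace / ".splent_cache" / "features" / org / dirname
    entry.mkdir(parents=True, exist_ok=True)
    return entry
```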
What to assert
result from runner.invoke() has three useful attributes:
| Attribute | What it contains |
|---|---|
| result.exit_code | 0 = success, 1 = error; a command that cancels via raise SystemExit(0) also yields 0 |
| result.output | Everything printed to stdout (and stderr, if the runner was created with mix_stderr=True) |
| result.exception | The exception if the command crashed (SystemExit is excluded) |
Always assert exit_code first. If a test fails unexpectedly, print result.output to debug:
result = runner.invoke(my_command, ["--flag"])
print(result.output) # see what was printed
print(result.exception) # see if it crashed
assert result.exit_code == 0
Testing error paths
Error paths are as important as the happy path. For every command, test:
# Missing required env var
def test_requires_splent_app(runner, workspace):
    result = runner.invoke(product_up, ["--dev"])
    assert result.exit_code == 1
    assert "SPLENT_APP" in result.output

# Mutually exclusive flags
def test_rejects_both_dev_and_prod(runner, product_workspace):
    result = runner.invoke(product_status, ["--dev", "--prod"])
    assert result.exit_code == 1
    assert "Cannot specify both" in result.output

# Missing file
def test_exits_when_no_env_file(runner, workspace):
    result = runner.invoke(env_list, [])
    assert result.exit_code == 1
    assert ".env" in result.output
Adding tests for a new command
- Create tests/unit/commands/<group>/test_<command>.py
- Import the command function directly (not through the CLI entry point)
- Use the appropriate fixture (workspace, product_workspace, or tmp_path)
- Mock subprocess if the command calls external tools
- Test at minimum: flag validation, happy path, one error path
from click.testing import CliRunner
from splent_cli.commands.mygroup.my_command import my_command
class TestFlagValidation:
    def test_requires_splent_app(self, runner, workspace):
        result = runner.invoke(my_command, [])
        assert result.exit_code == 1
        assert "SPLENT_APP" in result.output

class TestHappyPath:
    def test_success(self, runner, product_workspace):
        result = runner.invoke(my_command, ["--some-flag"])
        assert result.exit_code == 0
        assert "expected output" in result.output
Important: avoid string substring traps
When asserting that a variable name does NOT appear in output, make sure your test variable names are not substrings of each other. For example, "SET_VAR" is a substring of "UNSET_VAR", so assert "SET_VAR" not in output would fail even when SET_VAR itself is absent.
Use clearly distinct names like LOADED_KEY and MISSING_KEY.
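A quick demonstration of the trap, plus a regex-based check that avoids it by anchoring the key to the start of a line:

```python
import re

output = "UNSET_VAR=1\n"  # the variable SET_VAR is absent...

# ...but the naive substring check still matches, because "SET_VAR"
# is a substring of "UNSET_VAR".
assert "SET_VAR" in output

# Anchoring the key name avoids the false positive.
assert re.search(r"^SET_VAR=", output, re.MULTILINE) is None
assert re.search(r"^UNSET_VAR=", output, re.MULTILINE) is not None
```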