Next big step for SPLENT
Internal roadmap analysis — March 2026
Current state
SPLENT has the core solidly covered: modular features, a variability model (UVL), pre-flight validation (UVL + contracts + infrastructure), dev and prod deployment with a single command, multi-stage Docker builds, health checks, CI/CD support, feature refinement (service override, template override, model extension, hook replacement), and full lifecycle management (install, migrate, rollback, uninstall). What’s missing for the next leap is what separates a “framework that works” from an “ecosystem that changes how you work”.
Three possible directions. Not mutually exclusive, but each has a different impact.
Option A — Automated combinatorial testing
The strongest argument for SPL is: “I can reason about all valid combinations of my product”. Right now that claim is theoretical: SPLENT validates that a configuration is satisfiable (`product:validate`), but doesn’t test that it works at runtime.
The idea:
splent product:test-matrix
- Read the UVL and generate all valid configurations (or a representative subset using combinatorial sampling — t-wise, pairwise).
- For each configuration, derive a temporary product (`product:derive --dev` in an isolated environment).
- Run `feature:test --unit --integration --functional`.
- Report: which configurations pass, which fail, and which feature is responsible.
Configuration matrix — sample_splent_app
# Features Unit Integ Func
──────────────────────────────────────────────────────────────────
1 auth, public, redis ✅ ✅ ✅
2 auth, public, redis, profile ✅ ✅ ✅
3 auth, public, redis, mail, confirmemail ✅ ✅ ⚠️
4 auth, public, redis, mail, reset ✅ ✅ ✅
5 auth, public, mail, profile, reset ✅ ❌ —
└─ integration: test_reset_email_flow FAILED (redis required)
5 configurations tested, 1 failure.
Root cause: splent_feature_reset requires redis at runtime (not declared in UVL).
Why it matters
- Finds bugs that no other tool finds — implicit dependencies between features that the UVL doesn’t capture, conflicts that only appear in certain combinations.
- It’s the bridge between the formal model and reality.
Research angle
Publishable as original contribution: “Automated combinatorial testing of SPL products using UVL-driven configuration sampling” — with Flamapy for sampling and SPLENT for derivation and execution.
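Flamapy would do the actual sampling; purely to illustrate the pair-coverage criterion, here is a self-contained greedy sketch in plain Python (the function and its inputs are hypothetical, not Flamapy's API, and real t-wise samplers are considerably smarter):

```python
from itertools import combinations


def pairwise_sample(valid_configs):
    """Greedily pick configurations until every feature pair occurring in
    some valid configuration is covered.

    `valid_configs`: iterable of frozensets of feature names, e.g. as
    enumerated from the UVL model.
    """
    pool = list(valid_configs)
    pairs = lambda cfg: set(combinations(sorted(cfg), 2))
    remaining = set().union(*(pairs(c) for c in pool)) if pool else set()
    chosen = []
    while remaining:
        # pick the configuration covering the most still-uncovered pairs
        best = max(pool, key=lambda c: len(remaining & pairs(c)))
        gained = remaining & pairs(best)
        if not gained:
            break
        chosen.append(best)
        remaining -= gained
    return chosen
```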
Option B — Feature Impact Analysis (CI/CD for SPL)
When you update a feature (new version, contract change), you currently don’t know which products break. You have to test manually.
The idea:
splent feature:impact splent_feature_auth
- Scan all products in the workspace (or a remote registry).
- For each product that uses the feature, run pre-flight checks + tests.
- Report the impact of the change.
Impact analysis — splent_feature_auth (v1.2.7 → v1.2.8)
Product UVL Contracts Tests Verdict
──────────────────────────────────────────────────────────
sample_splent_app ✅ ✅ ✅ safe
enterprise_app ✅ ⚠️ route — review
research_portal ✅ ✅ ❌ broken
└─ test_login_redirect: expected 302, got 404 (route changed)
1 safe, 1 needs review, 1 broken.
Why it matters
- Turns SPLENT into a native CI/CD system for SPL — not a wrapper around GitHub Actions, but a tool that understands variability.
- Essential when you have 5+ products sharing features. Without this, every feature release is a leap of faith.
Relationship to Option A
Option B is the natural step after A. If you can test one configuration, you can test the impact of a change across all configurations.
Option C — Visual configurator + remote derivation
A web frontend where:
- You see the UVL model as an interactive tree (features, mandatory/optional, constraints).
- You select features by clicking. Constraints propagate in real time (selecting `profile` auto-selects `auth`).
- You click “Derive” and SPLENT generates the product on a server, or gives you the `docker-compose.deploy.yml` plus the image.
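The real-time propagation can be modeled as a closure over “requires” edges. A minimal sketch; a full configurator would also handle excludes, group cardinalities, and arbitrary cross-tree constraints via a solver:

```python
def propagate(selected, requires):
    """Close a feature selection under requires-edges.

    `requires` maps a feature to the set of features it implies
    (e.g. profile -> {auth}). Only the common "requires" case is
    modeled; excludes and group constraints are out of scope here.
    """
    result = set(selected)
    stack = list(selected)
    while stack:
        for dep in requires.get(stack.pop(), ()):
            if dep not in result:
                result.add(dep)
                stack.append(dep)  # implied features can imply more
    return result
```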
Why it matters
- Makes SPLENT usable by non-developers — a product manager could configure a variant without touching the terminal.
- Integration with UVLHub, feature model visualization, derivation as a service.
Trade-off
Most visually impressive but least urgent. A well-made CLI is already usable.
Recommendation
Option A (combinatorial testing) should come first:
- All the pieces already exist: UVL + Flamapy for sampling, `product:derive` for derivation, `feature:test` for execution. It’s “just” orchestration.
- It’s the feature nobody else has: academic SPL frameworks stop at model analysis, and DevOps tools don’t understand variability.
- Finds real bugs that the formal model doesn’t capture.
- Publishable as original contribution.
- Developers would use it every time they add a feature or change a contract.
Then Option B (impact analysis) as the natural follow-up. Then Option C (visual configurator) when the audience expands beyond developers.
Implementation sketch for Option A
New commands
splent product:test-matrix [--strategy pairwise|all|random] [--max N] [--level unit|integration|functional]
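The option surface above, expressed with argparse as a sketch (the defaults are assumptions, and SPLENT’s actual command framework may not use argparse):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI surface of the proposed product:test-matrix command."""
    p = argparse.ArgumentParser(prog="splent product:test-matrix")
    p.add_argument("--strategy", choices=["pairwise", "all", "random"],
                   default="pairwise",  # assumed default, per the doc's "(default)"
                   help="configuration sampling strategy")
    p.add_argument("--max", dest="max_configs", type=int, default=None,
                   help="cap on the number of configurations tested")
    p.add_argument("--level", choices=["unit", "integration", "functional"],
                   default="integration",  # assumed default
                   help="test level to run per configuration")
    return p
```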
Key steps
- Configuration sampling: use Flamapy to enumerate valid configurations from the UVL model. Supported strategies:
  - `all` — every valid configuration (feasible for small models)
  - `pairwise` — t-wise coverage (default, scales to large models)
  - `random` — N random valid configurations
- Isolated derivation: for each configuration:
  - Create a temporary `pyproject.toml` with only the selected features.
  - Run `product:derive --dev` in Docker-in-Docker or worktree-based isolation.
  - Run `feature:test` at the specified level.
- Result aggregation: collect pass/fail per configuration, identify the failing feature, and detect patterns (e.g., “every configuration with feature X and without feature Y fails”).
- Root cause analysis: when a test fails, diff the failing configuration against passing ones to isolate the feature responsible.
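The root-cause step can start as plain set arithmetic. A sketch of the heuristic (a hypothetical helper; it suggests suspects, it doesn’t prove causation):

```python
def suspect_features(failing, passing):
    """Diff one failing configuration against the passing ones.

    Returns (extra, missing): features present only in the failing
    configuration, and features present in every passing configuration
    but absent from the failing one.
    """
    failing = set(failing)
    union_passing = set().union(*passing) if passing else set()
    common_passing = set.intersection(*map(set, passing)) if passing else set()
    return failing - union_passing, common_passing - failing
```

On the sample matrix above, this flags `redis` as missing from the failing configuration while present in every passing one, which is exactly the undeclared runtime dependency the report identifies.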
Estimated scope
- Flamapy integration: ~200 lines (sampling API)
- Test orchestration: ~300 lines (derive + test loop with isolation)
- Reporting: ~150 lines (table output + root cause heuristic)
- Tests: ~200 lines
- Total: ~850 lines of new code + 1 new command