Next big step for SPLENT

Internal roadmap analysis — March 2026


Current state

SPLENT has the core well covered: modular features, a variability model (UVL), pre-flight validation (UVL + contracts + infrastructure), dev and prod deployment with a single command, multi-stage Docker builds, health checks, CI/CD support, feature refinement (service override, template override, model extension, hook replacement), and full lifecycle management (install, migrate, rollback, uninstall). What is still missing is what separates a “framework that works” from an “ecosystem that changes how you work”.

Three possible directions. Not mutually exclusive, but each has a different impact.


Option A — Automated combinatorial testing

The strongest argument for SPL is: “I can reason about all valid combinations of my product”. But right now that claim is theoretical. SPLENT validates that a configuration is satisfiable (product:validate), but it does not test that the derived product works at runtime.

The idea:

splent product:test-matrix
  1. Read the UVL and generate all valid configurations (or a representative subset using combinatorial sampling — t-wise, pairwise).
  2. For each configuration, derive a temporary product (product:derive --dev in an isolated environment).
  3. Run feature:test --unit --integration --functional.
  4. Report: which configurations pass, which fail, and which feature is responsible.
  Configuration matrix — sample_splent_app

  #   Features                                    Unit  Integ  Func
  ──────────────────────────────────────────────────────────────────
  1   auth, public, redis                          ✅    ✅     ✅
  2   auth, public, redis, profile                 ✅    ✅     ✅
  3   auth, public, redis, mail, confirmemail      ✅    ✅     ⚠️
  4   auth, public, redis, mail, reset             ✅    ✅     ✅
  5   auth, public, mail, profile, reset           ✅    ❌     —
      └─ integration: test_reset_email_flow FAILED (redis required)

  5 configurations tested, 1 failure.
  Root cause: splent_feature_reset requires redis at runtime (not declared in UVL).
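The root-cause step above can be sketched as a cross-configuration heuristic: look for a feature pair (X, Y) where every tested configuration containing X but lacking Y fails, while some configuration with both passes. A minimal sketch in Python, assuming pass/fail results are already collected; the data below is hypothetical, loosely based on the matrix above, with one extra passing configuration added so the heuristic narrows to a single pair:

```python
def implicit_dependencies(results):
    """Flag (x, y) pairs where every configuration with x but without y
    failed, while at least one configuration with both x and y passed.
    Such a pair suggests x needs y at runtime even though the UVL
    model does not declare it."""
    features = set().union(*results)
    deps = []
    for x in sorted(features):
        for y in sorted(features):
            if x == y:
                continue
            x_without_y = [ok for cfg, ok in results.items()
                           if x in cfg and y not in cfg]
            x_with_y = [ok for cfg, ok in results.items()
                        if x in cfg and y in cfg]
            if x_without_y and not any(x_without_y) and any(x_with_y):
                deps.append((x, y))
    return deps

# Hypothetical pass/fail results (configuration -> did all tests pass?).
results = {
    frozenset({"auth", "public", "redis"}): True,
    frozenset({"auth", "public", "redis", "profile"}): True,
    frozenset({"auth", "public", "redis", "mail", "reset"}): True,
    frozenset({"auth", "public", "mail", "profile"}): True,
    frozenset({"auth", "public", "mail", "profile", "reset"}): False,
}
print(implicit_dependencies(results))  # [('reset', 'redis')]
```

With few configurations the heuristic can over-approximate (features that merely co-occur with the culprit also get flagged); a larger sample prunes the false positives.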

Why it matters

  • Finds bugs that no other tool finds — implicit dependencies between features that the UVL doesn’t capture, conflicts that only appear in certain combinations.
  • It’s the bridge between the formal model and reality.

Research angle

Publishable as original contribution: “Automated combinatorial testing of SPL products using UVL-driven configuration sampling” — with Flamapy for sampling and SPLENT for derivation and execution.


Option B — Feature Impact Analysis (CI/CD for SPL)

When you update a feature (new version, contract change), you currently don’t know which products break. You have to test manually.

The idea:

splent feature:impact splent_feature_auth
  1. Scan all products in the workspace (or a remote registry).
  2. For each product that uses the feature, run pre-flight checks + tests.
  3. Report the impact of the change.
  Impact analysis — splent_feature_auth (v1.2.7 → v1.2.8)

  Product                  UVL    Contracts  Tests   Verdict
  ──────────────────────────────────────────────────────────
  sample_splent_app        ✅     ✅          ✅      safe
  enterprise_app           ✅     ⚠️ route    —       review
  research_portal          ✅     ✅          ❌      broken
    └─ test_login_redirect: expected 302, got 404 (route changed)

  1 safe, 1 needs review, 1 broken.
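The verdict logic can be sketched independently of the real checks. Everything below is hypothetical: the workspace layout, the stubbed check results, and the helper names; in SPLENT the actual inputs would come from the pre-flight checks and feature:test runs.

```python
def verdict(uvl_ok, contracts_ok, tests_ok):
    """Map the three check outcomes to the report verdicts.
    tests_ok is None when tests were skipped (e.g. after a
    contract warning)."""
    if not uvl_ok or tests_ok is False:
        return "broken"
    if not contracts_ok or tests_ok is None:
        return "review"
    return "safe"

def impact(feature, workspace, run_checks):
    """Run checks on every product that uses `feature`."""
    return {product: verdict(*run_checks(product))
            for product, features in workspace.items()
            if feature in features}

# Hypothetical workspace and stubbed results mirroring the table above.
workspace = {
    "sample_splent_app": {"auth", "public", "redis"},
    "enterprise_app": {"auth", "mail"},
    "research_portal": {"auth", "profile"},
    "static_site": {"public"},  # does not use auth: never checked
}
stubbed = {
    "sample_splent_app": (True, True, True),
    "enterprise_app": (True, False, None),   # contract warning, tests skipped
    "research_portal": (True, True, False),  # test failure
}
print(impact("auth", workspace, stubbed.get))
# {'sample_splent_app': 'safe', 'enterprise_app': 'review', 'research_portal': 'broken'}
```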

Why it matters

  • Turns SPLENT into a native CI/CD system for SPL — not a wrapper around GitHub Actions, but a tool that understands variability.
  • Essential when you have 5+ products sharing features. Without this, every feature release is a leap of faith.

Relationship to Option A

Option B is the natural step after A. If you can test one configuration, you can test the impact of a change across all configurations.


Option C — Visual configurator + remote derivation

A web frontend where:

  1. You see the UVL model as an interactive tree (features, mandatory/optional, constraints).
  2. You select features by clicking. Constraints propagate in real time (selecting profile auto-selects auth).
  3. You click “Derive” and SPLENT generates the product on a server or gives you the docker-compose.deploy.yml + image.
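The real-time propagation in step 2 can be sketched for “requires” constraints as a transitive closure over the selection. The `requires` map below is illustrative (the auth-to-public edge is an assumption, not taken from the model); a full configurator would also propagate excludes, group cardinalities, and the rest of UVL, e.g. via a SAT-based engine.

```python
def propagate(selected, requires):
    """Return the closure of `selected` under requires constraints."""
    closure = set(selected)
    queue = list(selected)
    while queue:
        feature = queue.pop()
        for dep in requires.get(feature, ()):
            if dep not in closure:
                closure.add(dep)
                queue.append(dep)
    return closure

# Illustrative requires edges, not the actual SPLENT model.
requires = {
    "profile": {"auth"},        # selecting profile auto-selects auth
    "confirmemail": {"mail"},
    "reset": {"mail"},
    "auth": {"public"},         # assumed for the example
}
print(sorted(propagate({"profile"}, requires)))
# ['auth', 'profile', 'public']
```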

Why it matters

  • Makes SPLENT usable by non-developers — a product manager could configure a variant without touching the terminal.
  • Integration with UVLHub, feature model visualization, derivation as a service.

Trade-off

Most visually impressive but least urgent. A well-made CLI is already usable.


Recommendation

Option A (combinatorial testing) should come first:

  1. All the pieces already exist: UVL + Flamapy for sampling, product:derive for derivation, feature:test for execution. It’s “just” orchestration.
  2. It’s the feature nobody else has: academic SPL frameworks stop at model analysis, and DevOps tools don’t understand variability.
  3. Finds real bugs that the formal model doesn’t capture.
  4. Publishable as original contribution.
  5. Developers would use it every time they add a feature or change a contract.

Then Option B (impact analysis) as the natural follow-up. Then Option C (visual configurator) when the audience expands beyond developers.


Implementation sketch for Option A

New commands

splent product:test-matrix [--strategy pairwise|all|random] [--max N] [--level unit|integration|functional]
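The flag surface above can be sketched with argparse. SPLENT's actual CLI framework may differ; only pairwise as the default strategy is stated in this document, the other defaults are assumptions for the sketch.

```python
import argparse

# Sketch of the proposed product:test-matrix options (defaults for
# --level are assumed, not specified in the roadmap).
parser = argparse.ArgumentParser(prog="splent product:test-matrix")
parser.add_argument("--strategy", choices=["pairwise", "all", "random"],
                    default="pairwise",
                    help="how to sample valid configurations from the UVL")
parser.add_argument("--max", type=int, metavar="N",
                    help="upper bound on the number of configurations tested")
parser.add_argument("--level", choices=["unit", "integration", "functional"],
                    default="functional",
                    help="deepest test level to run per configuration")

args = parser.parse_args(["--strategy", "random", "--max", "20"])
print(args.strategy, args.max, args.level)  # random 20 functional
```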

Key steps

  1. Configuration sampling: Use Flamapy to enumerate valid configurations from UVL. Support strategies:
    • all — every valid configuration (feasible for small models)
    • pairwise — 2-wise coverage (default; scales to large models, generalizes to t-wise)
    • random — N randomly chosen valid configurations
  2. Isolated derivation: For each configuration:
    • Create a temporary pyproject.toml with only the selected features
    • Run product:derive --dev in a Docker-in-Docker or worktree-based isolation
    • Run feature:test at the specified level
  3. Result aggregation: Collect pass/fail per configuration, identify the failing feature, and detect patterns (e.g., “every configuration with feature X and without feature Y fails”).
  4. Root cause analysis: When a test fails, diff the failing configuration against passing ones to isolate the feature responsible.
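Step 1 can be sketched as greedy 2-wise (pairwise) sampling in plain Python. In SPLENT the valid configurations would come from Flamapy's analysis of the UVL model; here they are generated for four hypothetical optional features with no cross-tree constraints, so every subset is valid.

```python
from itertools import combinations

def pairs_of(config, features):
    """All (feature, selected?) value pairs this configuration exhibits."""
    values = {f: f in config for f in features}
    return {((a, values[a]), (b, values[b]))
            for a, b in combinations(sorted(features), 2)}

def pairwise_sample(valid_configs, features):
    """Greedily pick configurations until every reachable pair of
    feature values (selected / deselected) is covered."""
    reachable = set().union(*(pairs_of(c, features) for c in valid_configs))
    chosen, covered = [], set()
    while covered != reachable:
        best = max(valid_configs,
                   key=lambda c: len(pairs_of(c, features) - covered))
        chosen.append(best)
        covered |= pairs_of(best, features)
    return chosen

features = ["redis", "mail", "profile", "reset"]
# All subsets are valid here because the example has no constraints.
valid_configs = [frozenset(c)
                 for r in range(len(features) + 1)
                 for c in combinations(features, r)]
sample = pairwise_sample(valid_configs, features)
print(f"{len(sample)} of {len(valid_configs)} configurations cover all pairs")
```

With real constraints the valid set shrinks and the loop only ever picks satisfiable configurations; t-wise generalizes the same idea by covering t-tuples of feature values instead of pairs.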

Estimated scope

  • Flamapy integration: ~200 lines (sampling API)
  • Test orchestration: ~300 lines (derive + test loop with isolation)
  • Reporting: ~150 lines (table output + root cause heuristic)
  • Tests: ~200 lines
  • Total: ~850 lines of new code + 1 new command


splent. Distributed under the LGPL v3 license. Contact: drorganvidez@us.es