14. AI Governance and Controls

This section defines how AI-assisted execution is governed to ensure safety, trust, and operational control without undermining delivery velocity.

14.1 AI Usage Boundaries

AI usage is explicitly bounded.

Boundaries are defined by:

  • Capability contracts that declare whether AI-assisted execution is permitted
  • Policies that constrain when and how AI may be invoked
  • Data classification rules that restrict AI access to sensitive information

AI is:

  • Never implicitly invoked
  • Never required for baseline functionality
  • Always subject to policy enforcement

This ensures AI remains an augmentation, not a dependency.
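
As an illustration, a capability contract might declare its AI boundary explicitly. The following TypeScript sketch is hypothetical; the field names (aiAssistance fields such as `permitted`, `maxDataClassification`, `policies`) are assumptions for illustration, not a prescribed schema.

```typescript
// Hypothetical capability contract with an explicit AI boundary.
// All field names are illustrative, not a prescribed schema.

type DataClassification = "public" | "internal" | "confidential" | "restricted";

interface AiBoundary {
  permitted: boolean;                         // is AI-assisted execution allowed at all?
  invocation: "explicit";                     // AI is never implicitly invoked
  requiredForBaseline: false;                 // baseline functionality must work without AI
  maxDataClassification: DataClassification;  // most sensitive data AI may access
  policies: string[];                         // policy IDs that gate each invocation
}

interface CapabilityContract {
  capability: string;
  version: string;
  ai: AiBoundary;
}

// Example: AI may summarise internal content, gated by two policies.
const contract: CapabilityContract = {
  capability: "content.summarize",
  version: "1.2.0",
  ai: {
    permitted: true,
    invocation: "explicit",
    requiredForBaseline: false,
    maxDataClassification: "internal",
    policies: ["ai-usage-default", "pii-redaction"],
  },
};
```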

14.2 Human-in-the-Loop Controls

Human oversight is applied wherever policy, risk, or regulatory requirements demand it.

Human-in-the-loop controls include:

  • Review and approval steps for AI-assisted outputs
  • Escalation paths for low-confidence results
  • Manual override and correction mechanisms
  • Clear SLAs for reviewer response times, together with throughput tracking

These controls:

  • Are capability-specific
  • Are declarative and configurable
  • Do not require embedding human logic into the DXP

This balances automation with accountability.
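
Because these controls are declarative and capability-specific, they can be expressed as configuration rather than code embedded in the DXP. A minimal sketch follows; the shape and names (`reviewRequired`, `escalation`, `slaMinutes`) are assumptions, not a defined interface.

```typescript
// Hypothetical declarative human-in-the-loop configuration.
// The platform enforces these rules; no human logic lives inside the DXP.

interface HumanInTheLoopConfig {
  capability: string;
  reviewRequired: "always" | "onLowConfidence" | "never";
  confidenceThreshold?: number;   // below this, escalate for review
  escalation: {
    queue: string;                // where low-confidence items are routed
    slaMinutes: number;           // reviewer response-time SLA
  };
  overrides: {
    manualCorrection: boolean;    // reviewers may edit AI output
    manualRejection: boolean;     // reviewers may reject it outright
  };
}

const hitl: HumanInTheLoopConfig = {
  capability: "content.summarize",
  reviewRequired: "onLowConfidence",
  confidenceThreshold: 0.8,
  escalation: { queue: "editorial-review", slaMinutes: 60 },
  overrides: { manualCorrection: true, manualRejection: true },
};
```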

14.3 Confidence Scoring

AI-assisted outputs must include explicit confidence signals.

Confidence scoring:

  • Reflects uncertainty in the output
  • Is expressed in a structured, interpretable form
  • Is included in the capability response

Confidence:

  • Is surfaced to users where relevant
  • Is used by policies to trigger review or fallback
  • Is never implied or hidden

This enables informed trust decisions.
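
To make this concrete, the sketch below shows one way a structured confidence signal could travel in a capability response and drive a review-or-fallback decision. The types and the `disposition` function are hypothetical, assuming a calibrated score in [0, 1].

```typescript
// Hypothetical capability response carrying a structured confidence signal,
// plus a policy check that uses it to trigger review or fallback.

interface Confidence {
  score: number;                              // calibrated value in [0, 1]
  basis: "model" | "heuristic" | "ensemble";  // how the score was produced
}

interface CapabilityResponse<T> {
  result: T;
  aiAssisted: boolean;
  confidence?: Confidence;  // required whenever aiAssisted is true
}

type Disposition = "accept" | "review" | "fallback";

// Policy decision: confidence is used explicitly, never implied or hidden.
function disposition(res: CapabilityResponse<unknown>, threshold = 0.8): Disposition {
  if (!res.aiAssisted) return "accept";
  if (!res.confidence) return "fallback";  // missing signal: fail safe
  return res.confidence.score >= threshold ? "accept" : "review";
}
```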

14.4 Explainability

Explainability is required for AI-assisted execution.

Explainability includes:

  • Clear indication of AI involvement
  • High-level rationale for outputs
  • Traceability to inputs and rules where possible

Explainability:

  • Is proportionate to risk and impact
  • Does not expose internal model details
  • Supports review and accountability

This ensures AI outputs are understandable and defensible.
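
One possible shape for this metadata is sketched below: an AI-involvement flag, a high-level rationale, and traceability references, with no internal model details exposed. Field names are illustrative assumptions.

```typescript
// Hypothetical explanation metadata attached to an AI-assisted output.
// Proportionate to risk: a human-readable rationale and input traceability,
// without exposing internal model details.

interface Explanation {
  aiAssisted: true;        // clear indication of AI involvement
  rationale: string;       // high-level, human-readable reasoning
  inputs: string[];        // IDs of the inputs the output traces to
  rulesApplied?: string[]; // governing policies or rules, where known
}

const explanation: Explanation = {
  aiAssisted: true,
  rationale: "Summary generated from the article body; tone rules applied.",
  inputs: ["content:article:4711"],
  rulesApplied: ["tone-of-voice-v2"],
};
```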

14.5 Model Substitution and Rollback

Models are treated as replaceable components.

Model substitution:

  • Does not change capability contracts
  • Is controlled by policy and configuration
  • Supports A/B testing and phased rollout
  • Requires a documented risk assessment and completed validation checklist before promotion (sketched below)
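
A substitution might be expressed purely as configuration, leaving the capability contract untouched. The binding below is a hypothetical sketch; the names (`ModelBinding`, `rollout.strategy`, the file paths) are assumptions.

```typescript
// Hypothetical model binding: the capability contract stays stable;
// configuration alone selects the model and controls the phased rollout.

interface ModelBinding {
  capability: string;  // contract is unchanged across substitutions
  model: { name: string; version: string };
  rollout: {
    strategy: "shadow" | "canary" | "full";
    trafficPercent: number;        // phased rollout / A-B split
  };
  promotion: {
    riskAssessment: string;        // link to the documented assessment
    validationChecklist: string;   // link to the completed checklist
  };
}

const binding: ModelBinding = {
  capability: "content.summarize",
  model: { name: "summarizer", version: "2.4.1" },
  rollout: { strategy: "canary", trafficPercent: 10 },
  promotion: {
    riskAssessment: "risk/summarizer-2.4.1.md",
    validationChecklist: "checklists/summarizer-2.4.1.md",
  },
};
```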

Rollback:

  • Can be executed rapidly
  • Does not require consumer changes
  • Preserves audit and traceability
  • Includes automated rollback triggers (e.g. drift, latency, error-rate thresholds)

This ensures resilience in the face of model failure or regression.
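
As a final illustration, an automated rollback trigger can reduce to comparing live metrics against declared thresholds. The check below is a minimal sketch; the metric names and threshold values are illustrative assumptions.

```typescript
// Hypothetical automated rollback check: compare live metrics against
// declared thresholds and decide whether to revert to the prior model.

interface RollbackThresholds {
  maxErrorRate: number;    // e.g. 0.02 = 2% of requests
  maxP95LatencyMs: number;
  maxDriftScore: number;   // output-distribution drift, in [0, 1]
}

interface LiveMetrics {
  errorRate: number;
  p95LatencyMs: number;
  driftScore: number;
}

function shouldRollback(m: LiveMetrics, t: RollbackThresholds): boolean {
  return (
    m.errorRate > t.maxErrorRate ||
    m.p95LatencyMs > t.maxP95LatencyMs ||
    m.driftScore > t.maxDriftScore
  );
}
```

Because rolling back only restores the previous model binding, consumers of the capability are untouched, and the switch itself remains auditable.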