21. Use Cases and Scenarios
This section presents practical examples that demonstrate how the architecture operates in real-world situations. Each scenario highlights the involved capabilities, execution paths, governance controls, and measurable outcomes.
21.1 Document Generation: User Flow Example
Scenario: A caseworker generates a templated decision letter for a benefits claim.
Flow:
- The user selects a case in the DXP work queue and chooses "Generate Letter".
- The DXP issues a Generate Document capability invocation with structured inputs (case data, template id, locale, compliance flags) and an execution context (latency class, cost sensitivity, user identity).
- The Capability Gateway validates the contract, applies policy (template permissions, data classification), and selects an executor, as sketched after this list.
- If a deterministic generator is available and the template covers all required fields, the deterministic executor is chosen.
- If there are gaps or freeform content is required, an AI-assisted executor is permitted under policy and runs with strict input filtering.
- The executor returns the generated document with rendering metadata, plus confidence scores and provenance for any AI-generated sections.
- The DXP renders a preview; the user may accept, edit (human-in-the-loop), or request regeneration.
- The finalised document and an audit record (inputs, template version, execution mode, executor id, timestamp) are stored in the document store and linked to the case.
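A minimal sketch of the deterministic-first routing described above. The type and function names (CapabilityInvocation, select_executor) and the gap check are illustrative assumptions, not a published gateway API:

```python
from dataclasses import dataclass

@dataclass
class CapabilityInvocation:
    """Illustrative Generate Document invocation; field names are assumptions."""
    capability: str              # e.g. "generate-document"
    case_data: dict              # structured case inputs to render
    template_id: str
    locale: str
    compliance_flags: list[str]
    latency_class: str           # e.g. "interactive" vs "batch"
    cost_sensitive: bool
    user_id: str
    needs_freeform: bool = False # true when the letter needs free text

def select_executor(inv: CapabilityInvocation, template_fields: set[str]) -> str:
    """Deterministic-first routing, mirroring the flow above."""
    gaps = set(inv.case_data) - template_fields  # fields the template cannot render
    if not gaps and not inv.needs_freeform:
        return "deterministic-executor"
    # The AI path is only reachable if gateway policy permits it for this
    # template and data classification (checked before this point).
    return "ai-assisted-executor"
```

The deciding inputs are exactly those the flow lists: template coverage and the need for freeform content; policy evaluation happens before the AI path can be selected.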
Outcomes & Controls:
- A deterministic-first approach reduces cost and preserves predictable latency for common templates.
- Confidence thresholds and review gates control when human review is required; a minimal gate is sketched below.
- Full provenance enables audit and post-hoc review of AI contributions.
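The review gate in the second bullet can be expressed as a simple threshold function. The threshold values here are assumed; real policy would set them per template and data classification:

```python
def review_decision(confidence: float,
                    auto_accept: float = 0.95,
                    review_floor: float = 0.40) -> str:
    """Map an executor's confidence score to a review outcome."""
    if confidence >= auto_accept:
        return "auto-accept"    # no human review required
    if confidence < review_floor:
        return "regenerate"     # too weak to show: regenerate or fall back
    return "human-review"       # in between: route to the reviewer queue
```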
21.2 Progressive AI Adoption Pilot
Scenario: Run a pilot to introduce AI-assisted entity resolution for incoming scanned forms.
Pilot plan:
- MVP: an AI-assisted Extract Metadata capability for a limited document set (three document types) with human-in-the-loop verification.
- Success criteria: accuracy above the baseline deterministic heuristics, acceptable time-to-review, and cost kept under quota.
- Scope: pilot traffic is routed through MCP, with policy limiting AI to non-sensitive fields and all invocations logged; a sketch of such a field filter follows.
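One plausible way to enforce the "non-sensitive fields only" restriction is an allowlist filter applied before the AI executor sees the data. The classification labels and field names below are assumptions for illustration:

```python
# Hypothetical field classifications for the pilot; real labels would come
# from the data-classification policy referenced above.
FIELD_CLASSIFICATION = {
    "form_type": "non-sensitive",
    "received_date": "non-sensitive",
    "office_code": "non-sensitive",
    "applicant_name": "sensitive",
    "national_id": "sensitive",
}

def fields_for_ai(candidate_fields: dict) -> dict:
    """Keep only fields explicitly classified as non-sensitive before the
    AI executor is invoked; unknown fields are excluded by default."""
    return {k: v for k, v in candidate_fields.items()
            if FIELD_CLASSIFICATION.get(k) == "non-sensitive"}
```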
Pilot steps:
- Implement the capability contract and an initial AI executor with a strict output schema and confidence signals (see the schema sketch after this list).
- Route pilot traffic from a single team within the DXP, exposing a human review UI to accept or reject extractions.
- Monitor accuracy, confidence distribution, cost per invocation, and reviewer throughput.
- Iterate on prompts, add deterministic post-processing rules for common patterns, and progressively expand the set of document types.
- Once the success criteria are met, promote the capability's maturity from AI-Assisted to Optimised by adding a deterministic fallback and caching for repeated patterns.
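A sketch of the strict output schema and confidence signals from the first step, here as a validating dataclass; a real contract might instead be expressed as JSON Schema or a Pydantic model:

```python
from dataclasses import dataclass

@dataclass
class ExtractedField:
    """One extracted metadata field plus the confidence signal that the
    review UI and promotion metrics depend on. Names are illustrative."""
    name: str
    value: str
    confidence: float  # 0.0-1.0, emitted by the AI executor
    source_page: int   # provenance: where in the scan the value was found

    def __post_init__(self) -> None:
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence out of range: {self.confidence}")

# Hypothetical usage with invented values:
field = ExtractedField("form_type", "PB-1", confidence=0.92, source_page=1)
```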
Governance:
- Quotas and stricter rate limits for pilot executors (a minimal quota check is sketched below)
- Mandatory provenance and review logging
- Model substitution plan with rollback capability
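A minimal sliding-window quota for pilot executors, with assumed limits; production enforcement would live in the gateway rather than in executor code:

```python
import time
from collections import deque

class PilotQuota:
    """Illustrative sliding-window rate limit for a pilot executor."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls: deque[float] = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()      # drop calls outside the window
        if len(self.calls) >= self.max_calls:
            return False              # quota exhausted: reject invocation
        self.calls.append(now)
        return True

# Assumed pilot limit: at most 100 AI invocations per minute.
quota = PilotQuota(max_calls=100, window_s=60.0)
```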
21.3 Performance and Cost Impact Scenario
Scenario: Estimating cost and latency trade-offs when adding AI-assisted summarisation to a high-volume incoming mail pipeline.
Analysis:
- Baseline: deterministic parsing plus rule-based summarisation (cost X, latency Y).
- AI-assisted: model summarisation reduces reviewer time but increases per-invocation cost (cost X + delta).
Mitigations:
- Apply sampling: initially route only a percentage of documents to AI summaries, driven by confidence triggers.
- Use a hybrid path: run the deterministic extractor first, and invoke the AI summariser only for items that fail heuristics or exceed complexity thresholds.
- Cache summaries for repeated or similar documents to amortise cost. (These three mitigations are combined in the sketch after this list.)
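The three mitigations compose naturally into one routing function. The thresholds, the cache keying, and the two summariser stubs below are all assumptions for illustration:

```python
import hashlib

summary_cache: dict[str, str] = {}  # content hash -> cached summary

def rule_based_summary(text: str) -> str:
    # Placeholder for the deterministic extractor + rule-based summariser.
    return text[:200]

def ai_summarise(text: str) -> str:
    # Placeholder for the model call; this is where the per-invocation
    # cost delta accrues.
    return "summary: " + text[:80]

def summarise(doc_text: str, heuristic_confidence: float,
              complexity: float) -> str:
    """Hybrid path with caching: deterministic first, AI only when the
    heuristics are unsure or the document is complex."""
    key = hashlib.sha256(doc_text.encode("utf-8")).hexdigest()
    if key in summary_cache:
        return summary_cache[key]  # cache hit: no new cost incurred
    if heuristic_confidence >= 0.8 and complexity <= 0.5:  # assumed thresholds
        summary = rule_based_summary(doc_text)             # cheap path
    else:
        summary = ai_summarise(doc_text)                   # costlier AI path
    summary_cache[key] = summary
    return summary
```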
Metrics to track:
- Cost per document (deterministic vs AI path)
- Reviewer time saved per document
- Latency distribution and tail latencies for synchronous user flows
- Cache hit rate and cost amortisation
Decision rule example:
- Promote AI summarisation for a document class when the reviewer time saved per document, multiplied by the reviewer's cost rate, exceeds the incremental AI cost per document, discounted by a confidence-weighted error rate (worked sketch below).
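A worked form of this rule, with the confidence-weighted error rate modelled as an expected rework cost; all figures are assumed example values in arbitrary cost units:

```python
def promote_ai(minutes_saved: float, reviewer_cost_per_min: float,
               ai_cost: float, error_rate: float, rework_cost: float) -> bool:
    """Per-document decision rule: promote when confidence-adjusted
    reviewer savings exceed the incremental AI cost."""
    benefit = minutes_saved * reviewer_cost_per_min
    penalty = error_rate * rework_cost  # confidence-weighted error cost
    return benefit - penalty > ai_cost

# Assumed example: 4 minutes saved at 0.50/min, AI cost 0.08 per document,
# 2% residual error rate costing 5.00 of rework per error.
# 4 * 0.50 - 0.02 * 5.00 = 1.90 > 0.08, so this class would be promoted.
assert promote_ai(4.0, 0.50, 0.08, 0.02, 5.0)
```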