Multi-component compositions for complete interaction flows. Load alongside the component files for each stage you implement.
How RAD components combine to cover the full arc of an autonomous agent run — from pre-execution disclosure through active execution, approval gates, and post-run audit. No single component covers this arc. The pattern does.
Disclose what the agent will do, what data it will access, and under whose authority it is operating. If the agent requires access to systems or data not previously authorized, a consent gate is required before it can proceed.
Surface continuous state: what the agent is doing right now, how far along it is, what action comes next. The monitoring layer watches for anomalies — behavioral drift, scope escalation, unresolvable decision forks — and surfaces attention triggers independent of what the agent itself reports.
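The monitoring layer's independent trigger derivation can be sketched as a pure function over observed telemetry. A sketch only: the field and trigger names below are illustrative assumptions, not RAD identifiers, and drift detection is omitted for brevity.

```typescript
type AttentionTrigger = "behavioral_drift" | "scope_escalation" | "decision_fork";

// Hypothetical telemetry shape; RAD does not specify these field names.
interface Telemetry {
  scopesUsed: string[];
  scopesApproved: string[];
  unresolvedFork: boolean;
}

// The monitor derives triggers from observed telemetry, independent of
// whatever the agent itself reports about its own state.
function attentionTriggers(t: Telemetry): AttentionTrigger[] {
  const triggers: AttentionTrigger[] = [];
  if (t.scopesUsed.some((s) => !t.scopesApproved.includes(s))) {
    triggers.push("scope_escalation");
  }
  if (t.unresolvedFork) {
    triggers.push("decision_fork");
  }
  return triggers;
}
```

Because the function reads telemetry rather than agent self-reports, a scope escalation surfaces even when the agent reports itself as on-track.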
Any action that is irreversible, high-consequence, or outside the agent's original authorized scope requires an explicit human approval gate before execution. Risk level determines whether the gate is mandatory or advisory.
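A minimal sketch of the risk-to-gate policy, assuming a risk scale and a scope flag; the level names and the mapping itself are illustrative, not RAD-specified.

```typescript
type RiskLevel = "low" | "medium" | "high" | "irreversible";
type GateMode = "advisory" | "mandatory";

// Illustrative policy: irreversible, high-consequence, or out-of-scope
// actions always get a mandatory gate; lower-risk actions get advisory.
function approvalGate(risk: RiskLevel, withinScope: boolean): GateMode {
  if (!withinScope || risk === "irreversible" || risk === "high") {
    return "mandatory";
  }
  return "advisory";
}
```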
Every agent run produces an immutable, exportable audit trail — timestamped, typed, and tied to human approval events. The impact assessment surfaces the full footprint: what data was touched, what systems were affected, what regulatory exposure was created.
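An append-only trail can be sketched as follows; the entry fields are assumptions chosen to match the requirements above (timestamped, typed, tied to human approval events).

```typescript
// Hypothetical entry shape: timestamped, typed, and linkable to the
// human approval event that authorized the action.
interface AuditEntry {
  readonly timestamp: string; // ISO 8601
  readonly kind: "data_access" | "system_change" | "tool_call" | "approval";
  readonly actor: string;     // agent or human identifier
  readonly approvalId?: string;
  readonly detail: string;
}

// Append-only trail: each write returns a new frozen array, so earlier
// state can never be mutated in place.
function append(
  trail: readonly AuditEntry[],
  entry: AuditEntry
): readonly AuditEntry[] {
  return Object.freeze([...trail, entry]);
}
```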
How RAD error and recovery components combine to handle the distinct failure modes of AI systems. AI errors are categorically different from system errors. They require explanation, recovery paths, and human override — not a generic error screen.
| Failure Type | Component | Role Attribute | When Triggered |
|---|---|---|---|
| Model hallucination | AI04 | role="alert" | Unverifiable citation or fabricated reference detected in output |
| Model refusal | AI04 | role="alert" | Model declines to fulfill request due to policy or safety constraint |
| Rate limit | AI04 | role="alert" | Workspace or user hits request ceiling |
| Timeout | AI04 | role="alert" | Response window exceeded — model or network |
| Low confidence output | TD04 + TD05 | role="status" | Model confidence falls below operator-configured threshold |
| Agent error / pause | HC06 | role="alert" | Agent execution stops unexpectedly — schema mismatch, policy violation, or unresolvable step |
| Scope escalation | HC05 + HC04 | role="alert" | Agent attempts access outside approved scope — monitoring layer detects and pauses |
| Bias / statistical risk | TD03 | role="alert" | Output involves demographic data, low-representation category, or known bias signal |
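The routing above can be expressed as a lookup table, useful when wiring a detected failure to its surface. The key names are illustrative; the component codes and ARIA roles come from the table.

```typescript
type LiveRegionRole = "alert" | "status";

interface FailureRouting {
  components: string[];
  role: LiveRegionRole;
}

// Keys are illustrative identifiers; components and roles mirror the table.
const failureRouting: Record<string, FailureRouting> = {
  hallucination:   { components: ["AI04"],         role: "alert"  },
  refusal:         { components: ["AI04"],         role: "alert"  },
  rateLimit:       { components: ["AI04"],         role: "alert"  },
  timeout:         { components: ["AI04"],         role: "alert"  },
  lowConfidence:   { components: ["TD04", "TD05"], role: "status" },
  agentError:      { components: ["HC06"],         role: "alert"  },
  scopeEscalation: { components: ["HC05", "HC04"], role: "alert"  },
  biasRisk:        { components: ["TD03"],         role: "alert"  },
};
```

Note the one `role="status"` entry: low confidence is the only row that is not an interruption, which matches its treatment as a governance event rather than an error.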
Every AI error must surface the error type, a human-readable cause, and at least one concrete recovery action. Never show a generic error for a model-specific failure. Hallucination, refusal, rate limit, and timeout are distinct states that require distinct UI responses.
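One way to keep the distinct states distinct is a discriminated union, so a generic fallback cannot even type-check. A sketch: the field names and recovery strings are assumptions, not RAD copy.

```typescript
// Each failure is a distinct state carrying the data its UI needs.
type ModelFailure =
  | { kind: "hallucination"; unverifiedCitation: string }
  | { kind: "refusal"; constraint: string }
  | { kind: "rateLimit"; retryAfterSeconds: number }
  | { kind: "timeout"; elapsedMs: number };

// Exhaustive switch: every failure kind must map to concrete recovery
// actions; there is no generic branch to fall into.
function recoveryActions(f: ModelFailure): string[] {
  switch (f.kind) {
    case "hallucination": return ["flag citation", "regenerate with sources"];
    case "refusal":       return ["rephrase request", "review policy"];
    case "rateLimit":     return [`retry in ${f.retryAfterSeconds}s`];
    case "timeout":       return ["retry", "switch model"];
  }
}
```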
When an agent pauses due to an error, the recovery surface must show three things: what completed before the pause, what the error was, and what the human can do next — override and continue, roll back and stop, or escalate to senior review. Every irreversible agent action requires a pre-action warning. Every detected error requires a recovery surface.
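A possible shape for that recovery surface, with field names as illustrative assumptions:

```typescript
type RecoveryChoice =
  | "override_and_continue"
  | "roll_back_and_stop"
  | "escalate_to_review";

// Hypothetical payload for the recovery surface of a paused agent run.
interface PauseRecoverySurface {
  completedSteps: string[];                    // what finished before the pause
  error: { code: string; humanCause: string }; // what went wrong, in plain language
  choices: RecoveryChoice[];                   // what the human can do next
}

function buildRecoverySurface(
  completed: string[],
  code: string,
  cause: string
): PauseRecoverySurface {
  return {
    completedSteps: completed,
    error: { code, humanCause: cause },
    choices: ["override_and_continue", "roll_back_and_stop", "escalate_to_review"],
  };
}
```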
Confidence failures are not errors — they are governance events. When model confidence drops below the operator-configured threshold, the UI must surface a visual breach state, identify the source of uncertainty, and offer concrete recovery options: send for review, adjust threshold, or override with documented justification.
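A minimal sketch of the breach check, assuming a numeric confidence score and an operator-set threshold; the option names paraphrase the three recovery paths above.

```typescript
type BreachOption =
  | "send_for_review"
  | "adjust_threshold"
  | "override_with_justification";

interface ConfidenceState {
  breached: boolean;
  options: BreachOption[];
}

// Governance event, not an error: a breach surfaces recovery options
// rather than halting the run.
function evaluateConfidence(confidence: number, threshold: number): ConfidenceState {
  if (confidence >= threshold) {
    return { breached: false, options: [] };
  }
  return {
    breached: true,
    options: ["send_for_review", "adjust_threshold", "override_with_justification"],
  };
}
```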
The execution arc for a network of agents. Governing principle: trust doesn't transfer automatically. When an orchestrator spawns a subagent, the human's original consent does not silently extend to that subagent's actions, scope, or authority.
Before the orchestrator starts, the human must understand who is in the agent network, what each subagent is authorized to do, and how the agents relate to each other. Consent to run the orchestrator is not consent to run every subagent it may spawn. Show the full topology before the run begins.
When the orchestrator spawns a subagent during a run, that spawn event is itself a consent boundary. A mid-run gate interrupts the flow and requires the human to explicitly authorize the delegation — the new agent's purpose, scope, and what it will have access to — before it can begin operating.
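The spawn gate reduces to a consent lookup keyed by subagent. A sketch under the assumption that grants are tracked per subagent id:

```typescript
interface SpawnRequest {
  subagentId: string;
  purpose: string;
  requestedScopes: string[];
}

// The spawn itself is a consent boundary: without an explicit grant for
// this subagent, the delegation blocks rather than inheriting the
// orchestrator's consent.
function gateSpawn(
  req: SpawnRequest,
  grants: Set<string>
): "proceed" | "await_consent" {
  return grants.has(req.subagentId) ? "proceed" : "await_consent";
}
```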
Multi-agent runs don't execute linearly. Subagents run in parallel, block on dependencies, wait for human approvals, and fail independently. The execution state surface must show the full network simultaneously — who is running, who is blocked, who has completed, and whether any node has failed in a way that affects the overall run validity.
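Deriving run validity from the whole network at once might look like this; the state names are illustrative:

```typescript
type NodeState = "running" | "blocked" | "awaiting_approval" | "completed" | "failed";

// Validity is a property of the whole network: one failed node is enough
// to flag the run, even while every other node reports success.
function runValidity(states: Map<string, NodeState>): "intact" | "compromised" {
  for (const state of states.values()) {
    if (state === "failed") return "compromised";
  }
  return "intact";
}
```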
When one agent hands off to another, the human must be able to inspect exactly what was transferred: context carried forward, instructions included, scope granted, and any constraints attached to the delegation. The handoff receipt is a first-class record, not a log entry. It must be inspectable on demand, not reconstructed after the fact.
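A possible receipt shape, recorded at handoff time; all field names are assumptions chosen to match the four items above.

```typescript
// Hypothetical receipt shape; RAD does not specify these field names.
interface HandoffReceipt {
  readonly from: string;
  readonly to: string;
  readonly contextCarried: string[]; // context forwarded to the receiving agent
  readonly instructions: string;
  readonly scopesGranted: string[];
  readonly constraints: string[];    // limits attached to the delegation
  readonly issuedAt: string;         // ISO 8601, written at handoff time
}

// Frozen on write: the receipt is a record, not a log line to be
// reconstructed after the fact.
function recordHandoff(r: HandoffReceipt, ledger: HandoffReceipt[]): void {
  ledger.push(Object.freeze(r));
}
```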
The failure and accountability layer for multi-agent runs. Picks up where Pattern 03 ends and where Pattern 02 (Error & Recovery) runs out of scope — handling failures that are network-level, not single-agent, and conflicts that are structural rather than technical.
When one subagent in a network fails, the run does not automatically fail — but its validity becomes conditional. The partial failure surface must isolate which agents stopped, whether their failure affects downstream agents, and whether the orchestrator considers the run recoverable. The human must understand the failure's blast radius before deciding whether to continue, repair, or abort.
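Blast-radius isolation can be sketched as a reachability walk over the dependency graph, assuming each agent lists its upstream dependencies; the graph shape is an illustration, not RAD-specified.

```typescript
// deps maps each agent to the agents it depends on (its upstream inputs).
// An agent is affected if it is in the failed set or depends, directly or
// transitively, on an affected agent.
function blastRadius(deps: Map<string, string[]>, failed: string[]): Set<string> {
  const affected = new Set(failed);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [agent, upstream] of deps) {
      if (!affected.has(agent) && upstream.some((u) => affected.has(u))) {
        affected.add(agent);
        grew = true;
      }
    }
  }
  return affected;
}
```

The human sees the affected set before choosing to continue, repair, or abort; agents outside it are untouched by the failure.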
When two or more agents return contradictory results — different answers to the same question, conflicting recommendations, or incompatible data — the run cannot close without a human decision. The conflict must be surfaced explicitly, showing which agents produced which outputs, what the discrepancy is, and what the downstream consequences of each resolution path would be. Auto-resolution is never acceptable.
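Detecting a contradiction before close-out can be sketched as a grouping pass; the result shape is an assumption.

```typescript
interface AgentResult {
  agent: string;
  question: string;
  answer: string;
}

// Returns the questions on which agents disagree. A non-empty result
// means the run cannot close without a human decision; nothing here
// attempts to pick a winner.
function conflictingQuestions(results: AgentResult[]): string[] {
  const answers = new Map<string, string>();
  const conflicts = new Set<string>();
  for (const r of results) {
    const prev = answers.get(r.question);
    if (prev !== undefined && prev !== r.answer) conflicts.add(r.question);
    answers.set(r.question, r.answer);
  }
  return [...conflicts];
}
```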
Post-run accountability for multi-agent workflows cannot be reconstructed from individual agent logs. The aggregate audit composes the full network footprint — every agent's actions, every handoff, every consent event, every tool call — into a unified view that allows cross-agent inspection while preserving per-agent drilldown. The audit is the accountability record for the entire run, not a collection of individual receipts.
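Composing per-agent logs into one chronological network-level record, with the agent id preserved for per-agent drilldown, might look like this; the event shape is an assumption.

```typescript
interface AgentEvent {
  agent: string;     // preserved so drilldown by agent stays possible
  timestamp: string; // ISO 8601, which sorts correctly as a string
  event: string;
}

// Merge per-agent logs into a single chronological audit view.
function aggregateAudit(logs: Map<string, AgentEvent[]>): AgentEvent[] {
  return [...logs.values()]
    .flat()
    .sort((a, b) => a.timestamp.localeCompare(b.timestamp));
}
```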
RAD is the original work of Jackie Curry. All rights reserved. No portion may be reproduced, adapted, or incorporated into any product or system without express written permission.
Permitted: citation in academic or editorial contexts with full attribution.
© 2025 Jackie Curry. All rights reserved. Publication date: 2025.
For licensing inquiries, connect on LinkedIn.