UX in Agentic Systems: Opportunities, Challenges, and Ethical Considerations

Most companies are treating AI as a minor enhancement to existing interfaces. That's the wrong frame. Agentic systems require a fundamental rethinking of how autonomy, transparency, and human control are designed, not patched on afterward.

The article maps the design challenges that matter most as AI begins acting on behalf of users rather than waiting to be asked: How do we preserve user agency when systems act autonomously? How do we design accountability when the reasoning is invisible? And how do we measure success when the system anticipates needs rather than responding to requests?

Read on LinkedIn →
Original Framework
RAD

Responsible AI Design System

© 2025 Jackie Curry · All Rights Reserved

Agentic AI systems don't just respond; they act. RAD is a design system for those surfaces: standardized components and interaction patterns that make autonomous behavior legible, interruptible, and accountable. Ten components. Every trust failure addressed by design.

Five component categories, each addressing a distinct accountability failure mode: disclosure alerts, human-in-the-loop controls, transparency popovers, bias-check prompts, and audit trail widgets. Together they form a design layer that keeps humans genuinely informed, in control, and accountable for AI action.
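
To make that layer concrete, here is a minimal TypeScript sketch of what the component contracts might look like. The interface names and fields are illustrative assumptions for this page, not the published RAD API.

    // Illustrative sketch; names and fields are assumptions, not the RAD API.

    // Disclosure alert: tells the user an AI acted before they rely on the output.
    interface DisclosureAlertProps {
      action: string;              // plain-language description of what the AI did
      occurredAt: Date;
      onViewReasoning: () => void; // opens the transparency popover
    }

    // Human-in-the-loop control: no consequential action without explicit consent.
    interface ApprovalGateProps {
      proposedAction: string;
      onApprove: () => void;
      onReject: (reason?: string) => void;
      onPause: () => void;         // interruptibility as a first-class affordance
    }

    // Transparency popover: the basis for a decision, on demand.
    interface TransparencyPopoverProps {
      inputsUsed: string[];        // which signals informed the decision
      confidence: number;          // 0 to 1, shown alongside its limitations
    }

    // Bias-check prompt: forces review when a decision touches sensitive attributes.
    interface BiasCheckPromptProps {
      flaggedAttributes: string[];
      onAcknowledge: () => void;
    }

    // Audit trail widget: an append-only record of agent actions and human approvals.
    interface AuditTrailEntry {
      actor: "agent" | "human";
      description: string;
      occurredAt: Date;
    }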

Explore the Framework →

Three Ideas I Keep Coming Back To

Full essays available on request. Opening arguments below.

Essay 01
Human-in-the-loop design

The Moment You Stop Explaining

There's a specific moment in the life of every AI system when someone stops asking what it decided and starts assuming it was right.

I've watched it happen in fraud detection. An analyst gets a score. The score is high. They close the case. What used to be a process of investigation (pulling the signal chain, tracing the connected incidents, asking "does this actually look like fraud?") becomes a rubber stamp. The AI becomes the judgment, and the human becomes the signature.

This is not a technology failure. It's a design failure.

When we collapse AI reasoning into a single number, we're not just simplifying an interface. We're quietly removing the human from the loop while leaving them on the hook for the outcome. Explainability is not a feature you add at the end of a project. It's a structural commitment to keeping humans capable of exercising real judgment, not just the appearance of it.
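
One way to see the design failure is in the shape of the data itself. A hypothetical sketch, with field names invented for illustration rather than taken from any real fraud product:

    // Hypothetical sketch; field names are illustrative, not a real schema.

    // What the analyst usually gets: a number that invites a rubber stamp.
    interface OpaqueResult {
      caseId: string;
      riskScore: number; // 0 to 100, all reasoning collapsed into one value
    }

    // What keeps judgment in the loop: the score plus the material to question it.
    interface ExplainableResult {
      caseId: string;
      riskScore: number;
      topSignals: { signal: string; weight: number }[]; // why the score is high
      connectedIncidents: string[];  // the chain an analyst would trace
      knownFailureModes: string[];   // conditions where this model is often wrong
    }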

· · · Full essay available on request
Essay 02
Agent transparency

What Agents Need to Show You

A human colleague who disappears for three hours and comes back with a finished report will be asked: How did you do that? What sources did you use? Did you check with legal?

An AI agent that does the same thing is usually applauded.

We've built a strange double standard into how we evaluate autonomous work. When humans act on our behalf, we expect accountability by default. When agents act on our behalf, we tend to accept the output and move on, at least until something goes wrong.

The shift to agentic systems means we need a new vocabulary for transparency. Not just "what did the model decide" but "what did the agent do, in what order, and on what basis?" Execution transparency isn't about showing users everything. It's about making the right things visible at the right level of abstraction.
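
A rough sketch of what that vocabulary could look like as a data structure. The shape and names below are assumptions made for illustration; no standard schema is implied.

    // Assumed shape for an agent execution trace, sketched for illustration.
    interface AgentStep {
      order: number;          // what the agent did, in what order
      action: string;         // e.g. "queried the billing database"
      basis: string;          // on what basis: why this step was taken
      sources: string[];      // what it consulted
      needsReview?: boolean;  // flags steps a human should check (e.g. legal)
    }

    interface ExecutionTrace {
      goal: string;
      steps: AgentStep[];
      detail: "headline" | "steps" | "full"; // level of abstraction, chosen by the user
    }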

· · · Full essay available on request
Essay 03
Trust as design material

Trust Is Not a Feeling

In most product conversations, trust is treated like a brand attribute. Something you earn over time, communicate through tone, and measure in NPS. The goal is to make users feel safe.

I think that's the wrong frame, and in AI systems, it's a dangerous one.

Trust, in the context of consequential AI, is not a feeling. It's a functional state. It means: the user understands enough about how this system works to know when to rely on it and when to question it. They can calibrate. They know the system's boundaries, its failure modes, the conditions under which it performs well and the conditions under which it doesn't.

A user who trusts an AI because it looks polished and confident is not in a functional trust state. They're in a dependency state. And dependency breaks catastrophically. Designing for trust is not the same as designing for confidence, and confusing the two produces interfaces that feel safe right until the moment they fail.

· · · Full essay available on request

From the Feed

1,001 impressions

"AI is moving fast... maybe too fast for our own comfort. We're treating LLMs like reasoning partners, when they're really just mirrors."

On the gap between how we talk about AI and what it actually is.

On AI interaction design

"The answer is not another dashboard. We need adaptive, agentic, story-driven interfaces that surface the right insight at the right time and in the right form."

Responding to Julie Zhuo on the future of product interfaces.

On the next UI paradigm

"The next UI for AI isn't conversation, it's context. The interface adapts to where the user is, what they're doing, and what they need next."

On Altman and Nadella's competing visions, and why both miss the point.

318 impressions

"The effort to do something is now the same as the effort to think about doing it."

On what AI tooling actually changes about how designers and engineers work.