Jackie Curry
For more than twenty years, I’ve worked at the boundary where consequential decisions get made: fraud detection, healthcare, and enterprise AI.
The task has never changed: study the humans, understand the system, and design the space where the two meet.
Today that space is the human–AI boundary. It is the work of making autonomous systems legible, trustworthy, and accountable.
The same conviction, across every context
At Capital One, it was fraud detection. The challenge was protecting an analyst's ability to see the full reasoning chain so they could exercise real judgment, not just rubber-stamp an AI verdict. That required deeply understanding how analysts actually worked — not how the system assumed they did.
At Memorial Sloan Kettering, it was care coordination, inside one of the most mature research organizations I've encountered. Every design decision was grounded in how patients actually behaved under maximum cognitive load and emotional stress.
At SnapLogic, it is enterprise AI. Taking a research experiment built without design or users in mind and transforming it into a trusted intelligence layer that people rely on and return to.
The industries change. The problem does not.
AI systems make decisions that affect people. Those decisions must be understandable to the humans responsible for them.
I've spent my career designing the layer where humans and AI share responsibility.
As AI agents take on more autonomous decision-making in enterprise environments, the question of human accountability becomes existential: not just for users, but for the organizations deploying these systems.
This isn't a new problem for me. It's the work I've been doing all along.
Design without research
is just decoration
I've started research practices from scratch at Snapfish, Theorem, and CareFirst BCBS FEPOC — not just run studies, but stood up the infrastructure, methods, and habits that make research stick. At Shutterfly, I collaborated with a research team and owned my own studies and interviews. At MSK and Capital One, I worked inside two of the most rigorous research organizations I've encountered. At SnapLogic, the research role was eventually eliminated and the team adapted — working through sales associates, customer advocates, and qualitative data.
The output isn't decks. It's decisions. What to build, what to kill, where the model is wrong about its own users. That's what research is for.
Three Contexts,
One Conviction
Full Impact Stories →
Human Judgment in the Loop
Fraud detection interfaces that preserved the analyst's ability to see, question, and defend the AI's reasoning, before the field had a name for it.
Dignity as a Design Requirement
End-to-end care coordination for cancer patients, where the stakes of getting UX wrong are measured in human suffering, not bounce rates.
From Experiment to Trusted Platform
Inherited a research AI with no design involvement. Built the complete skill ecosystem and intelligence layer that enterprise teams now rely on daily.
"AI without human understanding is not good enough. The answer is not another dashboard. We need adaptive, agentic, story-driven interfaces that surface the right insight at the right time and in the right form."
Jackie Curry · Read the essays →