Keeping the analyst in the reasoning chain

Fraud detection AI works by aggregating signals (transaction patterns, behavioral anomalies, network connections) into a risk score. The engineering instinct is to surface that score cleanly. One number. High confidence. Clear action.

I pushed back on that instinct. A single score doesn't give an analyst the information they need to exercise real judgment. It gives them something to agree with. And when analysts are agreeing with scores rather than evaluating evidence, they're no longer in the loop; they're just the signature at the end of it.

The interfaces I designed kept the signal chain visible. Connected incidents, contributing factors, the relative weight of different data sources. Analysts could trace the reasoning. They could question it. They could close a case and know exactly why, and be able to explain that reasoning to a supervisor, a compliance officer, or a customer who called to dispute a declined transaction.
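The idea of surfacing contributing factors instead of a bare score can be sketched in a few lines. This is a hypothetical illustration, not the production system: the factor names, weights, and `explain` function are invented here to show the shape of the interface, a composite score that always travels with its per-factor breakdown.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str      # human-readable label, e.g. "transaction pattern" (hypothetical)
    signal: float  # normalized signal strength in [0, 1]
    weight: float  # relative weight of this data source

def explain(factors: list[Factor]) -> dict:
    """Return the composite score together with the per-factor breakdown,
    so an analyst can see *why* the score is what it is, not just how high."""
    contributions = {f.name: f.signal * f.weight for f in factors}
    total_weight = sum(f.weight for f in factors)
    score = sum(contributions.values()) / total_weight
    # Rank factors by contribution so the strongest evidence leads the display.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return {"score": round(score, 3), "factors": ranked}

case = [
    Factor("transaction pattern", signal=0.9, weight=0.5),
    Factor("behavioral anomaly", signal=0.4, weight=0.3),
    Factor("network connection", signal=0.1, weight=0.2),
]
result = explain(case)
```

The design point is in the return value: the score never appears without its ranked evidence, so the analyst evaluates the reasoning rather than rubber-stamping the number.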

This was explainability work before the field had a name for it. The underlying conviction was simple: if a human is accountable for a decision, they need to understand it. Collapsing that understanding into a score doesn't simplify the interface; it removes the human from the decision while leaving them on the hook for the outcome.

The Principle That Held

Show the reasoning, not just the verdict. Design for accountability, not just efficiency. The human in the loop must be capable of real judgment, not just approval.

Dignity as a design requirement

Most UX failures are inconvenient. At Memorial Sloan Kettering, they're measured differently. The patients using the care coordination system I designed were navigating cancer treatment: scheduling appointments, communicating with care teams, understanding complex treatment plans, managing insurance, and maintaining some sense of normalcy and control over their own healthcare journey.

The design challenge wasn't technical complexity. It was emotional reality. People who are frightened, exhausted, and overwhelmed don't have patience for unclear information architecture. They don't have bandwidth for interfaces that make them feel like a patient number rather than a person. And they don't have the luxury of calling support when the app doesn't work.

Building this from the ground up meant working within one of the most mature research organizations I've encountered. I worked closely with a dedicated UX research team, grounding every design decision in how patients actually behaved, not how we assumed they would. Scheduling flows, care team communication, treatment plan comprehension, access to the full MSK hospital network, all of it had to work under conditions of maximum cognitive load and emotional stress.

Across the scope of this app, from the first appointment booking to ongoing healthy lifestyle options, the through-line was the same: this person is going through something serious, and the interface is not allowed to add to that burden. Clarity is care. Simplicity is respect. Getting it right is not optional.

What This Taught

The stakes of poor UX are never just usability scores. In healthcare, in finance, in any domain where the system touches people's real lives, design is a form of responsibility. You build it like it matters, because it does.

From research experiment to the platform people depend on

When I inherited SnapGPT, it was a research project: built by the chief data scientist and an offshore development team, with no design involvement, no user testing, and no product integration. It lived at the edge of the platform, separate from the core experience, and users didn't know it existed.

Over two years, I redesigned it from the ground up. Not just the interface, the conceptual model. What is this for? Who is it for? What should they be able to do with it that they couldn't do before?

The answer became a complete skill ecosystem: pipeline documentation, pipeline analysis, intelligent pipeline building, research assistance, contextual help, Snap configuration. Each capability was designed for a specific user need, progressively disclosed so users could discover the system's depth without being overwhelmed by it.

The metric I cared about most wasn't adoption. It was the moment when a user said "I didn't know it could do that" and came back to try something harder the next day. That's the signal that trust has been established: not passive acceptance, but expanding engagement.

Simultaneously, I expanded the design vision beyond a chatbot, positioning SnapLogic as a governed orchestration fabric for enterprise AI execution. That meant designing for Agent Creator (including Agent Visualizer and Prompt Composer), MCP Servers and Tools, governance surfaces, and audit interfaces that give human administrators meaningful oversight of autonomous AI action. The research experiment became the platform. The platform became the strategy.

Current Scope

SnapGPT · Agent Creator · Agent Visualizer · Prompt Composer · MCP Servers & Agent Tools · Designer Canvas · Admin Manager · Governance Surfaces · Expression Builder · Execution Replay

What I Lead at SnapLogic

AI Interaction

SnapGPT

The primary AI assistant layer. Full skill ecosystem for pipeline documentation, analysis, building, research, and help.

Agentic Systems

Agent Creator

Design system for building, configuring, and deploying autonomous AI agents — including Agent Visualizer and Prompt Composer — within enterprise orchestration workflows.

Protocol

MCP Server & Gateway

Model Context Protocol implementation: the enterprise interface layer for AI model integration and governance.

Pipeline Design

Designer Canvas

Core pipeline authoring environment. Where integration engineers build and visualize data flows.

Governance · AI Design

Governance Surfaces

AI design for audit interfaces, access controls, and accountability layers, within a governance program led by a dedicated director.

Observability · AI Design

Monitor & Execution Replay

AI design for historical agent execution replay and step-level inspection, within an observability program led by a dedicated director.