Turning complex ideas into useful products.
I help teams make AI, agentic systems, and data-intensive products easier to understand, govern, and trust.
Specialized expertise at the intersection of AI, human judgment, and enterprise systems.
Agentic Workflows
Designing agent builders, execution experiences, tool use, approvals, and autonomy controls.
Learn More →
AI Trust & Governance
Creating transparency, auditability, disclosure, confidence, and oversight across the AI lifecycle.
Learn More →
Enterprise AI Platforms
Building scalable, observable, and human-centered experiences for complex enterprise systems.
Learn More →
From prompt to pull request.
Understanding users, context, and the real problem worth solving still comes first — that part requires human judgment. But once you know what needs to exist and who it's for, this is how I use Claude Code, the Figma MCP, and agent-assisted iteration to move from design intent to reviewed PR. Accurate as of May 2026. AI tooling moves fast, and this workflow will keep changing.
Open the local repo in your IDE. Connect your design system, project context, and Figma MCP so the agent has access to the actual codebase — not a blank slate.
Ask the agent to read the relevant Figma file or frame. The goal is to understand the existing layout, components, spacing patterns, and design system rules before touching anything. You can also point to an existing file in the codebase instead.
Give the agent a clear brief: what the feature should do, which file to update, where to write the design in Figma, and what should not change.
Let the agent create an initial design direction in Figma. This is a starting point — the value is speed to something reviewable, not perfection on the first output.
Review the generated design and make direct adjustments to spacing, hierarchy, layout, copy, and component usage. This is where design judgment comes back into the loop.
Share the revised Figma frame with the agent. The MCP reads your actual design decisions — not just the original prompt — and updates the local codebase from there.
Review locally for visual accuracy, responsiveness, token usage, and accessibility before handing to engineering. Then ask the agent to create a branch, commit the changes, and open a pull request.
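The final hand-off step above can be sketched in shell. This is a minimal, self-contained illustration, not the agent's actual commands — the branch name, file, and commit message are all hypothetical, and it runs in a scratch repo so the git parts are verifiable on their own:

```shell
# Sketch of the branch/commit/PR hand-off (all names illustrative).
# Uses a throwaway repo; in practice this runs in the project repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "you@example.com"
git config user.name "Example"

# Stand in for the file the agent updated from the Figma frame.
echo "export const FeaturePanel = () => null;" > FeaturePanel.tsx

# Branch, stage, and commit the reviewed change.
git checkout -q -b design/feature-panel
git add FeaturePanel.tsx
git commit -q -m "feat: implement FeaturePanel from reviewed Figma frame"
echo "branch: $(git branch --show-current)"

# In a real repo, the agent would then push and open the PR, e.g.:
#   git push -u origin design/feature-panel
#   gh pr create --fill --base main
```

The push and `gh pr create` lines are left as comments because they require a remote and GitHub CLI auth; the point is that the hand-off is ordinary git hygiene, reviewable like any other PR.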
Every file read, tool call, and diff becomes part of the agent's working context — that's not just a prompting habit, it's the cost of operating inside a real codebase. Figma stays in the loop because visual iteration is often faster than another paragraph of instructions.
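The MCP connection from the first step of this workflow can be sketched as a project-level config. This is an illustrative fragment only — the server name and URL below assume Figma's local Dev Mode MCP server and Claude Code's `.mcp.json` convention; check both tools' current docs before copying it:

```json
{
  "mcpServers": {
    "figma": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

Keeping the config in the repo means every teammate's agent gets the same design-tool access, for the same reason the repo itself is the context: the agent should see what the team sees.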
Three Contexts,
One Conviction
Human Judgment in the Loop
Lead UX Designer
Fraud analysts were being pushed toward rubber-stamping AI verdicts, losing the ability to see, question, and defend the reasoning chain.
Designed interfaces that preserved analyst agency inside AI-assisted fraud detection — before the field had a name for it.
Human accountability requires human visibility.
02 Memorial Sloan Kettering · Clinical Systems
Dignity as a Design Requirement
Sr. Product Design Lead
Cancer patients navigating care coordination under maximum cognitive load, where UX failures have consequences measured in suffering.
End-to-end care coordination designed with dignity as a non-negotiable constraint, grounded in rigorous patient research.
The stakes of getting UX wrong are not always bounce rates.
03 SnapLogic · AI/ML Platform
From Experiment to Trusted Platform
Director of Product Design
A research AI with no design involvement, no adoption path, and no trust from enterprise teams who needed to rely on it daily.
Built the complete skill ecosystem and intelligence layer that enterprise teams now use in production agentic workflows.
Trust is earned through consistency, legibility, and control.
A design system for agentic experiences
RAD is an open framework for making autonomous AI behavior legible, interruptible, and accountable — built from real problems in enterprise AI product work.
Explore RAD →
01 Agentic Execution Flow
The full arc of an autonomous agent run — from task initiation to completion and review.
02 Error & Recovery
Distinct failure modes in AI systems — with patterns for graceful degradation and human re-entry.
03 Multi-Agent Orchestration
Managing coordination, delegation, and visibility across multiple agents operating in parallel.
04 Multi-Agent Governance
Oversight, compliance, and accountability structures across distributed agent systems.
05 Disclosure & Transparency
Surfacing AI reasoning, confidence, and provenance to users who need to understand and verify.
06 Human-in-the-Loop Controls
Preserving meaningful human agency in automated workflows — approvals, overrides, and decision checkpoints.
"Trust is not a feeling. It is a functional state."
I write about what it actually takes to design for AI systems that humans can rely on — accountability, legibility, and the unglamorous work of making complex behavior comprehensible.
Read the Thinking →
Open to the right engagement.
Available for fractional, contract, advisory, and senior IC design work in AI product strategy, agentic UX, and enterprise platform design.
Get in Touch →