Keeping the analyst in the reasoning chain
Fraud detection AI works by aggregating signals (transaction patterns, behavioral anomalies, network connections) into a risk score. The engineering instinct is to surface that score cleanly. One number. High confidence. Clear action.
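In code, that instinct looks something like this. A minimal sketch, assuming a simple linear weighted model; the signal names and weights are hypothetical, not taken from any real system:

```python
# Hypothetical linear risk model: each signal contributes weight * value.
# Signal names and weights are illustrative only.
SIGNAL_WEIGHTS = {
    "velocity_anomaly": 0.35,  # unusually rapid transaction bursts
    "geo_mismatch": 0.25,      # location inconsistent with account history
    "device_novelty": 0.15,    # first time this device has been seen
    "network_link": 0.25,      # connection to accounts flagged for fraud
}

def risk_score(signals: dict[str, float]) -> float:
    """Collapse every signal into a single number in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)
```

One number comes out; everything that produced it disappears.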
I pushed back on that instinct. A single score doesn't give an analyst the information they need to exercise real judgment. It gives them something to agree with. And when analysts are agreeing with scores rather than evaluating evidence, they're no longer in the loop; they're just the signature at the end of it.
The interfaces I designed kept the signal chain visible: connected incidents, contributing factors, the relative weight of different data sources. Analysts could trace the reasoning. They could question it. They could close a case knowing exactly why, and explain that reasoning to a supervisor, a compliance officer, or a customer calling to dispute a declined transaction.
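Continuing the hypothetical model above (again a sketch, with invented function and field names, not the actual interface): instead of returning only the score, return each signal's weighted contribution, so the analyst can see which factors drove it and by how much.

```python
def risk_breakdown(signals: dict[str, float]) -> dict:
    """Return the score plus each signal's weighted contribution,
    sorted so the dominant factors surface first.
    (Reuses the hypothetical SIGNAL_WEIGHTS from the earlier sketch.)"""
    contributions = {
        name: round(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0), 3)
        for name in SIGNAL_WEIGHTS
    }
    return {
        "score": round(sum(contributions.values()), 3),
        "factors": sorted(contributions.items(),
                          key=lambda kv: kv[1], reverse=True),
    }

case = {"velocity_anomaly": 0.9, "geo_mismatch": 0.2,
        "device_novelty": 1.0, "network_link": 0.1}
print(risk_breakdown(case))
# {'score': 0.54, 'factors': [('velocity_anomaly', 0.315),
#  ('device_novelty', 0.15), ('geo_mismatch', 0.05), ('network_link', 0.025)]}
```

Even a breakdown this crude changes the analyst's question from "do I agree with 0.54?" to "does a velocity spike plus a new device actually look like fraud here?"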
This was explainability work before the field had a name for it. The underlying conviction was simple: if a human is accountable for a decision, they need to understand it. Collapsing that understanding into a score doesn't simplify the interface; it removes the human from the reasoning while leaving them on the hook for the outcome.
Show the reasoning, not just the verdict. Design for accountability, not just efficiency. The human in the loop must be capable of real judgment, not just approval.