Agentic Snippet — Formula Tracing
The most meaningful agentic work I've shipped, and why deliberately sequencing the human out of the loop matters more than going fully autonomous on day one.
"The most meaningful agentic work I've shipped was at CaptivateIQ, and the reason I'd describe it as agentic isn't just the AI we used; it's how we architected the path toward autonomous action.
The context was payout error troubleshooting. Admins were spending hours, sometimes days, tracing why a commission number was wrong. We had an opportunity to apply AI, and the tempting version was obvious: build an agent that inspects the dependency graph, identifies the root cause, and tells the admin exactly what to fix. Fully autonomous. That was the vision.
But we made a deliberate decision to stage it. The first version used deterministic code to map column-level dependencies — the graph itself — and an LLM to generate natural language explanations for each node. Plain English descriptions of what each formula was doing, with business context layered in. The human still made the diagnosis and the fix. We kept the agent out of the decision loop intentionally — we didn't yet have the data the agent would need to be reliable, and we had no cascading impact analysis in place. Someone's paycheck was on the line. A confident wrong answer was worse than no answer.
What we shipped was designed explicitly as the foundation for the agent to come in safely, once we had the guardrails, the data, and the trust. The result: 70% adoption in the first month and a 65% reduction in payout error support tickets over three months.
The thing I took from that: agentic design isn't just about what the agent does — it's about sequencing the human out of the loop deliberately, at the right moment, with the right safeguards. Shipping a half-ready agent in a high-stakes context doesn't build trust. It destroys it."
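The staged split described above (deterministic code for the dependency graph, the model only for explanations) can be sketched roughly as follows. This is a minimal illustration, not the actual system: the column names, formulas, and the `explain_node` placeholder standing in for the LLM call are all invented.

```python
import re

def build_dependency_graph(formulas):
    """Map each output column to the other columns its formula references.

    `formulas` is {column_name: formula_string}. Any identifier in a
    formula that names another column counts as a dependency. This step
    is fully deterministic: no model involvement in the graph itself.
    """
    names = set(formulas)
    graph = {}
    for col, formula in formulas.items():
        tokens = set(re.findall(r"[A-Za-z_]\w*", formula))
        graph[col] = sorted(tokens & (names - {col}))
    return graph

def trace(graph, column):
    """Walk the upstream dependencies of `column` in evaluation order,
    so an admin can read the chain from raw inputs down to the payout."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)
    visit(column)
    return order

def explain_node(column, formula):
    # Placeholder for the LLM call that rendered each node's formula in
    # plain English with business context; a template stands in here.
    return f"{column} is computed as: {formula}"

# Invented example formulas for illustration only.
formulas = {
    "base_commission": "deal_amount * rate",
    "accelerator": "base_commission * tier_multiplier",
    "payout": "base_commission + accelerator - clawback",
}
graph = build_dependency_graph(formulas)
for node in trace(graph, "payout"):
    print(explain_node(node, formulas[node]))
```

The point of the split: the graph and the trace are code the team can verify, while the model is confined to the explanation layer, where a wrong answer is annoying rather than a wrong paycheck.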