Determinism wins
I saw a demo recently for an AI agent that handles client onboarding. New client comes in, agent reads the engagement letter, sets up the matter, sends the welcome pack, creates the folder structure, schedules the kick-off.
Looked impressive. The demo was slick. But the question that kept nagging me: what happens when the agent misreads a clause in the engagement letter and sets up the wrong fee structure? Who’s accountable for that? Where’s the audit trail?
The problem they were solving didn’t need an agent. It needed a workflow.
Two types of system
This distinction doesn’t get enough airtime, and I think it’s because the people selling AI solutions have no incentive to explain it.
Deterministic systems do the same thing every time. Same inputs, same outputs. An invoice workflow that routes approvals based on dollar thresholds. A compliance check that flags missing documents against a list. A client onboarding sequence that sends the right forms in the right order. Predictable. Testable. Auditable. When something breaks, you can point to the exact step and fix it.
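The invoice example can be sketched as a small routing function - the thresholds and approver tiers here are made up for illustration, not taken from any real system:

```python
# Hypothetical invoice-approval routing: same input, same output, every time.
# Thresholds and approver tiers are illustrative only.

def route_approval(amount: float) -> str:
    """Return the approver tier for a given invoice amount."""
    if amount < 1_000:
        return "auto-approve"
    elif amount < 10_000:
        return "team-lead"
    elif amount < 50_000:
        return "department-head"
    else:
        return "cfo"

# Identical inputs always give identical outputs - easy to test exhaustively,
# and when routing goes wrong you can point to the exact branch that fired.
assert route_approval(500) == "auto-approve"
assert route_approval(5_000) == "team-lead"
assert route_approval(75_000) == "cfo"
```

Every behaviour in that function is enumerable up front, which is exactly what makes it testable and auditable.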
Stochastic systems - which is what AI agents are under the hood - produce different outputs each time, even with identical inputs. Ask an agent to summarise a contract and you’ll get a slightly different summary every run. That’s not a flaw. It’s the nature of language models. They reason, adapt, make judgement calls. Powerful when you need that. Dangerous when you don’t.
The market right now is selling stochastic solutions to deterministic problems. And most buyers don’t know the difference.
The audit trail problem
If you work in professional services - law, accounting, engineering - you already know why this matters, even if you haven’t framed it this way. Your clients expect consistency. Your regulators expect traceability. Your PI insurer expects documentation.
When a workflow automation sends an email, you can see the trigger, the template, the timestamp, the recipient. When an agent sends an email, you get a probabilistic output shaped by a prompt and whatever context it decided was relevant. Try explaining that to a compliance auditor.
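To make that contrast concrete, here's a minimal sketch of what a workflow-driven send step can record - the field names and step identifiers are invented for illustration, not from any particular platform:

```python
# Minimal audit record for a templated, workflow-driven email send.
# Every field is knowable before the send happens - nothing is probabilistic.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class EmailAuditRecord:
    trigger: str       # which workflow step fired
    template_id: str   # exact, versioned template used
    recipient: str
    sent_at: str       # ISO-8601 timestamp

def send_welcome_email(recipient: str) -> dict:
    # (actual delivery omitted - this sketch shows only the audit trail)
    record = EmailAuditRecord(
        trigger="onboarding.step3.welcome",
        template_id="welcome-pack-v7",
        recipient=recipient,
        sent_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

log = send_welcome_email("client@example.com")
# An auditor can check every field against the workflow definition.
```

An agent's equivalent record would have to include the prompt, the model version, and the retrieved context - and still couldn't tell you why this particular wording came out.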
I’ve watched firms get excited about AI agents for client communications, then hit the wall when they realise they can’t guarantee the agent won’t hallucinate a deadline, invent a clause, or phrase something that creates a liability. The risk profile is completely different from a templated workflow that does the same thing every time.
Deterministic systems are boring. They’re also the ones you can defend.
When agents are the right call
I build with agents. I use them daily. This isn’t an anti-agent argument - it’s a right-tool-for-the-job argument.
Agents earn their place when the inputs are genuinely messy. A client sends a rambling email with three different requests buried in it - an agent can parse that and route the actions. A rule-based system can’t, because the input isn’t structured. Or reviewing 200 pages of project documentation to find clauses relevant to a variation claim. A human could do it (slowly). An agent does it in minutes, and the output is a starting point for review, not a final answer.
The pattern is consistent: agents work where the inputs are ambiguous, the judgement is real, and a human checks the output before it matters. Drafting a first-pass proposal. Summarising meeting notes. Identifying which client enquiries are likely to convert based on how they’re worded. Work where the output should be different each time because the context is different each time.
The moment the output needs to be the same every time? That’s a workflow. Not an agent.
Follow the money
There’s a reason agents are everywhere in the pitch deck: they’re more expensive to build.
A workflow automation might cost a few thousand dollars and run indefinitely with near-zero maintenance. An agent needs prompt engineering, testing, guardrails, monitoring, and ongoing model costs. The economics of selling consulting services favour the complex solution. Always have.
When the market rewards the word “agentic,” everything becomes agentic. Gartner put AI agents at the Peak of Inflated Expectations in their 2025 Hype Cycle. They’re predicting 40% of agentic projects will fail by 2027 - not because agents don’t work, but because organisations deploy them where a simpler solution would do the job better.
That 40% number doesn’t surprise me at all.
The best systems use both
The real answer (predictably) is that it's not agents versus automation. It's knowing which one fits where.
I wrote about this in the context of my own tooling - using hooks to enforce “deterministic control over non-deterministic behaviour.” The AI can be creative where creativity helps. Rules get enforced by rules, not by asking the model nicely.
That onboarding demo I mentioned at the top? The version that would actually work is 90% deterministic. Forms sent, documents collected, records created, reminders triggered - all workflows. The one stochastic step: an AI that reads the client’s initial enquiry and drafts a personalised welcome message, reviewed by a human before it sends. The workflow handles the predictable parts. The agent handles the part that genuinely needs judgement.
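That split can be expressed as a pipeline where the one stochastic step is isolated and gated behind human review. This is a sketch with a stubbed-out model call, not a production design - the step names and the stub are assumptions for illustration:

```python
# Hybrid onboarding pipeline: deterministic steps wrap one gated AI step.
# draft_with_llm is a stub standing in for whatever model API you use.

def draft_with_llm(enquiry: str) -> str:
    # The stochastic step: real output would vary run to run. Stubbed here.
    return f"Welcome aboard! We noted your enquiry about: {enquiry}"

def onboard_client(enquiry: str, human_approves) -> list[str]:
    steps = []
    steps.append("forms_sent")           # deterministic
    steps.append("documents_collected")  # deterministic
    steps.append("matter_created")       # deterministic

    draft = draft_with_llm(enquiry)      # the one stochastic step
    if human_approves(draft):            # gate: nothing sends unreviewed
        steps.append("welcome_sent")
    else:
        steps.append("welcome_escalated")

    steps.append("reminders_scheduled")  # deterministic
    return steps

# With approval granted, the full sequence runs in order:
result = onboard_client("fee structure for a new engagement", lambda d: True)
```

The design choice is that the model's output never reaches a client directly - the deterministic steps don't depend on it, and the send is conditional on a human decision.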
I’ve written before about the hierarchy underneath this - simplify first, automate second, apply AI third. The deterministic/stochastic framing sharpens that. It’s not just about sequencing. It’s about matching the nature of the tool to the nature of the problem. And the foundations need to be there before either approach delivers.
The systems question
My background is systems, not software. Eighteen years of infrastructure, integration, and figuring out how things connect. That shapes how I look at this.
A systems engineer doesn’t start with the technology. They start with the problem. What are the inputs? What are the outputs? What are the failure modes? What needs to be traceable? What happens when it breaks at 2am and nobody’s around?
When you ask those questions honestly, the answer is usually boring. A webhook. A scheduled task. A database query. A well-designed form. The agent is the exception, not the default.
I’m comfortable with that. The goal was never to use the most impressive technology; it was always to solve the problem in the simplest way that works reliably.
One question
Next time someone pitches you an AI agent, ask: could this be a workflow instead?
If the inputs are structured, the logic is definable, and the output needs to be consistent - it should be a workflow. Cheaper to build, easier to maintain, auditable end to end.
If the inputs are genuinely ambiguous, the judgement is real, and the output should vary based on context - now you’ve got a case for an agent. With a human in the loop and a plan for when it gets it wrong.
The firms that get the most from AI won’t be the ones who deployed agents first. They’ll be the ones who were honest about what needed an agent and what didn’t.
Determinism isn’t the exciting answer. It’s the one that works.