Manitou Advisory
Operations

Your AI Tools Are Already Making Decisions. Do You Know Which Ones?

Phil Bolton · March 19, 2026 · 4 min read

Last month, a client's controller flagged a problem. Their AP automation tool had been miscategorizing a vendor's invoices for three months. The amounts were small enough that nobody caught it until the quarterly close. By then, the errors had cascaded into their departmental P&Ls, their variance analysis, and a board deck that had already been presented.

The tool wasn't broken. It was doing exactly what it was designed to do — categorize based on patterns. But nobody had defined who was responsible for reviewing its output, how often, or what "good" looked like.

This is the governance gap, and almost every company between $3M and $20M in revenue has it.

The problem isn't the technology

The tools are genuinely good. AI-powered categorization, anomaly detection, cash forecasting, invoice processing — we deploy these across nearly every client engagement. They save real time and reduce real errors. That part isn't in question.

The problem is that most companies adopt these tools the way they adopt any SaaS product: sign up, configure it, and move on. Nobody writes down what the tool is supposed to do, how its outputs get verified, or what happens when it gets something wrong.

When you're a 15-person company, this feels fine. The founder or controller reviews everything anyway. But at 40 people with three business lines and a board expecting clean monthly reporting, "someone eyeballs it" is not a control framework.

What governance actually looks like at this stage

This isn't about building a compliance department. It's about five decisions that take an afternoon to make and save you months of cleanup later.

1. Inventory what's running. List every tool that touches financial data and what it does autonomously. Most companies are surprised by this list. The expense tool auto-categorizes. The banking integration auto-reconciles. The revenue platform auto-recognizes. That's a lot of automation with no oversight framework.

2. Define the review cadence. Every automated output needs a human checkpoint. Not because AI is unreliable, but because your business changes faster than your tool configurations. New vendors, new revenue streams, new cost centers — these all create edge cases that automation handles poorly until someone retrains it.

3. Assign ownership. One person owns each tool's output quality. Not "the finance team." A name. When the AP tool miscategorizes, there's a specific person whose job it is to catch that within a defined window.

4. Set a threshold for manual review. Not everything needs a human eye on every transaction. Define the dollar amount, the category, or the anomaly score that triggers a manual review. This is how you get the efficiency benefit of automation without the risk of unmonitored outputs.

5. Document the decisions. Write down what each tool does, who owns it, and what the review process is. This takes an hour. It saves you when the auditor asks, when the board asks, or when the person who set it up leaves the company.
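For readers who configure these rules themselves, the threshold idea in step 4 is simple enough to express in a few lines. This is a minimal sketch, not a prescription: the field names (`amount`, `category`, `anomaly_score`) and the specific cutoffs are illustrative assumptions, not values from any particular tool.

```python
# Illustrative manual-review trigger, per step 4 above.
# All thresholds and category names are placeholder assumptions.

REVIEW_DOLLAR_THRESHOLD = 5_000        # flag anything at or above this amount
SENSITIVE_CATEGORIES = {"legal", "tax", "related-party"}
ANOMALY_SCORE_CUTOFF = 0.8             # tool-reported score in [0, 1]

def needs_manual_review(amount: float, category: str, anomaly_score: float) -> bool:
    """Return True if an automated output should be routed to a human."""
    return (
        amount >= REVIEW_DOLLAR_THRESHOLD
        or category in SENSITIVE_CATEGORIES
        or anomaly_score >= ANOMALY_SCORE_CUTOFF
    )

# A small, ordinary transaction passes straight through...
print(needs_manual_review(120.00, "office-supplies", 0.10))  # False
# ...while a high-anomaly one is held for review regardless of size.
print(needs_manual_review(120.00, "office-supplies", 0.92))  # True
```

The point of writing the rule down, even informally like this, is that the thresholds become an explicit, reviewable decision rather than a default buried in a tool's settings.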

The companies that get the most value from AI in finance aren't the ones who adopt the most tools. They're the ones who know exactly what each tool is doing and have a plan for when it's wrong.

How we think about this with clients

When we build or rebuild a client's finance stack, the governance layer goes in at the same time as the tools. Not after. We treat the automation configuration and the control framework as a single deliverable because they are.

This means the monthly close process includes explicit checkpoints for every automated workflow. The reporting package includes a note on any AI-assisted outputs that required manual correction. And the tool inventory gets reviewed quarterly as the business changes.

It's not glamorous work. But it's the difference between a finance function that scales cleanly and one that accumulates invisible errors until something breaks at the worst possible time.

The real question

The pressure to adopt AI tools is real, and for most growing companies, the tools are worth adopting. But the question isn't "should we use AI in finance?" Most of you already are.

The question is: if one of those tools made a bad call last Tuesday, would you know?

Phil Bolton

Founder & Principal at Manitou Advisory
