Guides, Not Governors: Our 2026 AI Decision
The work isn’t blocked by tech. It’s blocked by doubt.
Most AI programs don’t stall because of the tech.
They stall because people are afraid to move.
That is why, in 2026, I am making Guidance, not Governance, the main AI priority at Furniture Bank.
Not more committees and policies.
More Guides.
The 3 a.m. notebook that exposed the real problem
This started with a Slack from our Sales Development Lead (Sam).
She had been up half the night with a notebook, sketching ideas for AI agents to handle:
Furniture pickups and drop-offs
Tax receipting
Client questions and languages
Automated follow-up on quotes, like abandoned cart emails
Her questions were specific:
Could an AI agent change fields in Salesforce?
Would Google Calendar be better than our current Google Sheets for booking capacity?
Could it safely handle credit cards for partner programs?
Could it regenerate tax receipts in the right language?
“When we send a quote, why can’t an automation follow up the next day?
Why does the team have to do all of that manually?
Why can’t we offer a small discount automatically when someone stalls?”
This is not a theory.
This is a senior leader trying to redesign work at 3 a.m., alone with a notebook.
That is the real AI demand inside an organization.
Governance was not what she needed
Sam was not asking, “Am I allowed to use AI?”
She was asking, “How do I make this system less painful without breaking it?”
No policy solves that.
Governance is built for stopping bad things:
Usage rules, approvals, contracts, and sign-offs.
We need that.
But it does not turn a late-night idea into a safe experiment.
It does not give a VP the confidence to own a change, instead of waiting for the CEO or a vendor.
At that moment, she did not need a Governor.
She needed a Guide.
What a Guide actually did
Enter Tim from The Human Stack.
He did not sell her a platform.
He started with questions:
How much are you paying now?
When does the contract end?
How often are call dispositions missing?
Who else are you talking to besides Zoom?
Who will make the final call?
Then he gave her a simple tool: a decision matrix.
Options in the columns.
Real needs in the rows: price, AI features, contract terms, reporting, Salesforce integration, languages, and so on.
He told her to use AI to draft the first version from their transcript, then clean it up and circulate it.
That matrix became a clear request:
Something our Center of Excellence, leadership team, and vendors could react to.
Something that plugged into the roadmap instead of living in a notebook.
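For teams who want something concrete to copy, a matrix like Sam's can be sketched as a small weighted-scoring table. Everything below is a hypothetical illustration: the criteria names come from this post, but the options, weights, and scores are invented placeholders, not Furniture Bank's actual evaluation.

```python
# Hypothetical weighted decision matrix, sketched in plain Python.
# Criterion -> weight (importance, 1-5). Weights are illustrative only.
CRITERIA = {
    "price": 5,
    "ai_features": 4,
    "contract_terms": 3,
    "reporting": 3,
    "salesforce_integration": 5,
    "languages": 2,
}

# Each option scored 1-5 per criterion (invented numbers, not real vendors).
OPTIONS = {
    "Current vendor": {"price": 2, "ai_features": 2, "contract_terms": 4,
                       "reporting": 3, "salesforce_integration": 3, "languages": 3},
    "Vendor B": {"price": 4, "ai_features": 4, "contract_terms": 3,
                 "reporting": 4, "salesforce_integration": 5, "languages": 4},
}

def weighted_score(scores: dict) -> int:
    """Sum of (score x weight) across all criteria."""
    return sum(scores[c] * w for c, w in CRITERIA.items())

# Rank options from best to worst total score.
ranked = sorted(OPTIONS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores)}")
```

The point is not the arithmetic. It is that once needs become named rows with weights, the conversation shifts from "I have a bunch of ideas" to a request a leadership team can actually say yes or no to.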
Same tools.
Different behavior.
Guidance turned “I have a bunch of ideas and I’m not sure I should bother Dan” into
“Here is a structured recommendation. Here is what I need from you.”
The hidden emotion: fear and shame
In the Slack thread, Tim asked me:
Why do you think your team has fear and shame coming to you?
My answer was simple: "I am 3.5 years into AI. They are just starting. They are unsure how to proceed."
If you are a CEO who is very deep into AI, staff will often assume:
You already know the right answer.
Their questions are basic.
Their friction is a personal failing.
So they hide the truth:
How many tabs they click.
How many times they copy and paste.
How much Salesforce slows them down.
On the surface, this looks like a technology gap.
Underneath, it is a safety gap.
Guidance is how you close that gap on purpose: by inviting messy reality, turning it into clear requests, and backing people who bring you their friction.
Sam stepping forward like she did is exactly the behavior I want from everyone at Furniture Bank.
Guidance vs Governance, in practice
Zoom out, and that one Slack thread shows the difference we are trying to instill across our staff, intentionally and in a coordinated way. On the ground, in our halls and offices, it looks like this:
Governance asks:
What are the rules?
Who approves this?
How do we avoid risk?
Guidance asks:
What problem are you really trying to solve?
Who owns this decision?
What do they need to see to say yes?
How do we turn this into a safe, small experiment?
In one day (last Friday), the Guidance model produced:
A clearer definition of the real problems for Furniture Bank (not just “AI agents”).
A decision matrix that any stakeholder can understand.
A path to connect Sam’s ideas to our Digital AI roadmap, not just her inbox.
Our first live example that other teams can copy.
No new policy.
No new platform.
Just better questions and a better-shaped request.
Our 2026 decision: Guides first
Here is what this means for us.
In 2026 at Furniture Bank:
Guides come first. We are investing in people who can turn friction into decision-ready requests. AI Cred and training will point at them first.
Requests are assets. A well-framed request about “annoying, double-clicky, tab-hopping” work is now a valuable artifact, not a complaint.
Mini product managers, everywhere. My goal is to have dozens of Sams who can define scope, log work, weigh cost versus impact, and recommend a path that does not require me in the room.
Governance wraps what Guidance proves. Policies, contracts, and risk controls will wrap around real, working patterns, not abstract fears.
Most AI roadmaps start with Governance and hope Guidance shows up later.
We are doing the reverse.
We are building the Guides and rhythms first, then putting the right governance around what actually works.
One question for other CEOs
If you lead a team wrestling with AI, ask yourself:
Do your people have a safe way to bring you their half-formed AI ideas and real system pain, or are they working on it alone at 3 a.m. with a notebook?
If it is the second, you do not have a technology problem.
You have a Guidance problem.
In 2026, we are betting that the fastest way to move AI is simple:
Support the Guides.
Then let them pull the tech, not the other way around.