AI Didn’t Replace Her. It Gave Her a Christmas Pac-Man Game.
What I learned about trust, tools, and team morale
Your AI policy can be solid, even a perfect piece of governance.
But a nervous staff member can still freeze.
That is the gap most leaders underestimate.
This week at Furniture Bank, a staff member asked for a meeting. I’ll call her Jeda.
She did not want a demo.
She wanted to know if she had a future.
Before we talked tools, I asked one question that I think every CEO should learn to ask: “What are you most worried AI will do to you?”
Jeda did not hesitate.
“When I hear ‘efficiency,’ I hear ‘reduction.’
If this tool can write emails, does the organization still need me?
And will I be controlled by robots in 15 years?”
That is not a technical question. That is a trust question.
Here is the bottom line, from where I sit:
Governance reduces organizational risk.
Guidance reduces human fear.
You need both. But you do not get adoption without guidance.
Most nonprofits are building policy binders while their teams are quietly building stories in their heads. Stories about layoffs, replacement, surveillance, and losing the human part of the work.
Guidance is how you interrupt those stories with clarity, one team member at a time. That has been my experience.
The moment you stop managing tools and start leading people
I thanked Jeda for saying it out loud.
Then I gave her the most honest answer I could.
FIRST: I cannot predict the next 15 years. But I can tell you what is true here.
We are not using AI to cut headcount. Nonprofits like Furniture Bank have NEVER had enough people. We have too much demand and not enough capacity. My goal is to use AI to remove work that wastes your time so you can do more work that only you can do.
Jeda said, “That distinction helps.”
It should. It is the distinction most leaders avoid saying directly.
SECOND: yes, the world will see disruption. Many jobs will disappear and more will be created, especially where companies are motivated to maximize profit by cutting staff.
But our reality is different. If we use AI well, it should feel less like replacement and more like relief: augmenting our team's talents, wisdom, and experience.
The first guidance move: separate jobs from time-wasting work
I gave Jeda a frame she could carry back into her week.
I asked her to audit her work with three questions:
What is one time-wasting task that takes forever and adds little value?
What is one energy-draining task that burns you out?
If you had an extra hour each week, what would you do that would matter most?
Then I told her a line I meant every word of.
“My KPI as CEO is to hunt the first two categories down.”
Not because your work does not matter. Because some of the work should not be asking for your attention in the first place.
Jeda said, “I can do that.”
Good. Now we had a next step that was not abstract.
The second guidance move: learn AI as a loop, not a lecture
Then I did the thing most leaders skip. We used the tool, live.
I said: let’s reduce the fear by doing one small thing together. Not a demo you watch, a loop you drive.
She shared her screen and opened Gemini. She told me she had opened it before, but had not really used it.
Perfect.
I asked her to prompt it: “Help me build a retro Pac-Man game with a festive theme.”
She watched it generate code and said, “Whoa.”
Then she ran it.
“It’s kind of broken.”
Also perfect.
Because this is the real skill. Not prompting. Correcting.
I asked her to tell it what was wrong. “The controls aren’t working. Use arrow keys.”
She did. It changed the code. The game improved.
And I said the sentence I wish more leaders would put on posters:
AI literacy is not a lecture. It is a loop: prompt, test, correct, repeat.
When you learn it this way, fear drops. Not because risk disappears, but because you are in the driver’s seat.
Reader's note: Jeda kept updating me on Slack over the next two hours as she wrangled and grappled with the code, pushing it toward her vision of a Christmas-themed Pac-Man game.
The third guidance move: make mistakes feel manageable
Jeda was still worried about hallucinations and errors.
So I gave her a rule that makes AI feel less like magic and more like management.
Treat AI like a summer intern named Kyle.
Kyle knows nothing on day one.
Kyle messes up.
You correct Kyle.
Over time, Kyle becomes useful.
But you are still accountable for what ships.
Jeda said, “That makes it feel more manageable.”
That is the point. Fear becomes workable when people know their role.
They are not being controlled by a tool.
They are stewarding a draft.
What I am committing to as CEO
I ended the meeting the way leaders should end these conversations: with commitments, not vibes.
Clear boundaries on privacy, safety, and what we do not do with AI.
Training that respects different learning styles: video, reading, hands-on.
Small experiments, not overwhelm.
A culture where it is safe to be new.
Jeda said, “I feel excited and a little terrified, because it’s unknown.”
That is the honest mix.
If your staff only feel excitement, they are probably not telling you the truth.
If they only feel terror, you have probably left them alone with their stories.
The hour was the work
The most important output of that conversation was not a Pac-Man game.
It was a shift in posture.
From: “I am nervous and I might get this wrong.”
To: “I can try one small thing safely, and someone will walk with me.”
That is adoption in real organizations.
Not memos.
Not all-hands.
Moments where leadership shows up, sets boundaries, and makes learning feel safe.
So here is the question I am holding myself to as CEO:
Am I spending more time writing policy, or more time making it safe for one person to start?
Because the tool will not earn trust.
Leadership will.
Sidebar: A one-hour guidance template you can steal
If you lead a team, run this meeting with one nervous person next week.
Start with the fear: “What are you most worried AI will do to you?”
Tell the truth about your reality: what you will use AI for, and what you will not.
Do a three-question job audit: time-wasting, energy-draining, one-hour wish.
Do one live loop: prompt, test, correct.
End with commitments and a next check-in.
Policy matters.
But guidance is what gets a human being moving.