The WHAT and the HOW of an AI-Ready Culture
What I learned putting Tim Lockie’s Human Stack next to The AI Daily Brief
I was walking between our warehouse and the showroom when I heard it.
On The AI Daily Brief, Nathaniel Whittemore and his head of research, Nuphar, were reading transcripts from thousands of AI interviews inside companies. Not product demos or benchmark charts. Just employees talking honestly to a voice agent about how work really works.
Fear about jobs. Pride in home-grown tools. Engineers ready to “die on this hill” rather than let outside vendors in. Champions experimenting after hours on personal accounts because the tools at home are better than the ones at work.
In many ways, I could have been listening to my own staff.
At Furniture Bank and through The Human Stack, I spend a lot of time in those conversations: with people who want AI to help, but who are trapped between culture, governance, and a lack of time or permission to learn.
What clicked for me on that walk was simple:
Nathaniel’s world is naming WHAT an AI-ready culture needs.
Tim Lockie’s Human Stack work is HOW you actually build that culture inside a team.
This piece is that bridge.
The AI Daily Brief: the “what” of AI readiness
Listen to the original podcast on Spotify
Superintelligent’s “agent readiness” work has a clear conclusion: nobody is fully ready. The models and tools are improving. Culture is still the bottleneck.
Across thousands of interviews, six levers show up over and over:
Communication: a real AI manifesto from leadership. What we believe, what we expect, what is allowed, and what this means for jobs.
Human oversight: clear guardrails for where agents can act, how we measure success, and who is accountable.
Attitude: staff are excited to delete grunt work and scared about what happens to their role. Both are real.
Network: not just a CIO memo, but champions and builders who help peers and can actually ship things.
Governance: neither “wild west” nor “16-person committee that kills everything”. Enough speed to learn, enough safety to sleep.
Enablement: time, permission, and training beyond “prompt tips,” into agent management and workflow change.
Add in the tool quality gap (home tools often better than work tools) and the ROI gap (individual gains not showing up at the org level) and you get a familiar result:
AI works. Culture is stuck. ROI is missing.
The Human Stack: the “how” inside a real week
I have been working with Tim Lockie since late 2023, first on my own training, and now on bringing his guidance operating system into our AI and digital work at Furniture Bank.
His starting point: culture predicts nonprofit tech success far more than tool choice. You can swap CRMs and get nothing, or change the culture around the same CRM and it feels like a different system.
He uses two models.
Belonging → Beliefs → Behaviours.
Belonging is the base code. When people feel they belong, they form beliefs like psychological safety and trust. Those beliefs unlock behaviours like speaking up, experimenting, and taking ownership. Skip belonging and you get compliance at best and sabotage at worst.
Accountability = Attention + Power.
Leaders set attention through metrics and conversations. Then they delegate real power to act on those priorities. When attention and power line up, accountability works. When they do not, you get chaos or learned helplessness.
Digital Guidance is how this shows up on a calendar:
Roles: a strategic Leader who focuses attention; a Guide who turns that attention into weekly execution with a small guidance team.
Rhythm: one weekly tactical meeting to triage requests; one monthly Center of Excellence to step back, review results, and reset priorities.
Roadmap: a shared queue of improvements, experiments, and trainings tied to real business goals.
Results: a short list of human and tech health scores, not just project plans.
It is intentionally boring so it can be repeated.
Mapping the “what” to the “how” at Furniture Bank in 2026
Listening to that AI Daily Brief episode, I started mapping each cultural lever to the Human Stack moves we want to build out at Furniture Bank in 2026.
Here is what we expect that to look like if we do the work.
1. Communication → AI manifesto and agenda
In 2026, we want nobody at Furniture Bank guessing how we think about AI.
Our plan:
Publish a one-page AI manifesto from the CEO: what AI is for here, what it is not for, and how we will treat jobs and time saved.
Bake that message into our Center of Excellence and team meetings until staff can explain it in their own words.
If we get this right, “what is allowed” will be common language, not hallway or warehouse speculation.
2. Human oversight → learning zone guardrails
Agents will touch more of our work in 2026. We do not want that to feel like a silent takeover.
Our plan:
Define clear categories for where we are aiming for autonomy and where human review is non-negotiable.
Decide, in advance, what we expect people to do with time freed up by AI.
Use the weekly guidance meeting as the place where AI work is reviewed together, mistakes are mined for learning, and changes go back into the roadmap.
If we get this right, staff will know where AI fits and will treat errors as input, not evidence that “this will get me fired.”
3. Attitude → belonging and resistance as signal
In 2026, we are going to treat resistance to AI as a design signal, not a character flaw.
Our plan:
Name openly that some of our strongest people are also our strongest skeptics.
Give those people formal roles as co-designers or builders on key workflows.
Make sure our AI Champions network includes skeptics, not only early adopters.
If we get this right, pushback will show up in working sessions, not in quiet workarounds and shadow tools.
4. Network → champions and guides, not AI tourists
We do not want “AI tourists” who try a tool once and never come back. We want a real network of active collaborators on our digital stack.
Our plan:
Nominate and train champions across our main teams, with time and permission to support peers.
Give one or more Guides ownership of the weekly rhythm and the escalation path for bigger builds.
Connect Leg Up participants into this network so social employment staff are learning AI on real work, not fake exercises.
If we get this right, AI will spread through peer help and guidance, not only from the CEO or a single tech person.
5. Governance → from Governor to Guide
Our governance in 2026 needs to protect our community and also let us learn fast.
Our plan:
Keep clear rules on sensitive data and vendor selection.
Create safe sandboxes where staff can try better tools than the ones we bought last year, inside agreed guardrails.
Shift our AI steering conversations to include questions like “what did we learn this month” and “what changed for clients or staff,” not only “what did we block.”
If we get this right, governance will feel like guidance: firm on risk, helpful on how we move forward.
6. Enablement → from prompts to agent management
By the end of 2026, I do not want “we ran a prompt workshop once” to be our story.
Our plan:
Fold AI into Leg Up and staff development as a basic literacy: how to frame work for agents, not just how to write prompts.
Use the Digital Guidance rhythm to coach Guides and champions on choosing workflows, designing small automations, and measuring impact at the system level.
Protect calendar time for experimentation so “too busy” is not the only story.
If we get this right, people at Furniture Bank will not just use AI tools. They will manage AI agents as part of their job, with clarity, safety, and support. In turn, that will let us scale our impact for our community and for the families who rely on us to be ready when they call on us.