A pattern keeps showing up in conversations with business owners. The people who say their team is using AI well almost never know what their team is actually doing with it. The ones who say their team is rubbish at AI usually have one or two people quietly producing genuinely good work, often without telling anyone.
Most leaders are making AI decisions based on assumptions, not evidence.
The visibility problem
AI adoption in most businesses is invisible by default. Unlike a new CRM or a project management tool, there is no dashboard that shows who is using AI, how often, or on what. People experiment quietly. They do not announce it in meetings because they are not sure whether it is allowed, encouraged or expected.
The result: a leadership team making decisions about AI investment based on what they assume is happening, not what is actually happening.
What you will find when you look
The team member you assumed was leading the way is often using ChatGPT for one task and nothing else. The quiet senior person you wrote off as a sceptic is often building elaborate workflows and not telling anyone. The middle bit of the org chart, where the actual work happens, is mostly winging it.
None of this is a problem on its own. It becomes a problem when you start making decisions based on what you assume is happening. Buying licences for tools nobody has been trained on. Paying for an enterprise tier when most users are still in the free version. Investing in advanced training for people still learning the basics.
The five-question audit
The single most useful hour you can spend on AI in your business this month is asking five people what they actually use it for. Not in a meeting. One at a time. Quietly.
Here are the questions that surface the real picture:
- Which AI tools do you use at work? (Not which ones you have access to. Which ones do you actually open?)
- What do you use them for? (Get specific. "Emails" is not enough. "Drafting follow-up emails after sales calls" is.)
- How often? (Daily, weekly, or tried it once?)
- What works well? (Where has it actually saved time or improved quality?)
- What have you tried that did not work? (This is where the training gaps live.)
What to do with the answers
Three categories will emerge:
- Working well, not shared. The person using AI effectively but quietly. Get them to show one other person what they do. That is your cheapest training programme.
- Tried and abandoned. The person who tried Copilot, got a bad summary, and stopped. That is a training problem, not a tool problem.
- Not started. The person who has access but has never opened it. That is an onboarding problem.
Each category has a different fix. None of them is "buy more tools".
What people say they do with AI and what they actually do are rarely the same thing. Go and look.