Your Team Has AI Tools. They Don't Have AI Skills.

A Scrum Master I coach sent me a screenshot last week from a sprint retrospective. Four of her six team members had been using ChatGPT all sprint for stand-up summaries, acceptance criteria, and backlog reformatting. The other two had not touched it once. Same team, same tools, same Copilot licenses sitting unused on six laptops.

When she dug in, the picture got more interesting. Of the four using AI, exactly one was reviewing outputs critically. The other three were pasting whatever came back. Of the two not using it, one thought it was unethical and one had tried it twice, gotten bad answers, and given up. Six people, four distinct relationships with the same tool, zero hours of training.

This is the AI literacy gap, and it is the most underdiscussed problem in business AI right now. We have spent eighteen months buying licenses and almost no time teaching anyone how to use them.

The core idea
AI literacy is not one skill. It is three: knowing what to ask, knowing what to trust, and knowing what to redesign. Most teams have a tool problem only on the surface. Underneath, they have a skill problem, and it will not solve itself.

Why "we gave them access" is not a strategy

Most companies' AI rollout reads like this: pick a tool, buy licenses, send the announcement, maybe schedule a lunch-and-learn that twelve people attend and four remember.

Ethan Mollick's research on this is unflinching. When organizations skip the literacy investment, they get one of two outcomes. Either people do not adopt the tool, which looks like an AI failure but is actually a training failure. Or they adopt it without judgment, which looks like an AI win until something blows up downstream. Paul Daugherty and H. James Wilson called this the "missing middle" in Human + Machine: a new set of roles where humans and AI collaborate, and almost nobody is being trained for that middle. (We saw a version of this earlier this spring in the AI productivity paradox; experienced developers using AI in repos they already knew came out 19% slower, not faster.)

The three distinct skills (and they are not interchangeable)

1. Knowing what to ask. The prompting layer. Context, examples, constraints, structure. Someone with this skill writes a prompt that includes the audience, the format, the constraints, and one example of the desired output. Someone without it writes "give me ideas for the offsite." The first gets useful drafts. The second gets corporate clichés. (A worked version of the difference follows this list.)

2. Knowing what to trust. The evaluation layer. AI outputs sound confident even when they are wrong. Spotting hallucinations, checking sources, noticing when a number is suspiciously round, verifying before you paste: together, that is its own skill. It overlaps with critical thinking but is not identical, because evaluating AI requires understanding how AI fails, not just whether something feels off.

3. Knowing what to redesign. The workflow layer. Most teams bolt AI onto the same process they had before. The skill is recognizing where AI changes what the process should be. If AI can draft acceptance criteria in 30 seconds, your sprint planning probably should not still spend 45 minutes on it. That is a redesign decision, and it takes a different kind of thinking than prompting.
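To make the first skill concrete, here is an illustrative version of the stronger offsite prompt. Every detail in it is invented for the example: "Draft an agenda for a one-day offsite for a 12-person product team coming off a rough release. Audience: the team plus their VP, who joins after lunch. Format: a table with time blocks, session titles, and the goal of each session. Constraints: no icebreakers, at least half the day on next quarter's roadmap, done by 4pm. Here is last year's agenda, which landed well: [paste]." Audience, format, constraints, example. The draft that comes back is something you can edit, not something you have to replace.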

Most "AI training" programs cover the first skill and call it done. The other two are where the actual business value lives.

A five-step playbook to close the gap

1. Audit the actual current state, not the assumed one.
Before you spend a dollar on training, run a simple survey: "In the last two weeks, what task did you use AI for? Paste a real example of a prompt you used." Most leaders are shocked at the answers. A third of the team has never touched it. A third uses it for things that probably should not leave the laptop (client data, board materials). A third is doing impressive work and not telling anyone. You cannot train a team you have not looked at. (This pairs with the shadow AI question that almost every leader I work with is trying to dodge.)

2. Start with role-based fluency, not generic training.
A Scrum Master, a product owner, and a senior engineer should learn AI differently because their work looks different. A Scrum Master is learning how to draft retro questions, summarize team health signals, and translate ceremony outputs across stakeholders. A product owner is learning how to rewrite vague user stories, synthesize a stack of customer interview notes, and pressure-test their own roadmap assumptions. A senior engineer is learning how to review AI-generated code for the subtle errors AI is known to produce. One curriculum for everybody is a curriculum for nobody.

3. Run a weekly "show your work" ritual.
This is the single highest-leverage thing on the list, and almost nobody does it. Once a week, for thirty minutes, one person walks the team through a real AI workflow they used. The prompt, the output, what they kept, what they changed, and why. Ethan Mollick calls the two effective modes for working with AI the centaur (clear handoff between human and machine) and the cyborg (deeply interleaved). Both are learnable by watching them in action and nearly impossible to pick up by reading about them. If you do nothing else from this list, do this one.

4. Teach evaluation explicitly, not implicitly.
Pick five recent AI outputs your team has produced. Print them. In a one-hour working session, grade them: what is accurate, what is fabricated, what is technically correct but contextually wrong, what would you change before sending. This is the muscle most teams are missing, and it cannot be built by sending a Slack link to a Coursera course. (We dug into a version of this for engineering teams in the hidden cost of AI-generated code; the same gap applies to writing, research, and analysis.)

5. Redesign one workflow this quarter, not five.
Pick a workflow where AI could change the shape of the work, not just its speed. Common candidates: backlog refinement, customer interview synthesis, release notes, post-incident write-ups, onboarding documentation. Map the current workflow, identify where AI fits, redesign the steps, and run it for four weeks. Then debrief honestly. One thoughtful redesign teaches the team more than ten optimistic ones.
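To make the shape of a redesign concrete, here is a hypothetical before-and-after for the first candidate, backlog refinement. Before: the product owner writes stories from scratch and the whole team spends a 60-minute meeting wordsmithing them line by line. After: AI drafts the stories and acceptance criteria from the product owner's raw notes, the product owner corrects them asynchronously, and the meeting shrinks to 20 minutes spent only on the items someone has flagged as ambiguous or risky. The numbers are invented; the change in shape, from group drafting to group judgment, is the point.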

Leadership cue
If you have not personally done a "show your work" session with a real prompt in front of your team this quarter, your AI literacy program is theoretical. People copy what their leaders demonstrate, not what their leaders mandate.

Common traps

Confusing tool adoption with literacy. Your usage dashboard says 84% of the team used Copilot last week. That tells you the licenses are not being wasted. It tells you nothing about whether the outputs are good, whether anyone is checking them, or whether the work is better. Adoption is the easiest metric to move and the least useful one to brag about. Measure behavior change four weeks after training, not training hours.

Generic AI training nobody applies. The 90-minute introduction to ChatGPT is the corporate equivalent of giving someone a piano lesson and expecting a concerto. People need to practice on the actual work they do, not toy examples about planning a birthday party. If the training does not include their real prompts and their real outputs, it will not stick.

Banning instead of teaching. Mollick has been very public that bans push usage underground, not out of existence. Top performers do not stop using AI when you ban it. They stop telling you. You lose visibility into both the risks and the productivity wins. (We made this case in detail in why your team is resisting AI, and the same dynamic shows up here.)

Letting executives skip the practice. If your leadership team has not personally written a prompt, reviewed an output, and corrected it in the last month, they will set unrealistic expectations for what AI can do. You cannot govern a technology you have not used.

Try this next week

Pick one person on your team who is using AI well. Block sixty minutes. Have them walk the rest of the team through three real prompts they used in the last week. The prompt. The raw output. What they kept. What they changed. Have everyone else ask questions.

That single hour will teach your team more practical AI literacy than any vendor training you could buy, for the same reason apprenticeship has always worked: people learn skilled judgment by watching it in context, on real work.

If you want a faster path to that fluency for product roles specifically, the Big Agile AI for Product Owners micro-credential walks teams through this same set of skills with real product artifacts. Or come find me at one of the upcoming public courses. If you want a quick read on where your team falls on the literacy spectrum, drop me a note.

 
Read Next
The AI Productivity Paradox: Why Your Team Might Be Getting Slower With AI
The counterintuitive evidence on what happens when experienced people use AI tools without literacy, and what to do about it.