Your People Are Using AI Without You. Now What?

A VP of Engineering talked to me a few weeks ago, half-laughing, half-serious. Legal had run a quiet audit of customer support transcripts and found that one of their senior agents had been pasting full ticket threads, including customer names, into ChatGPT to draft responses. Not maliciously. The agent had been told to improve response time, and ChatGPT cut her drafting time by sixty percent. Nobody told her she could not. Nobody told her she could either.

This is the AI governance conversation most companies have not had yet. And the longer they put it off, the more interesting the audit gets.

The core idea
AI governance is not about banning tools. It is about giving people clear permission with clear guardrails so they can move fast without putting the company at risk. Without that, you do not have safety. You have shadow AI.

The honest state of AI use at your company

Ethan Mollick wrote about this pattern in Co-Intelligence and the data has only sharpened since. Companies that ban AI tools typically discover that their people are using them anyway, just from personal devices and personal accounts. Mollick lands the point hard: shadow IT use is common, but keeping it unofficial incentivizes workers to stay quiet about both their innovations and their productivity gains.

That last part is what nobody wants to talk about. When you ban or ignore AI, your top performers do not stop using it. They just stop telling you. Which means you lose visibility into both the risks they are creating and the productivity wins they are generating. The gap between your stated policy and what is actually happening on Slack and in shared docs at 9 PM on a Tuesday is often enormous.

You probably already have AI in your company. The question is whether it is governed or whether it is hiding. (If you have not yet read our piece on why teams resist AI adoption, the dynamics there and the dynamics here are tightly linked.)

What governance actually means (and what it does not)

Governance is not a 47-page policy document. It is a small set of clear answers to the questions your people are quietly asking themselves:

  • Which tools am I allowed to use?
  • What data am I allowed to put into them?
  • Who is responsible when something goes wrong?
  • Where do I go for help when I am unsure?
  • How will my use be evaluated at performance review time?

Brotman and Sack, in AI First, make a great point. They argue that an AI use policy is often seen as inherently restrictive, but in practice a comprehensive one gives the organization more freedom to run pilots, experiment, and innovate without running afoul of sound security, privacy, and ethical considerations.

Good governance creates speed. It is the absence of governance that slows you down, because every decision becomes a one-off debate.

Leadership cue
If your AI policy reads like a list of "thou shalt nots" with no positive guidance about what good looks like, you have not written a policy. You have written a discouragement memo. People will route around it.

A practical playbook for the next 30 days

You do not need a Fortune 50 governance program to get going. You need five specific things, sequenced in this order.

1. A tiered data classification

Three buckets are usually enough:

  • Green: data that anyone can put into any approved tool (general industry information, public documents, internal training material that is not confidential).
  • Yellow: data that can go into approved tools with audit trails (internal work product, draft strategy, code that is not customer-facing).
  • Red: data that never leaves approved internal-only systems (customer PII, financial records, source code with credentials, regulated health or banking data).

Janna Lipenkova, in The Art of AI Product Development, outlines a similar approach that uses GDPR, CCPA, HIPAA, and the EU AI Act as the regulatory floor. The point is that your people need to know what counts as red versus yellow without having to call legal. Make it short. Make it laminated if you have to. Make it easy to remember in the moment someone has a draft ready to send.
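
If someone on your platform or security team wants to encode the tiers so tools can check them automatically, a minimal sketch might look like this. The tier names mirror the buckets above; the example categories and the strictest_tier helper are illustrative assumptions, not a prescribed schema.

```python
# Illustrative tier map; the categories mirror the bullets above. A real
# classifier would lean on your data catalog or DLP tooling, not a keyword list.
DATA_TIERS = {
    "green":  ["public documents", "industry research", "non-confidential training material"],
    "yellow": ["internal work product", "draft strategy", "non-customer-facing code"],
    "red":    ["customer PII", "financial records", "credentials", "regulated health or banking data"],
}

def strictest_tier(labels: list[str]) -> str:
    """Return the most restrictive tier that applies to any label on a piece of content."""
    for tier in ("red", "yellow"):
        if any(label in DATA_TIERS[tier] for label in labels):
            return tier
    return "green"

# e.g. strictest_tier(["draft strategy", "customer PII"]) -> "red"
```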

2. An approved tools list

Pick three to five tools your company actively supports and stands behind. Note the data tier each one is approved for. ChatGPT Enterprise, Claude for Enterprise, and Microsoft Copilot are obvious starting points; all can be configured with data residency and retention controls that consumer accounts lack. The list is allowed to evolve. What is not allowed is silence about which tools have been blessed and which have not.
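
If you want the list to be machine-checkable rather than a memo, it can pair with the tier sketch above. The approvals below are placeholders your council would set, not a recommendation about what any particular vendor should be trusted with.

```python
# Tier order from least to most sensitive; mirrors the green/yellow/red model above.
TIER_ORDER = ["green", "yellow", "red"]

# Placeholder approvals; your AI council owns the real mapping and reviews it quarterly.
APPROVED_TOOLS = {
    "ChatGPT Enterprise": "yellow",
    "Claude for Enterprise": "yellow",
    "Microsoft Copilot": "yellow",
}

def is_allowed(tool: str, data_tier: str) -> bool:
    """A tool may handle data at or below the tier it is approved for."""
    approved = APPROVED_TOOLS.get(tool)
    if approved is None:
        return False  # unlisted tools are not blessed for any company data
    return TIER_ORDER.index(data_tier) <= TIER_ORDER.index(approved)

# e.g. is_allowed("Microsoft Copilot", "red") -> False
```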

3. A small, cross-functional AI council

Moderna's GenAI Champions Team is a useful model. Brotman and Sack describe how the team emerged from the company's prompt contest, which included more than a hundred proficient AI users, and not only supported AI use but also helped set guidelines and best practices for the wider organization. You do not need a hundred. You need six to ten people from product, engineering, legal, security, marketing, and operations who meet every two weeks to review what is changing and update the policy as needed. Treat them as a standing team, not a steering committee.

4. A no-blame incident channel

When something goes wrong, and it will, you need a channel where people can say so quickly without worrying about getting fired. Daugherty and Wilson in Human + Machine use Microsoft's Tay chatbot as the textbook case for why guardrails and a fast response path matter. Your version is less dramatic but just as important. A Slack channel called #ai-incidents that the AI council monitors is a fine start. The first time an executive surfaces their own near-miss in that channel, you will know the culture has caught up to the policy.

5. A quarterly review

Governance only works if it gets reviewed. Once a quarter, the council asks four questions: What is on the approved tools list that should not be? What is missing? Where did people work around the policy and why? What new regulation is coming that we need to plan for? Adjust and republish. AI will keep moving faster than your policy. Build in the cadence to keep up.

Common traps

Pretending the policy is a one-time event. It is not. AI changes faster than any static policy can keep up with. Build in the review cadence from day one or it will quietly go stale.

Letting legal write it without product or engineering. A pure-legal policy will be technically correct and operationally useless. Cross-functional drafting is non-negotiable. The lawyers protect you. The practitioners make it implementable.

Banning what you do not understand. Mollick's research is clear: bans push usage underground without reducing it. If you do not know how a tool works, find your most advanced internal user and learn from them before you write rules about it. (This is also why most AI projects stall after the pilot; the people writing the policy are not close enough to the work.)

Treating governance as separate from culture. If your people do not feel safe surfacing AI mistakes, no amount of policy text will save you. Psychological safety is the substrate that makes governance actually function. The same forces are at play with the hidden cost of AI-generated code; quiet adoption produces quiet risk.

Try this next week

Pick one question and answer it in writing for your team next week: "Which AI tools are approved for which kinds of data here, and where do you go when you are not sure?" If you cannot answer that in a paragraph that fits on one screen, you have your governance project. Start there.

Then pick one person from product, one from engineering, one from legal, one from security, and ask them to spend two hours together drafting that paragraph. Two hours, not two weeks. The point is not perfection. The point is removing the silence.

 

If your organization is wrestling with where to begin, Big Agile's AI for Product Management workshop walks product and delivery leaders through this exact framework with worked examples. You can also join one of our public courses or reach out directly if you want help building your council and your tiered data framework with your specific industry constraints in mind.

 
Read Next
Why Your Team Is Resisting AI (And What to Do About It)
Governance gives people permission with guardrails. This companion piece covers what to do when the resistance is not about the rules but about three specific fears nobody is voicing out loud.