A guide-tier walkthrough of writing an AI acceptable use policy that survives contact with reality. Includes the full template, the four sections that matter, the rollout playbook, and the EU AI Act Article 4 connection most teams miss.
The conversation about AI policies changed in 2025. The LayerX 2025 corporate AI usage study found that 77% of employees who use generative AI tools at work paste sensitive company data through personal accounts. ISACA's 2025 survey put the share of organisations with a formal AI policy at around 28%. And on 2 February 2025, Article 4 of the EU AI Act came into force, requiring every provider and deployer to ensure "a sufficient level of AI literacy" for staff and anyone else operating AI systems on their behalf. The supervision and enforcement of that obligation begins on 2 August 2026.
Three things converged. The use is real, the policies do not exist, and the regulation is now live. Closing the gap is no longer optional.
This is a guide-tier walkthrough for writing the actual document. It assumes you have some AI tools in production, no formal policy yet, and a small enough team that legal review is a meeting rather than a department.
Most published AI policies fail in the same way. They get drafted by a compliance function that does not use the tools, signed by employees during onboarding, and then forgotten. The fifteen-page version is worse than no policy at all: it transfers paper liability without changing behaviour, and it gives the team a defensible reason to claim they "followed the guidelines" when the next incident happens.
The one-page version works for a different reason. People read it. Team leads can quote it from memory. New hires can absorb it in five minutes. When a support agent is about to paste a customer email into ChatGPT to draft a reply, the one-page version lives in their head and the fifteen-page version does not.
I think the strongest AUP is the one-page version that actually gets read, and the best evidence for that is what happens during incidents. When you walk through a real prompt-paste incident with the team afterwards, the question is always whether the person remembered the rule, not whether the policy theoretically prohibited the behaviour. Memory is the only enforcement mechanism that works in real time.
The corollary: every section of your AUP should pay rent. If you cannot articulate what specific behaviour the section prevents or enables, it does not belong in version one. You can always add later.
Writing rules in a vacuum produces rules that miss the real risks. Before drafting, run a discovery pass that takes between thirty minutes and a long afternoon, depending on team size.
The cheapest discovery source is the network. Check proxy or DNS logs for hits to the common AI domains: chat.openai.com, claude.ai, gemini.google.com, chatgpt.com, perplexity.ai, copilot.microsoft.com, and the API hosts. Pair the log check with a short, blame-free survey of each team.
The output is a short list: tools in use, rough volume, account types, data categories, and unmet wants. That list is the input to every section that follows. If you have already done a shadow AI discovery, reuse it. If not, the shadow AI guide walks through the deeper version of this audit.
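If your proxy or DNS logs are exportable, the first pass is a few lines of script. A minimal sketch, assuming a plain-text export with one requested hostname per line; the filename and the two API-host entries are placeholder assumptions to adapt:

```python
from collections import Counter

# Known AI hostnames to look for. The two api.* entries are
# illustrative; add whatever API hosts your team actually uses.
AI_HOSTS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "copilot.microsoft.com",
    "api.openai.com", "api.anthropic.com",
}

def scan(log_path: str) -> Counter:
    """Count hits to known AI hosts, including their subdomains."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            host = line.strip().lower()
            if any(host == h or host.endswith("." + h) for h in AI_HOSTS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in scan("dns-export.log").most_common():
        print(f"{count:>6}  {host}")
```

The counts give rough volume; account types, data categories, and unmet wants still need the survey.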
The single most useful question to ask is the one most surveys skip: what do you want to use that we have not approved? The answers identify the policies that need to enable, not the ones that need to forbid.
A working AUP has four load-bearing sections. Everything else is supporting context that can be cut.
Name the tools explicitly. "Use only approved AI tools" is not a policy. Specific tool names with their tier, what the team can use them for, and the contract status are the actual rule.
| Tool | Tier | Approved for | DPA / contract status |
|---|---|---|---|
| ChatGPT | Team or Enterprise only | Drafting, summarisation, research. No customer personal data. | DPA signed, no-train default |
| Claude | Team or Enterprise | Drafting, code review, research. No customer personal data. | DPA signed, no-train default |
| GitHub Copilot | Business or Enterprise | Code assistance for non-client repositories | DPA signed |
| Azure OpenAI | Enterprise (EU Data Zone) | Product AI features, customer-facing | DPA signed, EU residency |
| Microsoft Copilot (M365) | Enterprise | Drafting and document workflows in M365 | DPA signed |
| Otter, Fireflies, Zoom AI Companion | Business tier | Internal-only meeting transcription | DPA signed, ePrivacy review pending |
Then the not-approved list:

- Consumer-tier accounts (ChatGPT Free / Plus, Copilot Pro, Gemini Advanced personal accounts)
- Any AI tool signed up for with a personal email
- Any AI browser extension not on the approved list
- Any tool that does not have a Data Processing Agreement available
Date the table. Mark when each row was last reviewed. An approved tool list with no dates erodes trust the moment a vendor changes terms.
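One way to keep the dates honest is to hold the list as data and let a script nag. A minimal sketch, with illustrative tools and dates and a roughly-quarterly threshold as assumptions:

```python
from datetime import date

# Each row mirrors a line of the approved-tools table.
APPROVED = [
    {"tool": "ChatGPT (Team)", "last_reviewed": date(2026, 1, 15)},
    {"tool": "Claude (Team)", "last_reviewed": date(2026, 1, 15)},
    {"tool": "GitHub Copilot (Business)", "last_reviewed": date(2025, 10, 2)},
]

REVIEW_EVERY_DAYS = 92  # roughly quarterly, the stated minimum cadence

def stale_rows(today: date) -> list[str]:
    """Return the rows whose last review is older than the cadence."""
    return [
        f"{row['tool']} (last reviewed {row['last_reviewed']:%Y-%m-%d})"
        for row in APPROVED
        if (today - row["last_reviewed"]).days > REVIEW_EVERY_DAYS
    ]

if __name__ == "__main__":
    for row in stale_rows(date.today()):
        print("REVIEW OVERDUE:", row)
```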
Three tiers, with concrete examples per tier. Most teams over-engineer this section. The goal is a rule that the support agent can apply in three seconds, not a taxonomy that classifies every possible data category.
Green: unrestricted for approved tools. Public information, marketing copy, your own internal-tool code without embedded credentials, anonymised or synthetic data, query plans against test data.
Yellow: approved tools only, with care. Internal business documents (strategy, meeting notes without participant names, internal emails without third-party content), aggregated non-identifiable customer data, source code for products via Copilot Business or equivalent, employee names and roles in internal context.
Red: explicit approval required, or prohibited entirely. Customer personal data (names, emails, support threads), special category data under Article 9, children's data, credentials and secrets, and client-confidential material.
For Red data, the default is "do not put this into any AI tool." Exceptions exist (a self-hosted model on internal infrastructure, a dedicated Azure OpenAI deployment with proper DPA and access controls), but they require documented approval from the data controller or DPO. The exceptions live in the AUP itself, not in tribal knowledge.
The most common Red-tier failure is not the obvious one. It is the support agent who pastes the customer's email and complaint into ChatGPT to draft a reply. The data feels mundane (one name, one email, one paragraph of text), the time pressure is real, and the consumer-tier ChatGPT is one keyboard shortcut away. Your data classification has to address that exact scenario by name, not in the abstract. The named approved alternative ("use the Team workspace at the same URL, not your personal account") is what makes the rule actionable. See the incident runbook for the response side.
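The three-second rule can even be approximated in tooling. A rough sketch of a pre-paste check; the regex patterns are illustrative assumptions that catch only the most obvious Red-tier identifiers, a reminder mechanism rather than a real DLP:

```python
import re

# Crude patterns for obvious Red-tier identifiers. Deliberately
# simple: the goal is a three-second nudge, not full detection.
RED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def red_tier_flags(text: str) -> list[str]:
    """Return the Red-tier identifier types found in the text."""
    return [name for name, pattern in RED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Customer Jane Doe (jane@example.com) complains about invoice 4411."
    flags = red_tier_flags(draft)
    if flags:
        print("Red tier, stop:", ", ".join(flags))
        print("Use the approved Team workspace, not a personal account.")
```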
New AI tools appear constantly. Your team will find them. The question is whether they check with you first or start using them and tell you later. The answer depends entirely on how fast your approval process is.
A 48-hour lightweight evaluation looks like this: the requester submits the tool name, URL, intended use case, and the data types involved; the DPO or IT lead runs the one-page checklist below and replies with approve, approve with conditions, or deny.
If your approval process needs a committee meeting, a 30-page risk assessment, and sign-off from three directors, people will skip it. Speed matters more than thoroughness for the first triage. A deeper review happens later for the tools that handle sensitive data.
The evaluation checklist that fits on a single page:
## AI Tool Evaluation
Tool name:
URL:
Requested by:
Date:
### Basic checks
- [ ] Business / Enterprise tier available?
- [ ] DPA available and reviewed?
- [ ] Data processing location documented?
- [ ] Training opt-out confirmed (no-train by default)?
- [ ] Data retention period acceptable?
- [ ] Sub-processors listed in the DPA?
- [ ] Breach notification terms included?
- [ ] EU representative identified (for non-EU vendors)?
### Data assessment
- [ ] What data types will this tool see?
- [ ] Does the use case require personal data, or can it work with anonymised inputs?
- [ ] Does the use case require special category data under Article 9?
- [ ] Does the use case involve children's data?
### Decision
- [ ] Approved (added to approved list with date)
- [ ] Approved with conditions:
- [ ] Denied (reason: )
Evaluated by:
Date:
That checklist is the second-most-important artifact in the AUP. The first is the approved list itself. Together they let a small team move fast without making the security function the bottleneck.
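If you want the checklist to produce structured records instead of loose pages, a small data type is enough. A sketch with field names taken from the checklist above and decision states from the new-tool flow in the template; all of it is an illustrative assumption, not a required format:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    APPROVED_WITH_CONDITIONS = "approved with conditions"
    DENIED = "denied"

@dataclass
class ToolEvaluation:
    tool: str
    url: str
    requested_by: str
    requested_on: date
    basic_checks: dict[str, bool] = field(default_factory=dict)
    decision: Decision | None = None
    note: str = ""  # conditions for conditional approvals, reason for denials

    def decide(self, decision: Decision, note: str = "") -> None:
        # Denials and conditional approvals both need a note.
        if decision is not Decision.APPROVED and not note:
            raise ValueError("conditions or a denial reason are required")
        self.decision, self.note = decision, note
```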
Things will go wrong. Someone will paste customer data into a free-tier tool. A new AI browser extension will turn out to read every email tab. Your AUP should tell people exactly what to do, in four steps that fit on a sticky note:

1. Stop using the tool for the purpose that triggered the concern.
2. Report to the named channel or person. No blame.
3. Document: tool, data, when, account type.
4. Wait for the DPO / IT lead's assessment.
The connection most AUPs miss: every reported incident, even the ones you decide not to notify, has to land in your Article 33(5) breach register. That register is the document a supervisory authority asks for first in an audit. An AUP that produces structured incident reports is what feeds it. An AUP that keeps incidents in private email threads is invisible to the audit and worse than nothing.
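A minimal sketch of that feed, assuming a CSV register; the filename and column names are placeholders, chosen to mirror what Article 33(5) asks you to document (facts, effects, remedial action):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

REGISTER = Path("breach_register.csv")  # hypothetical location
FIELDS = ["reported_at", "tool", "account_type", "data_involved",
          "facts", "effects", "remedial_action", "notified"]

def record_incident(entry: dict[str, str]) -> None:
    """Append a structured incident report to the register."""
    row = {"reported_at": datetime.now(timezone.utc).isoformat(), **entry}
    is_new = not REGISTER.exists()
    with REGISTER.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

record_incident({
    "tool": "ChatGPT (consumer tier)",
    "account_type": "personal",
    "data_involved": "one customer name and email",
    "facts": "support reply drafted from a pasted complaint",
    "effects": "single data subject, low volume",
    "remedial_action": "chat deleted, Team workspace assigned",
    "notified": "no",
})
```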
The amnesty provision at launch is the part most policies skip and the one that pays back the most. Something like: "If you have used unapproved tools with sensitive data before this policy came into force, report it within 30 days with no disciplinary consequences. The point is a complete picture, not scapegoats." Without amnesty, people hide past usage. With it, the discovery list during the rollout doubles or triples, and your real risk picture becomes legible.
The amnesty provision is the highest-yield single sentence in the document. It costs nothing, and the discovery data it produces is the difference between a risk assessment based on real prior usage and one based on the optimistic version employees feel safe to disclose. Make the amnesty window exactly 30 days, longer than feels necessary but short enough to be a real deadline, and tell every team lead about it personally on day one of rollout.
Here is the actual one-page document you can copy and adapt. Replace the placeholders, drop in the tables from the previous sections, and ship version 1.0 this week.
# AI Acceptable Use Policy
Version: 1.0 | Effective: [DATE] | Next review: [DATE + 3 months]
## Purpose
This policy explains how [COMPANY] employees, contractors,
and consultants are expected to use AI tools at work. It exists
so the team can use AI confidently without putting company,
customer, or employee data at risk.
## Scope
Applies to all employees, contractors, freelancers, and third
parties using AI tools for [COMPANY] work, on company or personal
devices, at any location.
## Approved AI tools
[Insert table from §"Approved tools" with named tools, tiers,
approved use cases, and DPA status. Date it. Mark "last reviewed".]
## Tools that are NOT approved
- Consumer-tier accounts (ChatGPT Free / Plus, Copilot Pro,
Gemini Advanced personal accounts)
- Any AI tool signed up for with a personal email
- Any AI browser extension not on the approved list
- Any tool that does not have a Data Processing Agreement available
## Data rules
[Insert Green / Yellow / Red classification from §"Data classification"
with concrete examples per tier. The Red list is the load-bearing
section.]
## Using a new AI tool
1. Submit the request via [SLACK CHANNEL or FORM URL] with the tool
name, URL, intended use case, and the data types involved.
2. The DPO / IT lead reviews within 48 hours and replies with
approve / approve with conditions / deny.
3. Approved tools are added to the table above with the date.
## When something goes wrong
1. Stop using the tool for the purpose that triggered the concern.
2. Report to [NAMED CHANNEL / PERSON]. No blame.
3. Document: tool, data, when, account type.
4. Wait for the DPO / IT lead's assessment.
**Amnesty (until [DATE + 30 days]):** if you have used unapproved
tools with sensitive data before this policy came into force,
report it within 30 days with no disciplinary consequences.
## Responsibilities
- **Employees:** follow this policy. Report incidents. Request
evaluation for new tools. Ask questions if unsure.
- **Team leads:** ensure team awareness during onboarding. Escalate
repeated violations.
- **DPO / IT lead:** maintain the approved list. Evaluate new tools
within 48 hours. Process incidents. Update this policy quarterly.
## Review
This policy is reviewed at least quarterly. Last review: [DATE].
The current version is always at [INTERNAL URL].
## Questions
[NAMED PERSON or CHANNEL]
That is the entire policy. No fifteen sections of definitions, no three pages of legal disclaimers, no repetitive "employees shall ensure compliance with all applicable regulations." The legal team should review it. They should not write it. A policy written by lawyers for lawyers protects the company on paper. A policy written by someone who understands the work protects the company in practice.
The policy is the artifact. The rollout is what makes it real. Thirty days, four weeks, four concrete moves.
Week 1: ship the document. Publish the policy at a stable internal URL. Send a single message to the whole team explaining what changed, what is new (the approved list, the data rules, the amnesty window), and where the new-tool request form lives. Keep it three paragraphs. Resist the urge to attach the PDF.
Week 2: walk every team lead through the policy. A 15-minute call each. Three real scenarios per call: "your team member wants to summarise a customer complaint", "your team member wants to use a new tool you have not heard of", "your team member realises they pasted a contract into the wrong account". If the team lead cannot answer those three questions from the document, the document needs another revision before the team-wide training.
Week 3: team-wide training. A 30-minute session, recorded. Walk through the approved tools, the data classification rules, the new-tool process, and the incident reporting flow. Open Q&A at the end. The recording becomes part of onboarding for new hires.
Week 4: process the amnesty disclosures. Some will arrive in the first 24 hours. Most will arrive in the last week of the window. Each one is documented in the breach register, assessed for severity, and resolved with no disciplinary action regardless of what surfaces. The data this produces is the most accurate picture of past AI usage you will ever get.
After 30 days, the policy is in steady state. The new-tool requests come through the form. The incident reports come through the channel. The approved list is dated and visible. The team lead conversations have surfaced the questions that the document was unclear about, and version 1.1 incorporates the fixes.
Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure "a sufficient level of AI literacy" of their staff and anyone else operating AI systems on their behalf, taking into account the technical knowledge, experience, education, training and the context in which the systems are used. The obligation has been in force since 2 February 2025. Enforcement begins on 2 August 2026.
The Article does not specify a curriculum, certification, or training-hour count. It uses a principles-based standard: the literacy must be "sufficient" for the role and the system. There is no required exam, no mandatory course, no template the European Commission has blessed.
What "sufficient" means in practice has not been litigated. I am not yet sure how supervisory authorities will interpret it once enforcement starts in August 2026. The wording is principles-based, the case law is empty, and the first enforcement actions will set the bar. The honest answer is: nobody knows the floor.
The first signals are starting to arrive, though. As of March 2026, Germany's BNetzA and France's CNIL (in its advisory capacity on AI under the Loi Informatique et Libertés handover) have both indicated that AI literacy will be assessed as part of broader AI Act compliance reviews, not as a standalone obligation. The practical read is that an organisation that can produce a documented AUP, an attendance log for the rollout training, and a quarterly review record will probably meet the bar for the first wave of supervisory inquiries. The organisations that will struggle are the ones with no document, no training record, and no review cadence at all.
What is reasonably defensible today, and almost certainly meets the bar for a small-to-mid team:

- A published, dated AUP that covers employees and contractors.
- A recorded rollout training with an attendance log.
- A calendared quarterly review of the policy and the approved tool list, with the review dates documented.
Article 4 covers more than just employees. The Commission's guidance is explicit: "staff and other persons dealing with the operation and use of AI systems" includes contractors, freelancers, consultants, procurement teams, compliance and legal teams, executives, and anyone else operating an AI system on the company's behalf. If your AUP applies only to employees, you have a gap. The fix is one line in §"Scope".
The quarterly review is the second half of the obligation. AI provider terms change faster than industry policies usually move. OpenAI has materially updated its terms multiple times since ChatGPT launched. Google and Microsoft adjust their data handling regularly. Anthropic expanded its Google Cloud TPU sub-processor footprint in October 2025 without triggering any contract renegotiation. An approved tool list from six months ago may reference contractual terms that no longer exist. Quarterly is the minimum cadence, and the review should be calendared, not vibes-based.
The five failure modes I see most often, with the fix for each.
Banning everything. A policy that is a list of "don'ts" with no "dos" gets ignored. People conclude that leadership does not understand how they work, and quietly carry on. Every prohibition needs to come with either an approved alternative or an honest "we cannot support this use case yet, here is when we will revisit."
Forgetting contractors and freelancers. The employees follow the policy. The freelance designer hired last month, the offshore development team, and the consultant building the new dashboard have never seen it. Article 4 makes this gap material. The fix is one line in §"Scope" and a checkbox in your contractor onboarding.
Writing once and forgetting. AI vendor terms shift. Tool capabilities expand. New attack patterns emerge. An AUP from 2024 that does not mention prompt injection, AI memory, or sub-processor cascades is now stale. Quarterly review is the minimum.
Treating it as a legal document instead of a practical guide. Legal review is necessary; legal authorship is the failure mode. The document needs to live in the heads of the people doing the work, and that requires it to be written in their voice — not the voice of an external counsel optimising for paper liability.
Skipping the test before publication. Walk through three real scenarios with someone from each major department before you ship version 1.0. Can the support lead figure out whether they can use ChatGPT Team to summarise a customer complaint? Can the developer determine whether Copilot Business is approved for the client project they are working on? If the answer requires re-reading the policy twice, simplify before launch.
Ship version 1.0 this week. The one-page template above plus the four load-bearing sections is enough. Date it, publish it, train the team in 30 days, and put the next quarterly review on the calendar. Article 4 enforcement starts on 2 August 2026; the AUP plus a recorded training session plus an internal attendance record is the cheapest way to demonstrate "sufficient AI literacy" today. A one-page policy published this week protects you more than a fifteen-page policy stuck in three months of legal review. Version 1.0 is better than version 0.