GOVERNANCE

How to Write an AI Usage Policy in One Day (Free Template)

A practical, one-page AI policy template you can adapt and deploy today.

By Matthew Keys

Most AI usage policies are either too long (40 pages no one will read) or too vague ("use AI responsibly"). Neither helps your team make good decisions.

This article gives you a practical, one-page template that covers what matters: which AI tools are allowed, what data can be shared, when outputs must be verified, and who to ask if uncertain.

You can adapt and deploy it in an afternoon.

Why You Need an AI Usage Policy

If your business doesn't have a policy, your employees are making AI governance decisions on your behalf. They're deciding what's safe to share with ChatGPT, whether to verify AI-generated content, and which tools to trust.

That's not a failure on their part. They're trying to work efficiently. But without clear guidance, they're guessing.

Ungoverned AI use is part of the "Admin Tax" problem: invisible costs and risks accumulating silently until something goes wrong.

A good AI policy removes the guesswork. It tells your team what's allowed, what's prohibited, and what requires approval.

What an AI Policy Must Cover

An effective AI usage policy for an SME needs five components:

  1. Approved Tools: Which AI services can staff use?
  2. Prohibited Data: What information must never be shared with AI tools?
  3. Verification Requirements: When must AI outputs be checked by a human?
  4. Escalation Path: Who do staff contact if they're uncertain?
  5. Review Schedule: How often will the policy be updated?

If your policy covers these five points clearly, you've done the job. Everything else is optional detail.

The Template

Below is a one-page AI usage policy template. Adapt it for your business. Change the approved tools, adjust the prohibited data list, and add your own escalation contact.

This template is intentionally simple. If it fits on one page and uses plain language, people will actually read it.

AI USAGE POLICY
[Your Company Name] | Effective [Date]

1. Purpose
This policy sets out how staff may use AI tools safely and productively. It protects client data, ensures accuracy, and manages risk.

2. Approved AI Tools
Staff may use the following AI tools for work purposes:

  • ChatGPT Plus (with data controls enabled)
  • Claude Pro
  • Microsoft Copilot (business tier)
  • [Add any other tools your business has approved]

Other AI tools require prior approval from [Policy Owner].

3. Prohibited Data
The following information must never be shared with AI tools:

  • Client names, contact details, or identifying information
  • Financial data (invoices, bank details, pricing)
  • Personal data covered by UK GDPR
  • Confidential contracts or agreements
  • Proprietary business information (trade secrets, product plans)

If you're uncertain whether data is safe to share, ask [Policy Owner] first.

4. When AI Outputs Must Be Verified
All AI-generated content must be reviewed by a human before use in:

  • Client-facing communications (emails, proposals, reports)
  • Legal or regulatory documents
  • Financial analysis or forecasts
  • Technical specifications

AI can draft. Humans must verify accuracy, tone, and completeness.

5. When to Ask for Approval
Contact [Policy Owner] before:

  • Using a new AI tool not listed above
  • Sharing any data you're uncertain about
  • Automating a client-facing process with AI

6. What Happens if the Policy is Breached
This policy exists to protect the business and our clients. Breaches will be handled on a case-by-case basis. Our goal is to learn and improve, not to punish.

Serious or repeated breaches may result in disciplinary action.

7. Policy Review
This policy will be reviewed every six months or when significant changes occur in AI technology or regulation.

8. Questions
If you're unsure about anything in this policy, contact [Policy Owner] at [email/phone].

Policy Owner: [Name]
Last Updated: [Date]
Next Review: [Date + 6 months]

How to Adapt This Template

The template above is deliberately generic. Here's how to customise it for your business:

1. Choose Your Approved Tools

Don't just list every AI tool you've heard of. Pick 2-4 that you're comfortable with and that integrate with your workflow.

If you're unsure, start with ChatGPT Plus and Claude Pro. Both have strong privacy controls and are widely used by UK businesses.

Important: If you approve a tool, make sure your team knows how to access it. Provide accounts or instruct staff to set up their own with company email addresses.

2. Define Prohibited Data Clearly

The list in the template is a starting point. Add anything specific to your business.

If you're a law firm: add "case details, witness statements, client correspondence."
If you're an accountancy practice: add "tax records, financial statements."
If you're a healthcare provider: add "patient records, medical histories."

Be specific. "Sensitive information" means nothing. "Client invoices" is clear.

3. Set Realistic Verification Requirements

Some businesses require every AI output to be verified. This is unrealistic for most SMEs.

Focus verification on high-risk outputs: client communications, legal documents, financial analysis, technical specifications.

Internal uses (brainstorming, drafting internal notes, research) can often skip verification if the stakes are low.

4. Name the Policy Owner

Someone needs to own this policy. Usually a director, senior manager, or head of operations.

This person is the escalation contact. Staff should know they can ask questions without judgement.

How to Roll Out the Policy

Writing the policy is the easy part. Making it stick requires communication and follow-up.

Step 1: Announce It Clearly

Don't just email the policy. Run a 15-minute team meeting or video call. Walk through it section by section. Answer questions.

Frame it positively: "We want you to use AI safely and effectively. Here's how."

Step 2: Provide Access to Approved Tools

If ChatGPT Plus is on your approved list, make sure staff know how to get it. If you're providing company accounts, set them up. If staff need to create their own, send clear instructions.

Don't tell people they can't use free ChatGPT without giving them an approved alternative. Compliance will be low.

Step 3: Check In After Two Weeks

Two weeks after rollout, follow up. Ask your team whether the policy is clear, whether the approved tools are working for them, and whether they've hit situations the policy doesn't cover.

Use this feedback to refine the policy.

Step 4: Review Every Six Months

AI is evolving quickly. Your policy should too.

Set a calendar reminder for six months from today. Review the policy. Update the approved tools list. Adjust verification requirements if needed.

Common Mistakes to Avoid

Most AI policies fail because they're too long, too vague, or not enforced. Here's what to avoid:

Mistake 1: Making It Too Long

A 20-page policy looks thorough. It also won't be read.

If your policy doesn't fit on one page, it's too long. Cut it down. Move detailed guidance to a separate FAQ document if needed.

Mistake 2: Being Too Vague

"Use AI responsibly" is not a policy. It's a platitude.

Be specific. Name tools. List data types. Define verification requirements. Your team should be able to read the policy and know exactly what they're allowed to do.

Mistake 3: Not Providing Approved Tools

Telling staff they can't use free ChatGPT without giving them an approved alternative is asking for non-compliance.

If you ban something, provide a replacement.

Mistake 4: Writing It and Forgetting It

A policy that's never reviewed becomes irrelevant within months.

Set a recurring calendar reminder. Review it every six months. Update it when new tools emerge or regulations change.

What to Do If Someone Breaches the Policy

Breaches will happen. Someone will paste sensitive data into ChatGPT. Someone will use an unapproved tool.

Your response sets the tone.

First breach: Treat it as a learning opportunity. Talk through what happened. Clarify the policy. Move on.

Repeated breaches: More serious. This may indicate the policy isn't clear, the person doesn't care, or the approved tools aren't fit for purpose. Investigate and act accordingly.

Serious breach (client data exposed): Follow your incident response process. Notify affected parties if required. Review the policy to prevent recurrence.

The goal is not to punish. It's to protect the business and build a culture where AI is used safely.

FAQs

Do I need a lawyer to review this?

For most SMEs, no. This is an internal policy, not a contract.

If you operate in a highly regulated sector (finance, healthcare, legal services), it's worth having a solicitor or your compliance function review it.

What if my team ignores the policy?

If compliance is low, the policy is probably too restrictive or unclear, or you haven't provided good alternatives.

Ask your team why they're not following it. Adjust accordingly.

Should I ban free ChatGPT entirely?

Depends. Free ChatGPT may use your conversations for model training unless you opt out in its data controls. Paid tiers (Plus, Team, Enterprise) offer stronger data controls, and the business tiers exclude your data from training by default.

If you're comfortable with the risk, allow free ChatGPT for non-sensitive work. If not, provide paid accounts or an alternative.

How do I know if data is "sensitive"?

If you'd be uncomfortable seeing it on the front page of a newspaper, it's sensitive.

Client names, financial data, personal information, and anything covered by a confidentiality agreement are all sensitive.

Final Thought: The Best Policy Is One That Actually Gets Used

Your AI policy doesn't need to be perfect. It needs to exist, be clear, and be communicated.

Use the template above. Adapt it for your business. Roll it out this week. Review it in six months. Iterate as you learn.

A simple policy that's actually followed is infinitely better than a comprehensive policy no one reads.

Need Help Implementing an AI Policy?

We help UK SMEs write and implement practical AI governance policies. Get it right the first time.

Get In Touch →
