
AI usage policy — starter template


A fork-and-customize markdown template for an internal AI usage policy. Why a written policy is the single highest-leverage governance instrument, what one should contain, and the full template you can publish today.

A written AI usage policy is the cheapest piece of governance infrastructure you can deploy. It costs you a working session and a publish step. It unblocks every rollout decision downstream — what tools to evaluate, what data to allow near them, who reviews what. And the absence of one is the single most reliable way to lose IT or security as a partner halfway through a rollout.

This page contains a starter template. It’s not legal advice. It’s a working draft you can fork, fill in, and publish in your company in a week. The full template is also available as a downloadable Markdown file you can paste straight into your handbook tool of choice.

Why write one at all

Three reasons:

  1. It unblocks rollout. Without a policy, every tool evaluation becomes a one-off security conversation. With a policy, you have a list of allowed tools, a process for adding new ones, and a stated position on data classification. Procurement gets faster. Security gets easier. Your CHRO can answer the board’s “what are we doing about AI?” question with a document.

  2. It surfaces the disagreements early. A draft policy makes leadership argue about the questions that will come up later anyway: do we allow code paste into general assistants? Can engineering use Cursor on customer code? Who owns the prompt library? Better to have those arguments now, before a leak forces them.

  3. It signals seriousness to your team. A workforce that sees a written policy treats AI as part of the job. A workforce that doesn’t treats AI as a side hustle. The difference shows up in adoption numbers within months.

What a good policy contains

Six sections, no more:

  1. Allowed tools. Specific, named. Not “AI tools we approve” — actual product names. Update quarterly.
  2. Banned use cases. What you must not do, even with allowed tools. Customer PII into general assistants. Confidential financials into anything not on the allowed list.
  3. Review-before-shipping rules. What requires a human review before going to a customer, the public, a contract, or production code. Define the bar; don’t define the workflow.
  4. Data classification. Plain-language guidance: what data goes into what tools. Reference your existing data classification scheme if you have one; create the simplest possible version if you don’t.
  5. Escalation. Who you ask when the policy doesn’t cover your situation. A name and an email, not a form.
  6. Learning expectations. What you’re expected to learn, by when, and where the curriculum lives.

Anything else can wait. Specifically: do not include legal-jargon-heavy disclaimers, exhaustive vendor lists, or detailed incident response procedures in the policy itself. Those go in linked documents and get updated independently.
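The review-before-shipping rule for production code can be backed by repository tooling rather than trust alone. A minimal sketch for GitHub — the paths and team handles are placeholders, and it only enforces review if the branch-protection setting "Require review from Code Owners" is enabled on your main branch:

```
# .github/CODEOWNERS — example only; team handles are placeholders.
# With code-owner review required on the main branch, every pull
# request (AI-assisted or not) needs sign-off from a listed human.
*           @your-org/engineering
/billing/   @your-org/payments-team
```

This keeps the policy itself workflow-free, as section 3 recommends: the bar lives in the policy, the mechanism lives in the repo.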

The template

Below is the full text of the template. Wherever you see [FILL IN], the policy needs your specifics. The Markdown version pastes cleanly into Notion, Confluence, Coda, GitHub, or any handbook tool that takes Markdown.

# AI Usage Policy

**Owner:** [FILL IN — name and role of the program owner]
**Last updated:** [DATE]
**Reviewed quarterly. Next review:** [DATE]

## 1. Why this exists

[Company] supports the responsible use of AI tools in day-to-day work.
This policy exists so you can use AI productively without having to
ask permission every time, and so we can scale that usage without
introducing new risks for our customers, our data, or our team.

If you're not sure whether something is covered, default to asking.
The escalation contacts are in section 6.

## 2. Allowed tools

The following tools are approved for use across the company:

- **[Tool name]** — [what it's used for, e.g. "general drafting and synthesis"]
- **[Tool name]** — [what it's used for]
- **[Tool name]** — [what it's used for]

Adding a new tool requires a request to the program owner. The
turnaround is five business days.

## 3. Banned use cases

Regardless of which approved tool you're using, the following are
prohibited:

- Pasting customer personally identifiable information (PII) into
  any AI tool unless that tool is explicitly listed as approved for
  PII (see section 2).
- Pasting confidential financial information, unannounced product
  details, or material non-public information into any AI tool.
- Using AI-generated content in customer-facing communication
  without the human review described in section 4.
- Using AI to make hiring, firing, performance review, or
  compensation decisions without explicit human judgement on top.

## 4. Review before shipping

The following outputs require a human review before they ship:

- **Customer-facing copy.** Email campaigns, public posts, support
  responses sent to a named customer.
- **Code shipping to production.** Any AI-generated code that lands
  in our main branch must be reviewed by a human engineer who can
  speak to it.
- **Contracts, statements of work, or legal documents.** AI is fine
  as a drafting aid. The final version is reviewed by [FILL IN].
- **Decisions with material business impact.** Hiring, firing,
  pricing changes, contract decisions. AI may inform these; humans
  decide them.

The review bar is "would a competent human have caught this?" Not a
formal process — a sanity check by someone with context.

## 5. Data classification

We treat data in three buckets:

- **Public.** Already published. Free to use in any approved tool.
- **Internal.** Not published, but no harm if leaked. Free to use in
  approved tools that are listed for internal data.
- **Confidential.** Customer data, financials, hiring information.
  Use only in tools listed as approved for confidential data. When
  in doubt, treat as confidential.

If you're unsure how to classify a specific piece of information,
ask.

## 6. Escalation

For any question this policy doesn't cover:
- **Day-to-day questions:** [Name, email, Slack handle]
- **Security questions:** [Name, email, Slack handle]
- **Legal questions:** [Name, email, Slack handle]

A "this isn't covered" message is a feature, not a failure. The
policy gets updated when those messages reveal gaps.

## 7. Learning expectations

We expect everyone to maintain a working understanding of AI tools
relevant to their role. The current curriculum lives at [FILL IN —
internal URL]. New starters complete the foundational track within
their first 90 days; team-specific deep-dives are expected within
the first 180 days.

If you find a gap in the curriculum, tell the program owner. The
curriculum gets updated quarterly.

## Changelog

- **[DATE]:** Initial publication.

How to customize

Three rules of thumb when adapting this template:

  1. Keep it short. A policy that runs to 5,000 words is a policy nobody reads. Aim to fit on a single printed page. The version above is roughly 600 words.

  2. Name names. Sections 2, 4, and 6 are the most consequential. Vague references like “the security team” generate friction; named individuals generate action. If a name changes, update the policy — it’s a 30-second edit.

  3. Set a review cadence and honor it. Quarterly is the sweet spot. Annual is too slow for a tool category that changes monthly. Monthly creates change fatigue. Pick a quarterly date, put it in the calendar, and don’t skip it.

If your organization already has a data classification scheme, point at it from section 5 instead of writing a new one. If you have a tool-evaluation process, point at it from section 2. The policy is the index; it doesn’t need to contain every detail.
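If you later wire the policy into internal tooling — a Slack bot, a pre-flight check in a paste workflow — the tool-by-data-class matrix from sections 2 and 5 reduces to a small lookup. A minimal sketch in Python; the tool names and their approved classes are invented placeholders, not recommendations:

```python
# Illustrative only: tool names and approved data classes are placeholders.
# Mirrors the policy's rule: a tool may only receive data classes it is
# explicitly approved for; anything unlisted is rejected.

APPROVED_TOOLS = {
    "general-assistant": {"public", "internal"},
    "enterprise-assistant": {"public", "internal", "confidential"},
}

def is_allowed(tool: str, data_class: str) -> bool:
    """Return True if the policy permits sending this data class to this tool.

    Unknown tools or data classes are rejected, matching the policy's
    "when in doubt, treat as confidential and ask" default.
    """
    return data_class in APPROVED_TOOLS.get(tool, set())
```

The deny-by-default shape matters more than the specific mapping: an unlisted tool or an unrecognized data class should fail closed, which is exactly the escalation path the policy prescribes.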

If you’ve taken the assessment, governance score below 40 means this template is your single highest-leverage move. If you’re at 40–69, the bar is higher: revisit your existing policy against this structure and find the gaps.

— / Next move

Where does your org actually stand?

Ten minutes. Three dimensions. A leadership-shareable baseline.