Every organisation using AI needs a policy. Most don’t have one. These templates are free to copy, adapt, and use — written by practitioners, not lawyers. Take what fits your context and make it yours.
Your staff are already using AI tools — with or without your knowledge. Without a policy, you have no visibility into what data is being shared with third-party AI providers, no process for reviewing AI-generated outputs before they reach clients, and no framework for responding when something goes wrong.
A policy doesn’t need to prohibit AI use. It needs to make expectations clear: what tools are approved, what data can be used as input, who reviews AI outputs before they’re acted on, and who owns the outcome when an AI system produces an error.
These templates are starting points. Adapt them to your organisation’s risk tolerance, your regulatory environment, and your actual AI usage patterns. If you need help doing that, the CAIG certification covers exactly this domain.
— Copy this section into a Google Doc or Word document and adapt as needed —
Version 1.0 — [Organisation name] — Effective [date] — Review date: [12 months from effective date]
Purpose. This policy sets out [Organisation name]’s expectations for staff use of AI tools and systems, including large language models (LLMs), AI-assisted writing tools, AI coding assistants, and automated decision support systems. It applies to all employees, contractors, and third parties acting on behalf of the organisation.
Approved tools. Staff may use AI tools that have been reviewed and added to the organisation’s approved list. The approved list is maintained by [IT/Technology/Compliance] and is available at [internal link]. Staff must not use unapproved AI tools for work purposes without prior written approval from [responsible person/team].
Data input restrictions. Staff must not input the following into any AI tool, whether approved or not: (a) personal information about clients, customers, employees, or third parties; (b) confidential business information, including financial data, intellectual property, strategic plans, and client contracts; (c) information subject to legal professional privilege; (d) health, medical, or sensitive personal data; (e) any data classified as confidential or restricted under the organisation’s data classification policy.
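If you want to back this clause with tooling rather than training alone, a lightweight pre-submission screen can catch the most obvious violations before a prompt leaves your network. The sketch below is illustrative: the patterns and the screen_prompt helper are hypothetical, and a regex screen catches recognisable formats (email addresses, card-like numbers), not confidential content in free text. Treat it as a backstop to the policy, not a substitute for it.

```python
import re

# Illustrative patterns only. A regex screen catches obvious formats,
# not context; "our client Jane is unwell" passes every pattern here.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone-like number": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def screen_prompt(text: str) -> list[str]:
    """Return reasons the prompt should be blocked (empty list = pass)."""
    return [
        f"possible {label} detected"
        for label, pattern in BLOCKED_PATTERNS.items()
        if pattern.search(text)
    ]

reasons = screen_prompt("Summarise this contract for jane.doe@example.com")
if reasons:
    print("Blocked:", "; ".join(reasons))  # route to a human instead
```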
Human review requirement. AI-generated outputs must not be submitted, published, acted upon, or shared with third parties without review and approval by a qualified staff member. The reviewing staff member is responsible for the accuracy, appropriateness, and compliance of any output they approve. “The AI generated it” is not a defence against errors or policy violations.
Prohibited uses. Staff must not use AI tools to: (a) generate misleading, fraudulent, or deceptive content; (b) automate decisions that have legal or significant practical effect on individuals without human oversight; (c) create synthetic media (deepfakes, AI-generated voice, fabricated imagery) without explicit authorisation; (d) circumvent security controls, access controls, or data handling requirements; (e) conduct activities that would violate applicable law, regulation, or contractual obligation.
Disclosure. When AI tools have contributed substantially to a work product delivered to a client or external party, staff must disclose this unless the organisation has determined disclosure is not required for that use case. [Adapt this to your sector and contractual obligations.]
Incident reporting. Staff must report any AI-related incident — including errors, unexpected outputs, data exposure, or security concerns — to [responsible person/team] within [24/48] hours of discovery. An AI incident is any event where an AI system produces an output that causes or could cause material harm, regulatory exposure, reputational damage, or data loss.
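A fixed report format makes the [24/48]-hour deadline easier to meet, because staff know exactly what to capture. One possible shape for an incident record, sketched with hypothetical field names that mirror the definition above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical field names; adapt to your own incident register.
@dataclass
class AIIncidentReport:
    reporter: str
    ai_system: str                # which tool produced the output
    discovered_at: datetime
    description: str              # what happened, in plain language
    potential_impact: list[str]   # e.g. ["data exposure", "regulatory exposure"]
    evidence_preserved: bool      # prompts, outputs, and logs retained?
    near_miss: bool = False       # caught before any harm occurred
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = AIIncidentReport(
    reporter="j.smith",
    ai_system="cloud-llm-chat",
    discovered_at=datetime(2025, 3, 4, 9, 30, tzinfo=timezone.utc),
    description="Draft client letter contained a fabricated case citation.",
    potential_impact=["reputational damage"],
    evidence_preserved=True,
    near_miss=True,
)
```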
Policy owner. [Name/role] is responsible for maintaining this policy, reviewing it annually, and updating the approved tools list. Questions about this policy should be directed to [contact].
— End of template —
— Copy and adapt —
Version 1.0 — [Organisation name] — Effective [date]
Purpose. This policy governs how [Organisation name] manages data that is used as input to, processed by, or generated by AI systems. It supplements the organisation’s primary data governance and privacy policies.
Data classification before AI use. Before using data as AI input, staff must confirm its classification. Only data classified as [public / internal use] may be used with externally hosted AI systems (e.g. cloud LLM APIs) without explicit approval from [Data Owner/DPO]. Confidential or restricted data may only be used with AI systems that process data entirely within the organisation’s approved infrastructure.
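Where approved tools sit behind a shared gateway or proxy, this clause can be enforced in code rather than left to individual judgement. A minimal sketch, assuming a simple two-attribute model (the data’s classification and where the AI system runs); the names are ours, not a standard:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

class Hosting(Enum):
    EXTERNAL = "externally hosted, e.g. a cloud LLM API"
    INTERNAL = "organisation-approved infrastructure"

def ai_use_permitted(data: Classification, system: Hosting) -> bool:
    """Mirror of the policy clause: confidential or restricted data never
    leaves approved infrastructure. The explicit [Data Owner/DPO] approval
    path is out of scope for this sketch."""
    if system is Hosting.INTERNAL:
        return True
    return data in (Classification.PUBLIC, Classification.INTERNAL)

assert ai_use_permitted(Classification.INTERNAL, Hosting.EXTERNAL)
assert not ai_use_permitted(Classification.CONFIDENTIAL, Hosting.EXTERNAL)
```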
Vendor data handling. Before approving an AI tool, [IT/Procurement] must confirm: (a) whether the vendor uses customer inputs to train or fine-tune models; (b) where data is stored and processed; (c) data retention periods; (d) whether stored data can be deleted on request; (e) the vendor’s sub-processor list and data processing agreements. This information must be documented in the approved tools register.
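Items (a) through (e) map directly onto fields in the approved tools register, which makes the confirmation step auditable. A sketch of what one register entry might record; the tool name, field names, and values are all invented for illustration:

```python
# One record per approved vendor/tool; values below are examples only.
vendor_record = {
    "tool": "ExampleWriter AI",                # hypothetical vendor
    "trains_on_customer_inputs": False,        # (a) confirmed in writing?
    "data_location": "EU (Frankfurt region)",  # (b) storage and processing
    "retention_period_days": 30,               # (c) vendor retention period
    "deletion_on_request": True,               # (d) deletion supported
    "subprocessors_reviewed": True,            # (e) sub-processor list + DPAs
    "dpa_reference": "contracts/examplewriter-dpa.pdf",
    "reviewed_by": "it-procurement",
    "review_date": "2025-03-01",
}
```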
Training data. Where the organisation develops or fine-tunes AI models using internal data, the following requirements apply: (a) only data for which the organisation holds appropriate rights may be used; (b) personally identifiable information must be removed or pseudonymised unless explicit consent has been obtained; (c) the training dataset must be documented with provenance, collection date, and any known quality limitations; (d) a data retention decision must be made for the training dataset independent of the resulting model.
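For requirement (b), keyed hashing is one common pseudonymisation technique: records stay linkable across a dataset without exposing the raw identifier. The sketch below assumes that approach; whether it satisfies your regulator depends on re-identification risk and key management, which are review questions, not code questions.

```python
import hashlib
import hmac

# Placeholder only: in practice the key lives in a secrets manager and is
# rotated. Anyone holding the key can re-identify records.
SECRET_KEY = b"rotate-me-and-store-securely"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C-10293", "note": "Requested a refund twice in March."}
record["customer_id"] = pseudonymise(record["customer_id"])
```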
AI output data. Outputs generated by AI systems are subject to the same data handling requirements as other organisational data. If an AI output contains personal information — for example, if an LLM reproduces personal data from a training set — this must be treated as a potential data breach and reported accordingly.
— Copy and adapt —
Version 1.0 — [Organisation name] — Effective [date]
What constitutes an AI incident. An AI incident is any event where an AI system produces an output or takes an action that causes, or has the potential to cause: material harm to a person; financial loss; regulatory or legal exposure; reputational damage; data exposure or privacy violation; or discriminatory impact on an individual or group.
Near-misses must be reported. A near-miss — where an AI error was caught before causing harm — must also be reported. Near-miss reporting is how organisations learn before an incident causes damage.
Immediate response (0–4 hours). On discovering an AI incident: (1) Stop the AI system from producing further outputs if it is safe to do so. (2) Preserve evidence — do not delete prompts, outputs, or logs. (3) Notify [incident response contact] immediately. (4) Do not communicate externally about the incident without authorisation from [Legal/Communications].
Assessment (4–24 hours). [Incident response team] will assess: (a) the scope and severity of impact; (b) whether the incident constitutes a notifiable data breach; (c) whether affected parties need to be informed; (d) whether regulators must be notified; (e) what remediation is required before the system resumes operation.
Post-incident review. Within 14 days of resolution, a post-incident review must be completed covering: root cause analysis; what controls failed or were absent; what changes will be made to prevent recurrence; and whether the incident reveals a systemic risk requiring a policy update. All reviews must be documented and retained for [period].
Use this checklist before approving any AI vendor or tool. Score each item and document responses in your vendor register.
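One way to make “score each item” concrete is a numeric rubric with explicit thresholds. The items, scores, and cut-offs below are illustrative, not part of the checklist itself; set your own and record the verdict in the register:

```python
# 0 = fail, 1 = partial, 2 = pass for each checklist item (examples only).
responses = {
    "no training on customer inputs": 2,
    "data residency documented": 2,
    "deletion on request supported": 1,
    "sub-processor list provided": 0,
}

score = sum(responses.values())
maximum = 2 * len(responses)
if score == maximum:
    verdict = "approve"
elif score >= 0.75 * maximum:
    verdict = "approve with conditions"
else:
    verdict = "escalate to governance review"
print(f"{score}/{maximum}: {verdict}")  # here: 5/8 -> escalate
```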
Key AI-relevant data requirements by jurisdiction. This is a practitioner summary — not legal advice. Consult your legal team for regulatory obligations specific to your sector and use case.
The CAIG certification covers AI governance in depth — risk frameworks, regulatory requirements, vendor assessment, policy development, and incident response. Built for governance practitioners who need to do this properly.