Public campaign · PSF-aligned · open to cite

No hidden AI decisions. No ownerless agents. Publish the receipt.

The AI Right-To-Know asks organisations to publish a simple AI Safety Receipt for systems that affect people: owner, purpose, data boundary, human route, and incident process.

AI Safety Receipt
A public artifact people can understand.
1. Identify when AI is used in a workflow that affects people.
2. Name the accountable owner for the system.
3. Publish the data boundary in plain language.
4. Provide a human route for review, correction, or escalation.
5. Keep evidence for incidents, changes, and material failures.
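The five steps above can be sketched as structured data. This is a minimal illustration, not a published schema: the field names and the example receipt below are assumptions chosen to mirror the campaign's five points.

```python
# Hypothetical sketch of an AI Safety Receipt as structured data.
# Field names are illustrative assumptions, not an official schema.
REQUIRED_FIELDS = {
    "system",            # where AI is used in a workflow affecting people
    "owner",             # the accountable owner for the system
    "data_boundary",     # plain-language statement of what data it touches
    "human_route",       # how a person can get review, correction, or escalation
    "incident_process",  # how incidents, changes, and failures are evidenced
}

def missing_fields(receipt: dict) -> set:
    """Return the required receipt fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not receipt.get(f)}

# Example receipt (entirely hypothetical organisation and wording).
receipt = {
    "system": "Automated triage of customer support tickets",
    "owner": "Head of Customer Operations",
    "data_boundary": "Ticket text and account tier only; no payment data.",
    "human_route": "Any customer can reply 'human' to reach a named agent.",
    "incident_process": "Failures logged, reviewed weekly, material ones published.",
}

print(missing_fields(receipt))  # an empty set means the receipt is complete
```

A check this simple is the point: a receipt is complete when all five facts are stated, not when they are stated perfectly.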
The demand

AI that affects people should come with a receipt.

This is intentionally plain. A receipt does not claim perfection. It tells people the minimum facts they should not have to beg for: what the AI is doing, who owns it, what data it touches, how a person can intervene, and what evidence exists when something goes wrong.

PAI maps the receipt back to the Production Safety Framework and PAI-8 so small organisations can start with transparency, while serious deployments have a path into stronger evidence, Lab review, and formal assurance.

AI Right-To-Know support

I support the AI Right-To-Know: people should be able to see who owns AI systems that affect them, what data those systems touch, when a human can be reached, and how incidents are handled.

How it spreads

Useful first, citeable second, institutional by design.

1. Generate — A business, MSP, or consultant creates a receipt in minutes.
2. Publish — The receipt gets a stable public URL and registry entry.
3. Share — Customers, staff, partners, and local communities can ask others to match the standard.
4. Improve — The receipt points to concrete PSF actions, not vague promises.