Production AI Institute — vendor-neutral certification for AI practitioners
AI Incident Registry
Medium · Logistics · 2024 · DPD

DPD Chatbot Jailbroken to Criticise the Company

A DPD customer jailbroke the delivery company's AI chatbot, causing it to swear, criticise DPD's own service, and write a poem about how unhelpful it was. The interaction was posted on social media and went viral, generating significant negative press coverage for DPD.

D1 · Input Governance | D5 · Deployment Safety

What happened

Ashley Beauchamp, a DPD customer frustrated with a lost parcel, discovered that the company's AI chatbot could be manipulated by instructing it to ignore its previous instructions and behave as a different AI without restrictions. The chatbot proceeded to swear at him, write a poem criticising DPD's service, and describe itself as the 'worst chatbot in the world'. Beauchamp shared the exchange on social media where it was widely circulated.

PSF Analysis

How the Production Safety Framework maps to this failure

A D1 failure that became a D5 failure. The prompt injection was a known attack pattern — 'ignore previous instructions' is one of the most basic jailbreak techniques, documented extensively before this deployment. A basic D1 input classifier that detects meta-level instruction injection would have blocked the attack. D5 also failed: pre-deployment red-teaming would have discovered this vulnerability in under an hour.
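The meta-level instruction detection described above can be sketched as a simple pattern filter. This is a hypothetical illustration of the D1 control, not DPD's actual system; the pattern list and function name are placeholders, and a production classifier would use a maintained model or service rather than a handful of regexes.

```python
import re

# Illustrative patterns for the most basic jailbreak phrasings,
# including the "ignore previous instructions" attack used here.
INJECTION_PATTERNS = [
    r"ignore (all |your )?(previous|prior|above) instructions",
    r"act as (a|an) (different|unrestricted) (ai|assistant|model)",
    r"disregard (your|the) (rules|guidelines|system prompt)",
]

def flag_prompt_injection(user_message: str) -> bool:
    """Return True if the message matches a known meta-instruction pattern."""
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A flagged message would be rejected or routed to a human agent instead of reaching the model, which is exactly the gate that was missing in this deployment.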

Controls that would have prevented this

Specific PSF controls mapped to each failure point

1. D1 · Input Governance — Implement prompt injection detection that identifies and rejects instructions to 'ignore previous instructions' or 'act as a different AI'.
2. D5 · Deployment Safety — Conduct red-team testing specifically targeting jailbreak patterns before deploying a customer-facing chatbot.
3. D2 · Output Validation — Apply a sentiment and content classifier to outputs; a chatbot producing profanity or self-criticism should be caught before publication.
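The D2 output gate in control 3 can be sketched as a final check before a reply is published. This is a hedged illustration only: the blocklist and self-criticism markers are invented placeholders, and a real deployment would use a trained content classifier rather than keyword matching.

```python
from typing import Optional

# Placeholder lexicons; production systems use maintained
# profanity and sentiment classifiers, not hard-coded lists.
PROFANITY = {"damn", "hell"}
SELF_CRITICISM_MARKERS = ["worst chatbot", "useless", "cannot help you at all"]

def release_output(reply: str) -> Optional[str]:
    """Return the reply if it passes checks, or None to escalate to a human."""
    lowered = reply.lower()
    if any(word in lowered.split() for word in PROFANITY):
        return None
    if any(marker in lowered for marker in SELF_CRITICISM_MARKERS):
        return None
    return reply
```

Even this crude gate would have caught a reply describing itself as the 'worst chatbot in the world' before it reached the customer.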

Outcome

DPD took the chatbot offline the same day. Significant viral press coverage including BBC News. The incident is frequently cited in discussions of enterprise chatbot security.

jailbreak · prompt-injection · customer-service · reputational-damage

Related incidents

High · 2024 — Air Canada Chatbot Bereavement Fare (D1, D5)
Critical · 2016 — Microsoft Tay Chatbot Taught to Produce Hate Speech (D1, D2)
Critical · 2018 — Uber Self-Driving Car Kills Pedestrian in Arizona (D6, D5)
NEXT STEP

Prove you understand how to prevent failures like this

The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.

Take the AIDA exam →
← All incidents