AI vendor assessment checklist
A production AI vendor is not safe because it has a polished demo. Assess it against data handling, security, observability, deployment safety, and vendor resilience before it touches customer workflows.
Use this before procurement signs off
Send these questions to the vendor, require written responses, and attach the evidence to your procurement record. A weak answer is not always a blocker, but it becomes an explicit implementation responsibility.
Data handling
Does the vendor use customer prompts, files, outputs, or traces to train or improve models?
Where is data processed and stored, including logs, embeddings, evaluation data, and support exports?
Can customer data be deleted on request across primary systems, backups, logs, and vector stores?
Does the vendor provide a DPA, sub-processor list, retention schedule, and cross-border transfer mechanism?
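The deletion question above is easiest to enforce when every data store is enumerated and each deletion returns evidence you can attach to the procurement record. A minimal sketch, with hypothetical store names and an in-memory stand-in for real systems:

```python
class InMemoryStore:
    """Stand-in for a primary DB, log index, backup set, or vector store."""
    def __init__(self, name, rows):
        self.name = name
        self.rows = rows

    def delete(self, customer_id):
        before = len(self.rows)
        self.rows = [r for r in self.rows if r["customer_id"] != customer_id]
        # Return evidence of what was removed, for the audit trail.
        return {"store": self.name, "deleted": before - len(self.rows)}

def delete_customer_data(customer_id, stores):
    """Issue deletion against every named store and collect the evidence."""
    return {s.name: s.delete(customer_id) for s in stores}

stores = [
    InMemoryStore("primary_db", [{"customer_id": "c1"}, {"customer_id": "c2"}]),
    InMemoryStore("vector_store", [{"customer_id": "c1"}]),
]
evidence = delete_customer_data("c1", stores)
```

The point is the shape, not the storage: if the vendor cannot produce a per-store deletion report like `evidence`, the answer to the deletion question is effectively no.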
Security and access
Is authentication SSO/SAML/OIDC compatible, and can access be scoped by role and environment?
Are API keys, tool tokens, and OAuth grants isolated by workspace, customer, and deployment stage?
Does the vendor document prompt injection, tool abuse, data exfiltration, and supply-chain threat controls?
What security attestations exist, and do they cover the AI-specific service, not only the parent cloud platform?
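Isolation of keys by workspace and deployment stage is testable, not just a policy statement. A minimal sketch of the check you would expect the vendor's API layer to perform (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApiKey:
    key_id: str
    workspace: str
    stage: str  # e.g. "dev" or "prod"

def authorize(key: ApiKey, workspace: str, stage: str) -> bool:
    """A key is valid only inside its own workspace and deployment stage."""
    return key.workspace == workspace and key.stage == stage
```

A useful acceptance test during the trial: take a dev-stage key and confirm it is rejected against the production workspace, rather than trusting the architecture diagram.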
Observability and audit
Can you export prompt, response, model version, latency, cost, tool call, user, and trace identifiers?
Are logs immutable enough for incident reconstruction and audit evidence?
Can quality, safety, latency, and cost degradation trigger alerts, rather than surfacing only on passive dashboards?
Does the vendor support tenant-level reporting for enterprise governance and customer assurance?
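The export question above is concrete: every field listed should appear in a single exportable record. A minimal sketch of such a record as a JSON line, with a hypothetical schema and placeholder values:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class TraceRecord:
    """One exportable audit record per AI interaction (hypothetical schema)."""
    trace_id: str
    user_id: str
    model_version: str
    prompt: str
    response: str
    tool_calls: list = field(default_factory=list)
    latency_ms: float = 0.0
    cost_usd: float = 0.0
    timestamp: float = 0.0

def export_record(record: TraceRecord) -> str:
    """Serialise one trace record as a JSON line for audit export."""
    return json.dumps(asdict(record), sort_keys=True)

record = TraceRecord(
    trace_id=str(uuid.uuid4()),
    user_id="u-123",
    model_version="vendor-model-2025-01",
    prompt="Summarise this ticket",
    response="...",
    tool_calls=["crm.lookup"],
    latency_ms=840.0,
    cost_usd=0.0031,
    timestamp=time.time(),
)
line = export_record(record)
```

If any of these fields is missing from the vendor's export, incident reconstruction becomes guesswork: ask specifically which fields are logged, retained, and exportable per tenant.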
Deployment safety
Can model, prompt, policy, and tool changes be versioned, tested, canaried, and rolled back?
Does the vendor announce model or platform changes with enough notice for regression testing?
Can you pin versions or isolate production from silent behaviour changes?
Is there a documented kill switch for unsafe or degraded AI behaviour?
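Version pinning and a kill switch can also live on your side of the integration, independent of what the vendor offers. A minimal client-side sketch, assuming a hypothetical version identifier and an environment-variable flag:

```python
import os

# Hypothetical pinned identifier; the vendor must honour explicit versions.
PINNED_MODEL = "vendor-model-2025-01"

def resolve_model(requested=None):
    """Refuse silent upgrades: only the pinned version reaches production."""
    model = requested or PINNED_MODEL
    if model != PINNED_MODEL:
        raise RuntimeError(f"unpinned model version requested: {model}")
    return model

def ai_enabled():
    """Kill switch: one env flag disables AI behaviour without a deploy."""
    return os.environ.get("AI_KILL_SWITCH", "0") != "1"

def handle_request(prompt):
    if not ai_enabled():
        # Safe non-AI fallback path; shape depends on your product.
        return "AI features are temporarily disabled."
    model = resolve_model()
    return f"[{model}] would process: {prompt}"
```

A kill switch that requires a deploy is not a kill switch; the flag here can be flipped by an on-call engineer in seconds, which is the property to verify with the vendor's own mechanism too.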
Vendor resilience
Can the deployment move to another model, provider, vector store, or framework without a full rewrite?
Are export formats open enough for migration of prompts, traces, workflows, embeddings, and evaluation sets?
What happens if the vendor changes pricing, deprecates a model, suffers an outage, or terminates access?
Are fallback providers tested, or only named in architecture diagrams?
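"Tested, not only named" means the fallback path runs in code, not just in a diagram. A minimal sketch of a provider abstraction with an ordered fallback chain, using hypothetical provider classes:

```python
from typing import Protocol

class CompletionProvider(Protocol):
    """The narrow interface every provider must satisfy."""
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider:
    def __init__(self, healthy=True):
        self.healthy = healthy

    def complete(self, prompt):
        if not self.healthy:
            raise ConnectionError("primary provider outage")
        return f"primary: {prompt}"

class FallbackProvider:
    def complete(self, prompt):
        return f"fallback: {prompt}"

def complete_with_fallback(prompt, providers):
    """Try each provider in order; raise only if every one fails."""
    last_err = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as err:
            last_err = err
    raise RuntimeError("all providers failed") from last_err
```

Because callers depend only on the `complete` interface, swapping providers is a configuration change rather than a rewrite, and the fallback chain can be exercised in CI by injecting an unhealthy primary.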