
AI Readiness Assessment.
How to run it properly.

The complete facilitation guide for MSP engineers and consultants running PAI AI Readiness Assessments. Pre-engagement checklist, stakeholder interview questions for every role, PSF-aligned scoring, and how to structure the client readout and roadmap.


Pre-engagement checklist

Complete every item before on-site work begins. A poorly prepared engagement wastes your time and the client's. Incomplete access on the day is the most common cause of a partial report.

Scope confirmation

  • Confirm number of users in scope
  • Confirm which M365 workloads are licensed (Copilot, Power Platform, Purview)
  • Identify any existing AI tools in use (even informal / shadow AI)
  • Agree on-site vs remote delivery
  • Confirm lead contact and IT admin contact are different people
  • Request Global Reader + Security Reader + Exchange View-Only access

Stakeholders to book

  • CEO or MD — business goals and risk appetite (30 min)
  • IT Manager / CTO — current infrastructure and past initiatives (60 min)
  • Finance lead — cost structure, approval workflows, finance team size (30 min)
  • Operations or Ops Manager — day-to-day process pain (60 min)
  • HR lead — onboarding, people policy, compliance concerns (30 min)
  • At least two end-users from different departments (30 min each)

Documents to request

  • Current IT asset register or licence summary
  • Any existing IT strategy or roadmap documents
  • Org chart (to understand reporting lines and decision-making)
  • Last IT audit or risk assessment (if any)
  • Any existing AI or automation policies
  • Recent incident or change register (to understand operational rhythm)

Logistics

  • Confirm NDA signed before sending pre-questionnaire
  • Send pre-engagement questionnaire 5 business days before on-site
  • Book a separate debrief call for 3–5 days after on-site
  • Confirm laptop with all modules pre-installed (run 01-prereq.ps1 in advance; a pre-flight check sketch follows this list)
  • Prepare blank risk register template
  • Prepare blank interview notes template for each stakeholder
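
The toolkit's 01-prereq.ps1 handles module setup; its contents aren't reproduced here, but a minimal pre-flight check in the same spirit might look like the sketch below. The module names are assumptions based on the workloads this guide covers, not a list taken from the script.

```powershell
# Hypothetical pre-flight check: confirm the modules the discovery scripts
# rely on are installed before you leave for site. Module list is assumed.
$required = @(
    'Microsoft.Graph',                         # Entra ID / Conditional Access discovery
    'ExchangeOnlineManagement',                # Exchange, Purview, unified audit log
    'Microsoft.Online.SharePoint.PowerShell'   # tenant sharing settings
)

foreach ($name in $required) {
    if (Get-Module -ListAvailable -Name $name) {
        Write-Host "OK:      $name"
    } else {
        Write-Warning "MISSING: $name (Install-Module $name -Scope CurrentUser)"
    }
}
```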

Pre-engagement questionnaire

Send to the primary client contact 5 business days before on-site. Keep it short — 10 questions maximum. The goal is context, not exhaustive data collection. Longer questionnaires are ignored.

Send as an email with plain-text responses, or as a linked Word document.

1. How many staff does the organisation have, and across how many locations?
2. Which Microsoft 365 plan are you on today (Business Basic / Standard / Premium / E3 / E5)?
3. Do you currently use any AI tools — Copilot, ChatGPT, Otter.ai, or similar?
4. Has your organisation ever run an AI or automation project? If so, what happened?
5. What is the biggest operational pain point you deal with each week?
6. How are documents typically reviewed and approved (email, SharePoint, paper)?
7. Do you have a written AI or data use policy?
8. What does your customer data look like — where is it stored, who can access it?
9. Is your organisation subject to any compliance obligations (ISO 27001, SOC 2, GDPR/Privacy Act)?
10. What would a successful AI engagement look like to you in 12 months?

Stakeholder interview guides

One guide per stakeholder type. Each includes objective, recommended questions, the reason behind each question, and delivery notes. Not every question needs to be asked — let the conversation flow and use the guide to fill gaps.

👤 CEO / Managing Director (30 minutes)

Objective: Understand strategic priorities, risk appetite, and what success looks like at the executive level. This interview sets the ROI context.

Q1. "What are your top three business priorities for the next 12 months?"
Why ask this: Maps AI opportunities to stated business goals — not just IT priorities.

Q2. "Where do you feel your team spends too much time on the wrong things?"
Why ask this: Surfaces process inefficiency that AI can address. Look for repetitive, manual, or approval-heavy tasks.

Q3. "What would a 20% increase in team capacity enable you to do that you can't do today?"
Why ask this: Reframes AI from a cost to a capacity question. Gets the executive thinking about outcomes.

Q4. "What's your level of comfort with AI making recommendations vs AI taking actions autonomously?"
Why ask this: Establishes the human oversight posture — critical for PSF alignment and scoping.

Q5. "Have you had any security or data concerns that made you cautious about AI tools?"
Why ask this: Uncovers existing risk perception that will shape your recommendations.

Q6. "What would you need to see before committing to an AI rollout across the team?"
Why ask this: Identifies decision criteria for the proposal stage.
Delivery notes: Avoid technical detail. Focus on outcomes and business language. If the CEO mentions a competitor using AI, explore that — it's a useful anchor for urgency.

🔧 IT Manager / CTO (60 minutes)

Objective: Get the full technical picture: what's deployed, what's broken, what's been tried, and what constraints exist. This is where the discovery scripts' output gets validated and extended.

Q1. "Walk me through a typical day of support requests — what keeps coming up?"
Why ask this: Identifies automation candidates in the support queue.

Q2. "What Microsoft 365 workloads are you actively using vs licensed but unused?"
Why ask this: Licence spend optimisation and feature gap identification.

Q3. "Have you looked at Copilot for M365 or Copilot Studio yet? What stopped you?"
Why ask this: Understands adoption blockers — cost, governance, readiness, or awareness.

Q4. "What does your backup and data governance setup look like?"
Why ask this: Critical for AI safety. Data sprawl is the number one route to uncontrolled AI data access.

Q5. "How are conditional access and MFA configured? Any gaps you know about?"
Why ask this: Validates the script output. IT managers often know about gaps they haven't had budget to fix.

Q6. "Are there any integrations between M365 and business systems (CRM, ERP, finance)?"
Why ask this: Identifies Power Platform or Copilot connector opportunities.

Q7. "What would you want AI to handle that currently requires your team to intervene?"
Why ask this: Gets concrete automation use cases from the person who understands the daily grind.

Q8. "Have there been any failed IT projects or initiatives in the last 2–3 years?"
Why ask this: Risk awareness. If a previous automation project failed, understand why before proposing another.
Delivery notes: Bring the discovery-summary.html output and walk through RED findings together. The IT manager will confirm, extend, or dispute each one. Note anything that conflicts with the script output.

💰 Finance Lead (30 minutes)

Objective: Understand financial processes, approval workflows, and reporting cycles. Finance is typically one of the highest-ROI AI targets and one of the most risk-sensitive.

Q1. "How does invoice processing work today — from receipt to payment?"
Why ask this: Invoice approval is the single most common high-ROI AI automation target in SMBs.

Q2. "How much time does your team spend on data entry, reconciliation, or report generation each week?"
Why ask this: Direct ROI input. Get an estimate in hours per person per week.

Q3. "What financial systems do you use, and do they connect to M365?"
Why ask this: Identifies Power Platform or Copilot connector scope.

Q4. "How are budget approvals handled — email, a system, or in person?"
Why ask this: Approval chains are a common automation target with measurable speed improvements.

Q5. "What compliance or audit obligations affect how you store and process financial data?"
Why ask this: Constraints on AI access to financial data. Important for scoping.
Delivery notes: The finance team's time estimates are gold for the ROI model. Push for specific numbers: 'How many invoices do you process a week? How long does each one take?' That's your baseline.
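
To make that concrete, here is an illustrative baseline calculation. All figures are invented examples, including the loaded hourly cost; substitute the client's actual numbers from the interview.

```powershell
# Illustrative ROI baseline from interview estimates (example figures only).
$invoicesPerWeek   = 120
$minutesPerInvoice = 8
$loadedHourlyCost  = 45   # assumed fully loaded labour cost per hour

$hoursPerWeek = ($invoicesPerWeek * $minutesPerInvoice) / 60    # 16 h/week
$annualCost   = $hoursPerWeek * $loadedHourlyCost * 52          # 37,440 per year

"Current effort: $hoursPerWeek h/week; annual cost: $annualCost"
```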

⚙️ Operations / Business Manager (60 minutes)

Objective: Map the operational processes in detail. Operations is where most AI automation opportunities live — approvals, reporting, onboarding, scheduling, communication.

Q1. "Walk me through how a new client or project gets kicked off from start to finish."
Why ask this: End-to-end process mapping. Note every handoff, every email chain, every spreadsheet.

Q2. "What's the most frustrating process your team deals with every week?"
Why ask this: Frustration = inefficiency = automation opportunity.

Q3. "How do staff currently request things from IT, HR, or finance?"
Why ask this: Internal service requests are high-frequency, low-complexity — perfect for Power Automate.

Q4. "How are meetings followed up — who takes notes, who tracks action items?"
Why ask this: Teams + Copilot meeting summaries and action tracking is often an easy first win.

Q5. "Are there any reporting tasks that happen on a schedule — weekly reports, monthly summaries?"
Why ask this: Scheduled automations with Power Automate are fast to build and immediately visible.

Q6. "Where does information fall through the cracks in your team?"
Why ask this: Knowledge gaps and information handoff failures are where AI adds immediate, visible value.

Q7. "How does staff onboarding work, step by step?"
Why ask this: Onboarding workflows are a classic automation target — provisioning, comms, training, access.
Delivery notes: Use a whiteboard or document to sketch the process as they describe it. You are building the Current State workflow map that the client will see in their readout. The more concrete, the better the proposal.

👥 HR Lead (30 minutes)

Objective: Understand people processes, compliance obligations, and any sensitivity around AI and employment. HR needs careful framing — focus on time savings and consistency, not replacement.

Q1. "What does your onboarding process look like today, step by step?"
Why ask this: Onboarding is the highest-frequency HR workflow. Every new hire is a data point.

Q2. "How do staff handle leave requests, performance reviews, or policy acknowledgements?"
Why ask this: Repetitive HR admin is a strong automation target.

Q3. "What HR systems are you using, and do they integrate with Microsoft 365?"
Why ask this: Integration scope for automations. HRIS data feeds into many downstream processes.

Q4. "Are there any workforce planning or compliance reporting requirements?"
Why ask this: Automated reporting from HR data is often highly valued by management.

Q5. "What's the team's general attitude to AI tools? Any concerns you've heard?"
Why ask this: Change management context. If staff are anxious, your deployment plan needs an adoption layer.
Delivery notes: Frame AI as handling the paperwork so HR can focus on people. Never use language like 'automate your job'. Position every recommendation as giving the HR team their time back.

💬 End Users (2–3 from different departments, 20–30 minutes each)

Objective: Understand the real day-to-day experience. End users often know the most painful processes and the most useful potential wins — and they'll tell you things managers won't.

Q1. "What's the most annoying or repetitive task you do every day?"
Why ask this: Direct automation target identification. No filter, no politics.

Q2. "Have you ever tried any AI tools at work or at home?"
Why ask this: Adoption baseline. Heavy personal AI users are your pilot group.

Q3. "If you had a capable AI assistant for your role, what would you ask it to do first?"
Why ask this: Reveals unfiltered user intent. Often the most creative and accurate scoping input.

Q4. "What information do you spend the most time looking for?"
Why ask this: Knowledge retrieval is a top Copilot use case; listen for SharePoint and Teams search problems.

Q5. "What approval or sign-off processes slow your work down the most?"
Why ask this: Approval chain automation targets.
Delivery notes: These interviews are informal. Get them talking. Listen for phrases like 'I always have to', 'every time I need to', 'nobody knows where to find', 'I have to email three people'. Each one is a use case.

PSF domain scoring

Score each of the eight PSF domains RED / AMBER / GREEN using the criteria below. Use your judgement — these are guidelines, not algorithms. A domain can be AMBER overall even if one question scores RED.

Scoring principle: Grade on what is actually deployed and verifiable, not what is planned or intended. If a policy exists in draft but is not enforced, it does not move the score above RED. AMBER means partial implementation. GREEN means consistently applied and verifiable.

D1: Input Governance

Is there a data classification scheme? Do staff know what data can be given to AI tools?

Assessment questions
  • Are sensitivity labels configured in Microsoft Purview?
  • Is there a written AI Acceptable Use Policy?
  • Do staff know they should not paste personal or confidential data into AI tools?
  • Is there a data inventory that covers AI processing?

🟢 Green: Labels configured, AUP in place, staff aware, inventory maintained
🟡 Amber: Labels exist but not consistently applied, AUP exists but not enforced
🔴 Red: No labels, no AUP, staff unaware of data classification obligations
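
Where tenant access permits, the first D1 question can be verified directly rather than taken on interview trust. A minimal sketch using Security & Compliance PowerShell (connected via the ExchangeOnlineManagement module):

```powershell
# Sketch: list published sensitivity labels. Empty output is a strong
# signal that D1 scores RED on the first assessment question.
Connect-IPPSSession
Get-Label | Sort-Object Priority | Select-Object Priority, DisplayName
```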

D2: Output Validation

Do humans check AI outputs before acting on them? Are there review steps for high-stakes AI recommendations?

Assessment questions
  • Is there a policy requiring human review of AI outputs in critical workflows?
  • Do agents display citations or sources to allow output verification?
  • Are there automated workflows that act on AI outputs without human approval?

🟢 Green: Formal review process, citations enabled, approval gates in place
🟡 Amber: Informal review habits, no formal policy, some unreviewed actions
🔴 Red: No review process, automated actions without oversight

D3: Data Protection

Is data access scoped correctly? Are DLP policies applied? Are there data sharing risks?

Assessment questions
  • Are DLP policies active in Exchange, SharePoint, and Teams?
  • Is guest access restricted to business necessity?
  • Does SharePoint sharing allow 'Anyone with a link'?
  • Is personal data stored in systems the AI agent can access?

🟢 Green: DLP active, guest access minimal, sharing restricted, data scoped
🟡 Amber: Some DLP gaps, guest access broader than ideal, sharing partially open
🔴 Red: No DLP, 'Anyone' sharing enabled, AI can access unscoped personal data
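
Two of the D3 questions can be checked from the tenant itself. A hedged sketch; the admin URL is a placeholder, and 'ExternalUserAndGuestSharing' is the tenant value that corresponds to 'Anyone with a link' sharing:

```powershell
# Sketch: check DLP policy coverage and tenant-wide SharePoint sharing.
Connect-IPPSSession
Get-DlpCompliancePolicy | Select-Object Name, Mode   # look for policies in 'Enable' mode

# Replace contoso-admin with the client's actual SharePoint admin URL.
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'
Get-SPOTenant | Select-Object SharingCapability      # 'ExternalUserAndGuestSharing' = Anyone links
```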

D4: Observability

Can you see what the AI is doing? Are logs retained and monitored?

Assessment questions
  • Is audit logging enabled in the Microsoft Purview compliance portal?
  • Are Copilot interaction logs retained?
  • Are there alerts for anomalous AI behaviour?
  • Is there a process to review AI activity logs?

🟢 Green: Audit logging on, logs retained 90+ days, alerts configured, reviewed regularly
🟡 Amber: Logging enabled but not reviewed, alerts not configured
🔴 Red: Audit logging disabled or not configured
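
Audit logging status is one of the fastest checks in the whole engagement. A minimal sketch via Exchange Online PowerShell; the ingestion flag is a quick indicator, but the spot-check for actual records is the real evidence:

```powershell
# Sketch: confirm unified audit logging is enabled and actually ingesting.
Connect-ExchangeOnline
(Get-AdminAuditLogConfig).UnifiedAuditLogIngestionEnabled   # expect: True

# Spot-check for recent records; an enabled-but-empty log is itself a finding.
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -ResultSize 10 | Select-Object CreationDate, UserIds, Operations
```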

D5: Deployment Safety

Is there a phased rollout plan? Has the AI been tested against adversarial inputs?

Assessment questions
  • Was the AI deployment piloted before full rollout?
  • Is there a rollback procedure?
  • Has the agent been tested against edge cases and unusual inputs?

🟢 Green: Phased rollout, rollback documented, adversarial testing done
🟡 Amber: Some testing, no formal rollback procedure
🔴 Red: Deployed org-wide immediately, no testing, no rollback plan

D6: Human Oversight

Are there named owners for each AI agent? Is there an escalation path when AI fails?

Assessment questions
  • Is there a named responsible owner for each deployed AI agent?
  • Do users know who to contact when an AI tool behaves unexpectedly?
  • Are there approval workflows for AI actions above defined thresholds?
  • Is there an AI incident response procedure?

🟢 Green: Named owners, escalation path documented, approval gates in place, incident procedure exists
🟡 Amber: Informal ownership, no escalation path, approval gaps
🔴 Red: No ownership, no incident handling, AI operating without accountability

D7: Security

Is the security baseline sufficient for AI deployment?

Assessment questions
  • Is MFA enforced for all users?
  • Are Conditional Access policies in place?
  • Are privileged roles minimised?
  • Are third-party AI apps reviewed before approval?

🟢 Green: MFA enforced, CA policies active, minimal privileged roles, app governance in place
🟡 Amber: MFA not fully enforced, CA gaps, some over-privileged accounts
🔴 Red: MFA not enforced, no CA policies, excessive Global Admins
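
The Conditional Access and privileged-role questions can be sampled quickly with Microsoft Graph PowerShell. A sketch with read-only scopes (note Get-MgDirectoryRole returns only roles already activated in the tenant):

```powershell
# Sketch: count enabled Conditional Access policies and Global Administrators.
Connect-MgGraph -Scopes 'Policy.Read.All', 'RoleManagement.Read.Directory'

(Get-MgIdentityConditionalAccessPolicy |
    Where-Object { $_.State -eq 'enabled' }).Count          # zero here usually means RED

$ga = Get-MgDirectoryRole -Filter "displayName eq 'Global Administrator'"
(Get-MgDirectoryRoleMember -DirectoryRoleId $ga.Id).Count   # flag if more than a handful
```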

D8: Governance Structure

Is there an AI governance structure? Who is accountable for AI decisions at the organisational level?

Assessment questions
  • Is there an AI committee or designated AI lead?
  • Is there a process for approving new AI tools before deployment?
  • Are AI-related risks on the risk register?
  • Is AI on the board or leadership team agenda?

🟢 Green: Named AI lead, approval process, AI on risk register, leadership engaged
🟡 Amber: No formal structure, but informal oversight exists
🔴 Red: No governance, no approval process, AI decisions made ad hoc

Report structure

The assessment deliverable is a written report, not a slide deck. The deck (if used) comes from the report — not the other way around. Write the report first. Eight sections, twelve to eighteen pages. The Word template is in your toolkit.

1. Executive Summary (1–2 pages)
Three paragraphs: what you found, what it means, and what you recommend. Written for the CEO, not the IT manager. Lead with business impact, not technical findings. Include the overall RAG status and the number of critical findings.

2. Assessment Scope and Methodology (1 page)
Who was interviewed, what systems were reviewed, which methodology was used (PSF-aligned, PAI-8). Dates of on-site work. Confirm what was out of scope.

3. Current State Summary (2–3 pages)
High-level picture of the IT environment, AI tools in use, licence footprint, and operational context. A brief narrative for each department covered in interviews. Not findings — context.

4. Risk Register (2–4 pages)
Table of all findings by PSF domain. Each row: finding description, evidence, RAG status, recommended action, priority (P1/P2/P3). RED findings go first. Sort by priority within each colour.

5. Automation Opportunity Map (2–3 pages)
The three to five highest-ROI automation opportunities identified from interviews. Each one gets: process description, current effort (hours/week), estimated saving, recommended tool (Power Automate / Copilot / Copilot Studio), implementation complexity, and an ROI summary.

6. Recommended Roadmap (1–2 pages)
30 / 60 / 90-day plan. Month 1: resolve RED findings (security, governance). Month 2: build foundations (DLP, labels, Copilot pilot). Month 3: deliver first automation. Each milestone has a named deliverable.

7. Proposed Engagement Options (1–2 pages)
Three scoped options: Foundation (governance and security remediation), Standard (Foundation + Copilot deployment + one automation), Premium (full roadmap delivery). Each option has a price range, duration, and deliverable list. Let the client choose.

8. Appendices (as needed)
Full PSF domain scoring table. Licence inventory. Interview notes (sanitised). Technical discovery output summary. Glossary of terms.

Delivering the readout

The readout is the most important meeting of the engagement. Done well, it turns a report delivery into a proposal conversation. Here is the structure that works.

Open with their words (0:00–0:10)

Start by feeding back what the CEO told you in the executive interview. Repeat their priorities. This creates instant alignment — you are solving their problem, not pitching your service.

Risk register walkthrough (0:10–0:25)

Go through RED findings first. Be direct — 'This is a critical gap. Here is why it matters for AI deployment.' Don't soften. Clients pay for an expert opinion, not reassurance.

Automation opportunities (0:25–0:40)

Present the top three automation wins. For each one: current effort (hours/week), proposed solution, estimated saving, and time to implement. This is where the ROI model comes in.

Roadmap presentation (0:40–0:50)

Show the 30/60/90-day roadmap. Keep it concrete. Month 1 = security remediation. Month 2 = Copilot pilot. Month 3 = first automation live. Clients want a sequence, not a plan.

Engagement options (0:50–0:55)

Present three options. Always three. Option 1 = foundation only. Option 2 = foundation + deployment. Option 3 = full roadmap. The middle option is usually chosen. Let them decide.

Next steps only (0:55–1:00)

Close with: 'The next step is X. Can we lock that in before we leave today?' Get a commitment — even 'send me the SOW' — before the meeting ends. Follow up within 24 hours.

Scripts, guide, and ROI model.

The three pieces of a complete AI Readiness Assessment: the discovery scripts to get the data, this guide to run the engagement, and the ROI calculator to make the business case.
