AI Readiness Checklist
Before your organisation deploys AI tools at scale, work through this checklist. Each unticked item is a gap that creates real risk if left unaddressed.

How to Use This Checklist
Work through each section with your IT provider or security partner. A tick means the item is in place and verified, not just assumed. Items left blank are gaps. Prioritise the gaps in Section 1 and Section 2 before expanding AI tool use further.
1. Data Governance Foundations
These need to be in place before AI touches your data.
☐ You have a current data register that identifies what sensitive data you hold and where it lives
☐ Data is classified by sensitivity (e.g. public, internal, confidential, restricted)
☐ Sensitivity labels are applied to documents and emails in Microsoft 365 (if applicable)
☐ You know which staff have access to which data and why
☐ Access permissions have been reviewed within the last 12 months
☐ There is a documented process for offboarding staff that includes revoking data access
☐ Your Privacy Act obligations are understood and documented
☐ You have a data breach response plan
2. Identity and Access Security
AI tools inherit the permissions of the users running them. Weak identity security becomes an AI security problem.
☐ Multi-factor authentication is enforced on all accounts, not just offered
☐ Phishing-resistant MFA (hardware keys or certificate-based) is in use for privileged accounts
☐ Legacy authentication protocols are blocked in Microsoft 365
☐ Conditional access policies restrict sign-in to managed, compliant devices
☐ Admin accounts are separate from day-to-day user accounts
☐ Privileged access is reviewed and validated regularly
☐ Single sign-on is used where available to centralise authentication control
☐ Shared or generic accounts have been eliminated or formally justified
3. AI Tool Inventory and Policy
☐ You have a complete list of AI tools currently in use across the organisation, including browser extensions and personal accounts used for work
☐ There is a written policy that defines approved AI tools and acceptable use
☐ Staff have been briefed on what data cannot be processed through public AI tools
☐ There is a process for staff to request approval of new AI tools before using them
☐ Your AI policy has been reviewed by legal or compliance in the context of your professional obligations
☐ Vendor agreements for AI tools have been reviewed for data handling and training clauses
☐ You know whether your AI vendor uses your data to train its models
4. Microsoft Copilot Specific
If applicable to your environment.
☐ SharePoint and OneDrive permissions have been audited and tightened before Copilot deployment
☐ Sensitivity labels are applied to content Copilot can access
☐ You understand what Copilot can surface on behalf of each user role
☐ Copilot interactions are logged and reviewable
☐ Staff have been briefed on Copilot's data scope and appropriate use
☐ A Copilot usage policy is in place
5. Output Quality and Accountability
☐ Staff understand that AI outputs require review before being sent to clients or acted on
☐ There is a defined review requirement for AI-assisted work based on consequence
☐ Staff know they remain professionally accountable for AI-assisted work they sign off on
☐ There is a process for reporting AI errors or unexpected outputs
☐ AI-generated content used in client deliverables is disclosed where required by professional standards
6. Vendor and Third-Party Risk
☐ You have reviewed the data handling terms for every AI tool in use
☐ You know where your data is stored geographically for each tool
☐ You know whether your AI vendors are subject to foreign government access requests that could affect your data
☐ AI vendors with access to sensitive data have been assessed against your supplier risk framework
☐ You have a process for reviewing vendor AI terms when they are updated
7. Incident and Compliance Readiness
☐ You have defined what constitutes an AI-related security incident
☐ Reporting obligations for AI incidents are understood (Privacy Act notifiable data breaches, professional body obligations)
☐ Your cyber liability insurance covers AI-related incidents (confirm with your broker)
☐ You are monitoring regulatory developments on AI governance in your sector
☐ Someone in the organisation has clear ownership of AI governance
Scoring Your Results
Count your ticks across all 44 items, then read the band that matches.
0 to 10 ticked: AI deployment is running ahead of governance. Stop and address Section 1 and Section 2 before expanding use.
11 to 25 ticked: Foundational gaps exist. Prioritise the blanks in Sections 1 through 3 and engage your security provider on a remediation plan.
26 to 40 ticked: Reasonable baseline. Focus on the remaining gaps and establish a review cycle.
41 or more ticked: Strong foundation. Maintain through regular review and stay current on regulatory developments.

Need help getting AI ready?
Stealth Cyber helps Australian professional services firms build AI governance frameworks that are practical, not just compliant. Get in touch for a straight conversation about your AI readiness.
Email: contact@stealthcyber.io
Phone: AU +61 7 5230 8381 | US +1 (855) 774-2595
Offices: Gold Coast, Australia | São Paulo, Brazil | Texas, United States
© 2026 Stealth Cyber Pty Ltd. ABN 72 675 840 632. All rights reserved.