AI Security Cheat Sheet
What every Australian business needs to know about securing AI tools in the workplace.

The Core Problem
AI tools move faster than security policies. Most organisations are using AI in some capacity before anyone has defined the rules around it. This cheat sheet covers the essentials so your team is not making security decisions by default.
What to Lock Down First
Data you should never put into a public AI tool
- Client personal information (names, TFNs, contact details)
- Financial records, bank account details, transaction data
- Legally privileged communications
- Staff payroll and HR records
- Commercially sensitive contracts or strategy documents
- Login credentials, API keys, or any authentication material
If it would cause a problem under the Privacy Act or your professional obligations if it leaked, it does not go into a public AI tool.
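A quick automated screen can catch some of this before it is pasted anywhere. The sketch below is illustrative only, not a substitute for a proper data loss prevention tool, and the regex patterns (TFN-like digit groups, BSB codes, long token strings) are rough assumptions that will produce false positives and misses:

```python
import re

# Illustrative patterns only -- NOT a substitute for a proper DLP tool.
PATTERNS = {
    # Australian TFNs are 8-9 digits, often written in groups of three.
    "possible TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{2,3}\b"),
    # BSB codes are written as two groups of three digits.
    "possible BSB": re.compile(r"\b\d{3}-\d{3}\b"),
    # Long random-looking tokens often indicate API keys or credentials.
    "possible API key": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_sensitive("Client TFN is 123 456 789"))  # ['possible TFN']
print(flag_sensitive("Meeting notes for Tuesday"))  # []
```

Even a crude check like this, wired into an internal tool or browser extension, shifts the decision from "did the staff member remember the rule" to "did something prompt them before the data left".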
The difference between safe and unsafe AI tools
| Safe (generally) | Use with caution |
|---|---|
| Microsoft Copilot within your M365 tenancy | ChatGPT free/personal accounts |
| Azure OpenAI with your own data controls | Google Gemini personal accounts |
| Copilot for Microsoft 365 (licensed, configured) | Browser-based AI tools with no enterprise agreement |
| Approved internal AI deployments | Free tiers of any AI platform |
The key distinction: does your data stay within your environment, or does it leave for a third-party server? Enterprise licensing agreements typically include data protection commitments. Free personal accounts typically do not.
Five Rules for Staff Using AI Tools
1. Only use tools the business has approved. If it is not on the approved list, raise it before using it for work tasks.
2. Never paste client data into a public AI tool. Not even to "just check something quickly."
3. Treat AI outputs as a first draft, not a final answer. Review everything before it goes to a client or gets acted on.
4. Report anything that looks wrong. If an AI tool behaves unexpectedly, accesses something it should not, or produces output that references information it should not have, report it.
5. Your professional obligations apply to AI-assisted work. If you sign off on it, you are accountable for it, regardless of how it was produced.
Microsoft 365 and Copilot Risks
Copilot for Microsoft 365 draws from everything the user can access. If your permissions model is loose, Copilot will surface files and data beyond what any individual user should be seeing.
Before deploying Copilot:
- Review and tighten SharePoint permissions
- Apply Microsoft Purview sensitivity labels to confidential content
- Audit who has access to what across your tenancy
- Understand what Copilot can see on behalf of each user role
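The audit step above often starts from a permissions export. As a minimal sketch, the snippet below scans an exported report for grants to broad groups; the CSV column names ("Site", "GrantedTo", "Permission") and the group names are assumptions, so adjust them to whatever your tenancy export or reporting tool actually produces:

```python
import csv
import io

# Broad groups that usually should not hold access to sensitive sites.
# These names are assumptions -- match them to your actual directory groups.
BROAD_GRANTS = {"Everyone", "Everyone except external users", "All Users"}

def find_broad_access(report_csv: str) -> list[tuple[str, str]]:
    """Return (site, grantee) pairs where access is granted too broadly."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [(row["Site"], row["GrantedTo"])
            for row in reader if row["GrantedTo"] in BROAD_GRANTS]

sample = """Site,GrantedTo,Permission
/sites/finance,Everyone,Read
/sites/hr,HR Team,Edit
"""
print(find_broad_access(sample))  # [('/sites/finance', 'Everyone')]
```

Any site that shows up here is a site Copilot can surface to every user in that group, which is exactly the exposure the permissions review is meant to catch.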
Deploying Copilot without a permissions review is a reliable way to expose sensitive information to people who should not have it.
Shadow AI
Shadow AI is any AI tool being used by staff that the business has not formally sanctioned. It is the workplace equivalent of staff using personal Dropbox accounts to share client files.
Most organisations have more shadow AI use than they realise. Staff find tools useful and start using them. Nobody asked for permission because nobody thought to ask. The data leaves the environment and nobody knows.
How to find it:
Ask your IT or security provider to audit browser extensions, SaaS application usage, and network traffic for known AI service domains. What you find will likely be more than you expected.
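The network-traffic part of that audit can be sketched as a log match against known AI service domains. The domain list below is a small illustrative sample, not an exhaustive inventory, and the assumed log format ("timestamp user domain") is a placeholder for whatever your proxy or DNS logs actually look like:

```python
# A small sample of AI service domains -- extend with a maintained list.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "perplexity.ai"}

def shadow_ai_hits(log_lines):
    """Yield (user, domain) for each log line touching a known AI domain."""
    for line in log_lines:
        # Assumed log format: "timestamp user domain" -- adapt to your logs.
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            yield parts[1], parts[2]

logs = ["2026-01-05T09:12 jsmith chatgpt.com",
        "2026-01-05T09:13 mlee intranet.local"]
print(list(shadow_ai_hits(logs)))  # [('jsmith', 'chatgpt.com')]
```

The point of the exercise is not to catch individuals out; it is to get an honest picture of which tools are already in use so the approved list reflects reality.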
AI Incident Types to Report
- Accidental submission of confidential data to a public AI tool
- AI output that references client or business information it should not have access to
- An AI tool behaving unexpectedly or requesting unusual permissions
- Staff using an unsanctioned AI tool with work-related data
- A vendor disclosing that their AI features have been updated to include data sharing
Three Things to Do This Week
Audit what AI tools your staff are currently using. Ask directly and check with your IT provider.
Establish a simple approved tools list. Even an email to staff is better than no policy.
Brief your team on what data cannot go into public AI tools. One conversation prevents most incidents.

Need help securing your AI tools?
Stealth Cyber provides managed cybersecurity and AI governance support for Australian professional services firms. Get in touch for a straight conversation about your AI security posture.
Website
stealthcyber.io
Email
contact@stealthcyber.io
Phone
AU: +61 7 5230 8381
US: +1 (855) 774-2595
Offices
Gold Coast, Australia
São Paulo, Brazil
Texas, United States
© 2026 Stealth Cyber Pty Ltd. ABN 72 675 840 632. All rights reserved.