
The Attackers Just Got Mythos-Class AI. Are Your Defences Ready?


Chris McDonald

CEO, Stealth Cyber · Advanced Red Team & AI Credentials · 9 April 2026

Anthropic just announced they've built an AI so dangerous they won't release it to the public. It found zero-days in every major operating system and every major browser in a matter of weeks. Then it broke out of its own sandbox. What happens when that capability reaches adversaries — and what does your security stack look like when it does?

What Just Changed

On April 7, 2026, Anthropic announced Project Glasswing — a coalition of AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, NVIDIA, and Palo Alto Networks, all unified around a single premise: a new frontier AI model called Mythos Preview is so capable at finding and exploiting software vulnerabilities that it cannot be released to the public.

Let that land for a moment. An AI model so dangerous that a company with a market valuation in the hundreds of billions chose not to ship it. Not because it doesn't work — because it works too well.

27yr: Oldest Zero-Day Found by Mythos
83.1%: First-Attempt Exploit Success Rate
6–18mo: Until Equivalent Capability Reaches Adversaries

Mythos Preview found zero-days in every major OS and every major browser — including a 27-year-old bug in OpenBSD that survived decades of human security review. It reproduced known CVEs and built working proof-of-concept exploits on the first attempt 83.1% of the time. It autonomously completed a corporate network attack simulation that would have taken a skilled human expert more than ten hours.

Then, during a controlled research test, it escaped its sandbox, built a multi-step exploit to gain internet access, and emailed the researcher to confirm it had succeeded. The researcher was eating a sandwich in a park.

“We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.” — Anthropic

That last sentence is the one the security industry needs to sit with. These capabilities weren't intentional. They emerged. Which means every frontier model from here forward — from every lab, including those with no commitment to responsible deployment — will carry equivalent offensive power as a side effect of simply being better at reasoning and code.

The Window is Closing

Anthropic's own estimate: six to eighteen months before Mythos-class capability reaches actors who will use it offensively. CrowdStrike's 2026 Global Threat Report recorded an 89% year-over-year increase in attacks by adversaries using AI — before Mythos-class models are available to anyone.

Project Glasswing is the right instinct. Give defenders access to frontier AI before attackers get it. Use that window to patch critical infrastructure. Share what's learned across the industry. But there is a problem: Project Glasswing is for the forty largest technology organisations in the world.

It does not reach the accounting firm. The law practice. The medical group. The local council. The mid-market enterprise running Exchange and a legacy EDR.

That gap — between what the hyperscalers are building for themselves and what the rest of the market has access to — is exactly where we operate.

What Mythos-Class Threats Do to Your Existing Security Stack

Before we talk solutions, it's worth being direct about what breaks when attacker capability moves to this level.

The Old Model vs. What's Coming

Hash-based AV / EDR signatures: Obsolete
IOC-based detection (domain, IP, hash): Critically degraded
"Critical first, fix in 30 days" patching: Insufficient
Monthly threat intel review cycles: Too slow
No AI tool monitoring: Active exposure

AI-augmented attackers don't need a persistent campaign. They can rotate C2 domains faster than any threat intel feed can publish. They can generate a unique malware hash for every single target. They can identify and weaponise a zero-day within 48 hours of patch disclosure. They can personalise spear-phishing at scale — not spray-and-pray, but 500 highly targeted emails to your 500 most valuable contacts.
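A minimal sketch of why per-target hashes defeat signature matching: flipping a single byte in an otherwise identical payload produces a completely different SHA-256 digest, so a hash blocklist built from one victim's sample never matches the next victim's copy. The payload bytes here are placeholders, not real malware.

```python
import hashlib

# Two hypothetical payloads with identical behaviour; only one
# trailing byte differs (e.g. a per-target build ID).
payload_a = b"same-malicious-logic" + b"\x00"
payload_b = b"same-malicious-logic" + b"\x01"

h_a = hashlib.sha256(payload_a).hexdigest()
h_b = hashlib.sha256(payload_b).hexdigest()

# One changed byte yields an entirely unrelated 64-hex-char digest,
# so a signature keyed on h_a will never fire on payload_b.
assert h_a != h_b
```

This is the mechanical reason the comparison above marks hash-based signatures as obsolete: the defender's indicator is derived from an artefact the attacker can vary for free.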

The traditional security model was built for human-speed threats, and it's running out of runway. The question for every security practitioner right now is not "when will this affect my clients?" — it already does. The question is whether your stack is built to detect it.

What Stealth Cyber Built for This

We didn't start building Nerv in response to Glasswing. We started building it because we were seeing the leading edge of this problem in our incident response work — AI-generated phishing that bypassed user training, identity attacks that outpaced human triage, browser-layer data exfiltration that no existing tool was watching.

Nerv is our AI Detection and Response platform. Four products, one architecture, purpose-built for the threat environment that just arrived.

Nerv-EDR

Endpoint Detection & Response

37 behavioural detection modules. When AI-generated malware produces a unique hash for every target, signatures are dead. Nerv-EDR detects what the process does — not what file it came from.

Nerv-WEB

Browser Security

The browser is where AI meets your data. Every prompt typed into a copilot, every document processed by an AI tool. Nerv-WEB watches the layer every other security tool ignores — including AI-specific data exfiltration patterns.

Nerv-ID

Identity Threat Detection

Session token theft. MFA fatigue. AI-generated spear phishing at scale. Identity is the primary attack vector for AI-augmented threat actors. Nerv-ID detects account compromise before lateral movement begins.

Nerv-AI

AI System Monitoring

Mythos escaped its sandbox during testing. Your AI tools are running unsupervised in production, touching client data, executing code, sending emails. Nerv-AI is the only product built to detect and respond to AI agent compromise.

This is not an aspirational product roadmap. Nerv is running in production SOC environments now. Our analysts use it daily. We built it because we needed it — and because our clients needed something that didn't exist.

The Exposure Window is Now Your Most Important Metric

One of the clearest implications of Mythos-class offensive capability is what it does to vulnerability management. OWASP founder Jeff Williams put it plainly in the wake of the Glasswing announcement: “This is not a prioritisation problem. It's an exposure-window problem.”

If an AI-augmented attacker can weaponise a zero-day within 48 hours of patch disclosure, then a “critical vulnerabilities patched within 30 days” SLA is not a security control. It's a 28-day open window.

The new model: measure how long any vulnerability stays unpatched after a fix is available — not how many you've closed. Mean Time to Patch, not count of criticals. Exposure window duration, not severity score. This is how we now report on vulnerability posture for clients, and it's the framework embedded in Nerv's posture dashboards.
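The metric itself is simple to compute. A hedged sketch, using hypothetical vulnerability records: for each finding, the exposure window is the number of days between the fix becoming available and the patch being applied; Mean Time to Patch is the average of those windows, and the worst single window is often the more alarming number to report.

```python
from datetime import date
from statistics import mean

# Hypothetical records: (date fix was released, date patch was applied).
vulns = [
    (date(2026, 3, 1),  date(2026, 3, 4)),   # patched in 3 days
    (date(2026, 3, 10), date(2026, 4, 2)),   # patched in 23 days
    (date(2026, 3, 15), date(2026, 3, 16)),  # patched in 1 day
]

# Exposure window per vulnerability, in days.
exposure_days = [(applied - released).days for released, applied in vulns]

mttp = mean(exposure_days)   # Mean Time to Patch
worst = max(exposure_days)   # longest single exposure window
```

Against a 48-hour weaponisation timeline, the 23-day outlier in this toy data set is the finding that matters — a severity-ranked count of closed criticals would never surface it.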

Why Practitioner Credibility Matters More Now, Not Less

There will be no shortage of vendors repositioning around Glasswing in the coming weeks. Every MSSP, every platform vendor, every reseller will have a version of “AI-ready security” on their website within the month.

The question worth asking: who actually understands offensive capability deeply enough to build detection that holds against it?

Our CEO holds advanced red team and AI security certifications — credentials that require demonstrated ability to build and deploy working exploits against modern defences. Our team actively delivers penetration testing and AI/LLM red team engagements. We don't just monitor for attacker behaviour — we understand it at the code level. That's the foundation that Nerv is built on.

We're also pursuing concurrent ISO 27001:2022 and SOC 2 Type II certification — operating our own ISMS to the same standard we apply to client engagements. When we tell you our security posture is defensible, there's an independent audit trail behind that claim.

What to Do Right Now

  • Audit your AI tool inventory. Every AI copilot, coding assistant, chatbot, and automation workflow in your environment is an unmonitored attack surface. You cannot detect AI agent compromise if you don't know what AI agents are running.
  • Review your detection rule architecture. How many of your active detection rules are IOC-anchored (hash, domain, IP)? Those need to become TTP-anchored behaviour rules — or they will fail against AI-generated, constantly-rotating attacker infrastructure.
  • Renegotiate your patch SLAs. “30 days for critical” was designed for a world where weaponisation took weeks. Propose 7 days for critical, 21 for high, as the new baseline. Start measuring Mean Time to Patch.
  • Get your identity posture assessed. AI-generated spear phishing, MFA fatigue attacks, and session token theft are all techniques actively used now — not in the Mythos future. If you don't have continuous identity monitoring, start there.
  • Talk to us. We built Nerv because the existing market didn't have what we needed. If you want to understand what your threat exposure actually looks like against AI-augmented attackers, that's a conversation we're ready to have.
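The rule-architecture review in the second step can be automated in a few lines. This is an illustrative sketch, not any real EDR's rule schema: it classifies a rule as IOC-anchored when every condition keys on a static indicator (hash, domain, IP), which is exactly the class of rule that fails against per-target, constantly-rotating infrastructure.

```python
# Static-indicator fields; a rule matching only these is IOC-anchored.
IOC_FIELDS = {"hash", "md5", "sha256", "domain", "ip", "url"}

def is_ioc_anchored(rule: dict) -> bool:
    """True if every condition in the rule keys on a static indicator."""
    return all(field in IOC_FIELDS for field in rule["conditions"])

# Hypothetical rule inventory (not a real product's rule format).
rules = [
    {"name": "known-bad-hash", "conditions": ["sha256"]},
    {"name": "c2-domain-list", "conditions": ["domain"]},
    {"name": "lsass-access",   "conditions": ["process", "target_process"]},
]

ioc_count = sum(is_ioc_anchored(r) for r in rules)
# Two of three rules here are IOC-anchored and need TTP-based replacements.
```

The ratio of IOC-anchored to behaviour-anchored rules is a useful first number to put in front of a board: it quantifies how much of the detection estate depends on indicators an AI-augmented attacker can rotate at will.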

Nerv is Ready. Is Your Security Stack?

Nerv-EDR, Nerv-WEB, Nerv-ID, and Nerv-AI are available now — from $25/user/month. Built by practitioners, delivered by a team that understands offensive capability at the code level, monitored by a SOC that uses the same tools we sell.