Andrew Donato
Episode 3 · Monday, April 13, 2026
What I'm Watching

Mythos and Project Glasswing for dummies

Anthropic built an AI that found thousands of security holes in every major operating system. Then they decided not to release it. Here's what you need to know.

Watch on LinkedIn →

TL;DR

Anthropic announced its most powerful AI model ever — Claude Mythos — and instead of releasing it to the public, launched Project Glasswing: giving it to Microsoft, Apple, Amazon, and others to fix their software before attackers build something similar. The Federal Reserve Chairman and Treasury Secretary met with bank CEOs about the implications. This is AI being used as a shield, not a weapon.

The Full Take

Anthropic — the company behind Claude — just did something nobody in AI has done before. They built a model so capable at one specific thing that they decided not to release it publicly.

The model is called Mythos. What makes it different isn't that it's smarter at writing emails or generating code. It's that it's extraordinarily good at finding software vulnerabilities. We're talking thousands of zero-day flaws discovered across every major operating system, every major web browser, and a range of critical software. It autonomously found and exploited a security flaw in FreeBSD that had sat undetected for 17 years, with no human involvement after hitting "go."

Let that sink in. A 17-year-old vulnerability. Found by an AI. Automatically.

Instead of releasing Mythos to the public — which would mean anyone, including bad actors, could use it to find and exploit the same vulnerabilities — Anthropic launched something called Project Glasswing. The idea is straightforward: give the model to the companies that build the software everyone uses every day and say "fix your stuff before someone else builds something like this."

The partner list reads like a who's who of tech: Microsoft, Apple, Amazon, Google, CrowdStrike, Cisco, NVIDIA, JPMorganChase, the Linux Foundation, Palo Alto Networks. More than 50 organizations are getting access. Anthropic also committed $100 million in usage credits and $4 million in direct donations to open-source security organizations.

This matters even if you're not in tech. The software on your phone, your laptop, your bank's website — it all has undiscovered holes in it. Now there's an AI that can find them in hours instead of years. The question is whether the defenders can patch faster than attackers can build their own version.

The Federal Reserve Chairman and Treasury Secretary met with major bank CEOs to discuss the cyber implications. That's how serious this is.

This is the first real example of a major AI company choosing restraint over release. Whether you think it's the right call or an overreaction, the precedent matters. We're entering an era where the question isn't just "can we build this?" but "should we release this?"


This is the first time a major AI company said 'this is too powerful to release.' What do you think — right call or overreaction?