In AI right now, who are the good guys? I'm not sure anyone knows.

Paul Oliver, PMP, ITIL
An AI model capable of compromising global infrastructure sat inside one company for weeks before anyone outside knew it existed.
That's not a security story. That's a governance story.
In a piece published yesterday in The Atlantic, staff writer Matteo Wong reported on Anthropic's Claude Mythos Preview, a model that identified thousands of serious cybersecurity vulnerabilities, including exploits in every major operating system and browser. Among them: a nearly 30-year-old flaw in one of the world's most secure operating systems. Found by a bot.
Anthropic is not releasing it publicly. For now, access is limited to Apple, Microsoft, Google, and Nvidia — to scan and secure their own software.
That decision deserves credit. But it also raises the question no one in the boardroom is asking: who decides?
One company's internal judgment call just determined the cybersecurity risk posture of every organization on earth. No regulatory body. No oversight committee. No vote.
Here's what this means for your organization right now:
Your CISO is budgeting against yesterday's threat model. Your board is approving security investments based on a risk landscape that changed this week. And the adversaries who want access to this capability — state-sponsored or otherwise — are not waiting for a governance framework to catch up.
Wong asks whether Anthropic is the good guy here. I'll go further: in AI right now, who exactly are the good guys? The company that builds the weapon and decides not to pull the trigger? The consortium of Big Tech partners given exclusive access? The governments that are simultaneously contracting with these companies and supposed to be regulating them?
I don't have a clean answer. But I know this: "trust us" is not a security strategy.
The organizations that will navigate this well aren't the ones with the biggest security budgets. They're the ones with leadership that understands the threat has fundamentally changed — and governance structures that can actually respond.
What's your board's answer when someone asks: What happens to our security posture when AI can find every vulnerability we have?