Technology
29.5.2026
3
min reading time

Artificial Intelligence After the Anthropic Myth: OpenAI Announces GPT-5.4 Cyber

Just one week after Anthropic triggered alarm bells across Washington and Silicon Valley with its Mythos model, OpenAI has answered with GPT‑5.4‑Cyber — a defensive cybersecurity model whose most important feature is not what it can do, but who is allowed to use it.

OpenAI is rolling out GPT‑5.4‑Cyber through its Trusted Access for Cyber program, first introduced in February. Access is restricted to identity‑verified individuals and approved teams responsible for defending critical software. In the highest access tiers, users receive a cyber‑permissive version of the model with fewer safety restrictions and advanced capabilities, including binary reverse engineering — the ability to analyze compiled software without source code.

This is not just a product announcement. It is a clear signal that the AI industry has crossed a strategic threshold.

Anthropic made the first move with Mythos Preview and Project Glasswing, arguing that the model’s ability to autonomously identify and chain zero‑day vulnerabilities at scale made public release too dangerous. Access was limited to a hand‑picked group of major technology companies, infrastructure providers, and financial institutions, with the explicit goal of fixing vulnerabilities before equivalent capabilities fall into hostile hands.

OpenAI is now adopting the same logic — but at a larger scale.

Rather than relying on hardcoded refusals inside the model, OpenAI is shifting toward identity‑based control. GPT‑5.4‑Cyber is intentionally more permissive than standard GPT‑5.4, but only for users who have passed verification. Instead of limiting what the model can explain, OpenAI is limiting who can access those explanations.
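The shift from model-side refusals to identity-based control can be pictured as a tier check performed before a request ever reaches the model. The sketch below is purely illustrative: the tier names, capability labels, and `can_use` function are hypothetical assumptions, not OpenAI's actual implementation of the Trusted Access program.

```python
# Hypothetical sketch of identity-based capability gating.
# All names here (AccessTier, CAPABILITY_TIERS, can_use) are illustrative
# assumptions, not part of any real OpenAI API.
from enum import IntEnum

class AccessTier(IntEnum):
    PUBLIC = 0        # standard model, standard refusals
    VERIFIED = 1      # identity-verified individual defender
    TRUSTED_TEAM = 2  # approved team defending critical software

# Minimum tier required to unlock each capability (illustrative mapping).
CAPABILITY_TIERS = {
    "general_security_qa": AccessTier.PUBLIC,
    "exploit_analysis": AccessTier.VERIFIED,
    "binary_reverse_engineering": AccessTier.TRUSTED_TEAM,
}

def can_use(user_tier: AccessTier, capability: str) -> bool:
    """Gate a request on the caller's verified tier, not on model refusals."""
    required = CAPABILITY_TIERS.get(capability)
    return required is not None and user_tier >= required

# An unverified user is blocked before the model is involved...
assert not can_use(AccessTier.PUBLIC, "binary_reverse_engineering")
# ...while an approved team passes the same check.
assert can_use(AccessTier.TRUSTED_TEAM, "binary_reverse_engineering")
```

The point of the design is that the gate sits outside the model: the model itself can stay permissive, because the decision about who sees its answers has already been made at verification time.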

This change matters because cybersecurity is inherently dual‑use. The same tools that help defenders can help attackers. Traditional safety approaches assumed that restricting models would harm attackers and defenders equally. In practice, attackers already use local, modified, or purpose‑built tools. Refusals mostly slow down those trying to defend systems responsibly.

Binary reverse engineering illustrates the point. In real‑world security work, defenders almost never receive clean source code. They receive firmware images, executable binaries, and suspicious payloads. Until now, AI systems routinely refused to engage with such material. GPT‑5.4‑Cyber is designed to remove that barrier for verified defenders.

OpenAI positions this release as preparation for what comes next. The company openly states that future models are expected to reach high levels of cybersecurity capability and will require even stronger safeguards. Codex Security, a companion security agent already in testing, is presented as proof of intent: OpenAI says the agent has contributed to fixing more than 3,000 high-severity vulnerabilities since launching as a research preview.

The broader implication is uncomfortable but unavoidable: access to powerful vulnerability‑discovery AI is becoming a matter of governance, not openness.

Anthropic’s claims that Mythos has uncovered thousands of serious zero‑day flaws in operating systems, browsers, and critical infrastructure prompted briefings with senior US officials and central banks. Even as some analysts question how much of that impact is publicly verifiable today, the underlying direction is clear. The ability to discover vulnerabilities faster than they can be patched changes the balance of cyber risk across finance, healthcare, energy, and national infrastructure.

OpenAI’s response shows that major labs now agree on one thing: frontier cyber capability cannot be treated like consumer AI.

The next phase of cybersecurity will not be defined by who has the best model. It will be defined by who controls access, how trust is verified, and whether defensive use can scale faster than offensive misuse.

In that sense, GPT‑5.4‑Cyber is not just an AI model. It is a blueprint for how dangerous capabilities enter the world.

And once access becomes the battleground, cybersecurity stops being a technical problem alone. It becomes an institutional one.
