OpenAI Lowered the Bar for Cybersecurity Experts — But Who Decides Who's Verified?

GPT-5.4-Cyber is not a new product, but the same model with loosened restrictions for select users. The question is not what AI can do, but who gets access to it and how.

Illustrative photo: Depositphotos

A week after Anthropic released Claude Mythos for the cybersecurity market, OpenAI responded with its own version — GPT-5.4-Cyber. But the technical innovation here is less interesting than what the company did with its limitations.

The Same Model, Different Rules

GPT-5.4-Cyber is not a separate architecture, but a modified version of GPT-5.4 with lowered refusal thresholds for legitimate cyber tasks. OpenAI calls this approach "cyber-permissive": the model will answer requests that the standard version would reject as potentially dangerous, such as vulnerability analysis, binary code reverse engineering, and malware research.

The new capabilities notably include binary reverse engineering: analysts can search for malicious code in programs and applications without manual disassembly. As SiliconAngle notes, the score on CTF benchmarks (capture-the-flag, a standard format of cybersecurity competition) has grown from 27% for GPT-5 in August 2025 to 76% for the current generation of models. These are not abstract statistics: CTF scenarios simulate real attacks.

Access Through Verification Levels

The model will not be available to the general public. OpenAI is expanding the Trusted Access for Cyber (TAC) program — a multi-level verification system where higher access levels unlock more powerful capabilities. The highest level provides access to GPT-5.4-Cyber with minimal restrictions.

Verification includes KYC-checks (Know Your Customer) and automated identity verification. Access will be granted to verified organizations, security solution providers, and researchers. In parallel, OpenAI maintains additional protective mechanisms: real-time query monitoring and asynchronous blocking for clients on Zero Data Retention surfaces.

"Since cybersecurity capabilities are inherently dual-use, we maintain a cautious approach to deployment"

— OpenAI, GPT-5.4 system card

Dual Use as Built-In Risk

This is where the real conflict lies, not in marketing narrative. A model trained to find vulnerabilities and understand attack logic is by definition useful for both defense and attack. OpenAI acknowledges this openly: in the GPT-5.4 documentation, it is classified as a model with "High cyber capability" according to its internal Preparedness Framework system.

  • Binary code reverse engineering — finds vulnerabilities and helps exploit them
  • Malware analysis — teaches understanding of attacks, but also reproduces their logic
  • Vulnerability research — standard practice for penetration testers and, simultaneously, malicious actors

Anthropic faced the same issue with Claude Mythos. Both companies are betting on access verification as the primary protective mechanism — but KYC-checks do not guarantee that a verified organization will not abuse capabilities or become a victim of credential leaks.

What's Next

OpenAI explicitly states that GPT-5.4-Cyber is preparation for "more powerful models coming this year." That is, the capability bar will continue to rise, while the verification system remains the same.

If no documented instances of abuse surface publicly over the next six months, it will mean either that the TAC system truly works, or that we simply won't hear about them.
