OpenAI has launched GPT-5.4-Cyber for approved security teams, expanding its Trusted Access program.
      In summary: OpenAI is launching GPT-5.4-Cyber, a model fine-tuned specifically for defensive cybersecurity, featuring reduced refusal thresholds and binary reverse engineering capabilities. The release is part of an expansion of its Trusted Access for Cyber program, which opens access to thousands of verified defenders. It follows Anthropic’s decision to limit its more advanced Mythos model to just 11 organizations, highlighting a philosophical divergence: OpenAI favors broader verified access, while Anthropic has chosen a far more restricted approach.

      OpenAI is making its most advanced cybersecurity model available to thousands of vetted defenders by introducing GPT-5.4-Cyber and enhancing its Trusted Access for Cyber program in direct response to Anthropic's recent Project Glasswing announcement.

      GPT-5.4-Cyber is a version of GPT-5.4 tailored for defensive security tasks. Its key characteristic is a reduced refusal threshold: unlike standard models that block sensitive inquiries related to vulnerability research or malware behavior, this model is designed to respond to such queries if the user is a verified security professional. Additionally, it features binary reverse engineering capabilities, allowing analysts to investigate compiled software for vulnerabilities without needing access to the source code.

      Understanding Trusted Access for Cyber

      The model is part of OpenAI’s Trusted Access for Cyber (TAC) program, which was initially introduced in February along with a $10 million grant fund for cybersecurity. TAC operates as an identity-and-trust framework that controls access to more advanced models based on verification levels. Individual users can authenticate their identity at chatgpt.com/cyber, while organizations can request access for their teams via an OpenAI representative. Security researchers requiring more extensive capabilities can apply for an exclusive invitation-only level.

      With the April update, the program is expanding from a limited pilot to what OpenAI describes as “thousands of verified individual defenders and hundreds of teams defending essential software.” New tiers are being added, with higher verification levels granting access to more advanced features. Users approved for the highest tier will have access to GPT-5.4-Cyber, but with a caveat: these top-tier users may need to waive Zero-Data Retention, meaning OpenAI retains visibility into how the model is used.

      This strategy reflects a shift in philosophy. Instead of mainly relying on model-level safeguards against misuse, OpenAI is adopting an access-control approach that checks the identity of users before determining the model's responses. The company emphasizes three guiding principles: democratized access through objective verification, iterative deployment that updates safety systems as new risks emerge, and fostering ecosystem resilience through funding and open-source contributions.
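The tier-gating idea described above can be sketched in a few lines. This is purely illustrative: the tier names, capability list, and thresholds below are hypothetical, since OpenAI has not published the internal structure of the TAC verification levels.

```python
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical verification tiers; the real TAC tier names are not public.
    UNVERIFIED = 0
    VERIFIED_INDIVIDUAL = 1
    VERIFIED_TEAM = 2
    INVITE_ONLY = 3

# Minimum tier required per capability (illustrative, not OpenAI's actual policy).
CAPABILITY_MIN_TIER = {
    "general_security_qa": Tier.UNVERIFIED,
    "dual_use_queries": Tier.VERIFIED_INDIVIDUAL,
    "binary_reverse_engineering": Tier.INVITE_ONLY,
}

def is_allowed(user_tier: Tier, capability: str) -> bool:
    """Gate a request on the user's verified tier rather than on the prompt alone."""
    return user_tier >= CAPABILITY_MIN_TIER[capability]

print(is_allowed(Tier.VERIFIED_INDIVIDUAL, "dual_use_queries"))            # True
print(is_allowed(Tier.VERIFIED_INDIVIDUAL, "binary_reverse_engineering"))  # False
```

The key design point is that the decision depends on who is asking, not only on what is asked: the same dual-use query is answered for a verified defender and declined for an anonymous user.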

      Context of Anthropic

      The timing of OpenAI's announcement cannot be interpreted without considering Anthropic’s Project Glasswing, unveiled on April 7. Anthropic disclosed that its Claude Mythos Preview model had autonomously identified thousands of zero-day vulnerabilities across major operating systems and web browsers, including an ancient bug in OpenBSD and a long-standing remote code execution flaw in FreeBSD, both discovered and documented without human input.

      In response, Anthropic has significantly limited access: Mythos Preview is exclusive to just 11 organizations, such as Apple, Google, Microsoft, AWS, Cisco, CrowdStrike, and JPMorgan Chase, under a $100 million defense initiative. The model is not publicly accessible, and Anthropic has indicated it may never be due to concerns about the potential for misuse of its exploit-generation capabilities.

      In contrast, OpenAI is pursuing the opposite strategy. While GPT-5.4-Cyber may lack Mythos's raw ability in vulnerability detection, OpenAI aims to make it available to a much broader audience. The underlying argument is that confining powerful security tools to a select few tech giants leaves a vast number of organizations—like those responsible for critical infrastructure, hospitals, local governments, and smaller security firms—without access to the same high-quality defensive technology.

      Capabilities of GPT-5.4-Cyber

      Besides having lower refusal thresholds, the model is optimized for workflows that standard ChatGPT handles inadequately or outright refuses. The standout feature is binary reverse engineering: security analysts can input compiled executables into the model and obtain analysis on potential malware behavior, embedded vulnerabilities, and structural weaknesses—tasks traditionally requiring specialized tools like IDA Pro or Ghidra and considerable manual expertise.
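To make the reverse engineering workflow concrete, here is a minimal sketch of the kind of triage an analyst performs before deeper analysis in a tool like Ghidra: identifying a binary's format from its header. The hard-coded header bytes keep the example self-contained; this illustrates the manual workflow, not how GPT-5.4-Cyber works internally.

```python
# A minimal 16-byte ELF identification header for a 64-bit little-endian binary,
# hard-coded here so the example is self-contained.
elf_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8

def triage_elf_ident(ident: bytes) -> dict:
    """First step of binary triage: confirm the file format before deeper analysis."""
    if ident[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    ei_class, ei_data = ident[4], ident[5]  # EI_CLASS and EI_DATA fields
    return {
        "bits": 64 if ei_class == 2 else 32,
        "endianness": "little" if ei_data == 1 else "big",
    }

print(triage_elf_ident(elf_ident))  # {'bits': 64, 'endianness': 'little'}
```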

      The model can also manage dual-use inquiries—questions about attack techniques, exploit chains, and types of vulnerabilities—that standard models may flag as potentially harmful. OpenAI notes earlier versions sometimes declined to respond to legitimate defensive queries, causing friction for security experts needing the model to reason about adversarial strategies to bolster defenses.

      Complementing this is Codex Security, OpenAI’s automated code-scanning tool, which has already aided in the resolution of over 3,000 critical and high-severity vulnerabilities across the open-source landscape since its launch. It now encompasses more than 1,000 open-source projects through a complimentary scanning initiative.
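For a sense of what automated code scanning involves at its simplest, the toy check below flags C library calls that frequently appear in vulnerability reports. Real scanners, and presumably Codex Security, rely on far deeper semantic analysis than pattern matching; this only illustrates the scan-and-report workflow.

```python
import re

# Toy pattern for calls commonly flagged as unsafe in C code (illustrative only).
RISKY_CALLS = re.compile(r"\b(strcpy|sprintf|gets|system)\s*\(")

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_call) pairs for risky C library calls."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in RISKY_CALLS.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

sample = 'int main() {\n  char buf[8];\n  gets(buf);\n  return 0;\n}\n'
print(scan_source(sample))  # [(3, 'gets')]
```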

      The dual-use challenge

      The inherent tension within cybersecurity AI lies in its dual-use nature: the same reasoning about exploits, attack chains, and binary internals that helps defenders harden systems can also help attackers compromise them. OpenAI's wager is that identity verification and retained visibility into usage can keep those capabilities pointed at defense.

