OpenAI introduces hardware security keys for ChatGPT in collaboration with Yubico and eliminates password login for users deemed high-risk.
TL;DR
OpenAI has introduced Advanced Account Security for ChatGPT and Codex, an optional feature that replaces passwords with passkeys or hardware security keys, disables email and SMS recovery, and automatically opts users out of model training. The company has partnered with Yubico to offer co-branded YubiKeys for $68 (in a two-pack), significantly below retail price. The feature is aimed at journalists, dissidents, and officials, and will become mandatory for members of Trusted Access for Cyber by June 1.
OpenAI has unveiled a security upgrade for ChatGPT accounts that looks less like a consumer app and more like online banking: hardware keys, no passwords, no email recovery, and no customer support for lost access. Named Advanced Account Security, the opt-in feature requires users to authenticate with two passkeys, two hardware security keys, or one of each before signing in to ChatGPT or Codex. Once enabled, password-based login is permanently disabled, and recovery via email or text message is no longer available. OpenAI has partnered with Yubico, a hardware authentication company, to sell co-branded YubiKeys for $68, less than half the $126 retail price. The feature is available to all users, including those on free plans, and is aimed squarely at journalists, political dissidents, researchers, and public officials. Its introduction reflects a simple fact: for a growing number of users, a ChatGPT account protects more sensitive information than their email account does.
What it does
Advanced Account Security eliminates traditional login and recovery methods in favor of cryptographic authentication. Users who opt in must register two separate credentials, drawn from passkeys stored on their devices, YubiKeys, or other FIDO2-compliant hardware tokens. Each credential produces a unique cryptographic key pair that never leaves the device, so there are no passwords to steal, no one-time codes to intercept, and no recovery emails for attackers to exploit through social engineering. OpenAI has made clear that its support team cannot restore access to accounts secured by Advanced Account Security if users lose both credentials. A recovery key is provided during setup; losing that key as well leaves the account unrecoverable. The architecture borrows the zero-trust principles used to protect sensitive government systems and cryptocurrency wallets and applies them to a consumer chatbot.
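The challenge-response mechanism behind this kind of authentication can be illustrated in miniature. The sketch below is not OpenAI's or Yubico's implementation; it is a simplified, assumption-laden model of a FIDO2-style login round using Ed25519 signatures: the device keeps the private key, the server stores only the public key and verifies a signature over a fresh random challenge, so no reusable secret ever crosses the wire.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// verifyLogin simulates one registration plus one login round of a
// FIDO2-style challenge-response flow (a conceptual sketch, not the
// actual WebAuthn protocol, which also covers attestation, origin
// binding, and replay counters).
func verifyLogin() bool {
	// Registration: the device generates a key pair. Only the public
	// key is sent to the server; the private key stays on the device,
	// so there is no shared secret for malware or phishing to steal.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return false
	}

	// Login, step 1: the server issues a fresh random challenge.
	challenge := make([]byte, 32)
	if _, err := rand.Read(challenge); err != nil {
		return false
	}

	// Login, step 2: the device signs the challenge with its private key.
	sig := ed25519.Sign(priv, challenge)

	// Login, step 3: the server checks the signature against the stored
	// public key. A stolen transcript is useless for future logins,
	// because the next challenge will be different.
	return ed25519.Verify(pub, challenge, sig)
}

func main() {
	if verifyLogin() {
		fmt.Println("login verified")
	} else {
		fmt.Println("login rejected")
	}
}
```

Registering two independent credentials, as Advanced Account Security requires, simply means the server stores two public keys and accepts a valid signature from either one.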
The feature offers additional safeguards. Sign-in sessions are shortened, minimizing the window in which a stolen session token can be exploited. Users are alerted to every new login and can review and terminate active sessions from their account settings. Furthermore, enabling Advanced Account Security automatically excludes the user from model training, ensuring that their conversations will not contribute to future versions of ChatGPT. This detail matters: it ties the highest level of account security to the highest level of data privacy, creating a user category whose interactions are both cryptographically secured and excluded from OpenAI’s training processes. For users handling sensitive information, this dual approach addresses two major concerns at once.
Why it matters
The context surrounding this upgrade makes its purpose clear. In 2024, the cybersecurity firm Group-IB discovered over 100,000 stolen ChatGPT credentials circulating on dark web marketplaces, harvested from devices infected with information-stealing malware. These credentials gave anyone who bought them complete access to victims’ chat histories, which often included confidential work discussions, personal inquiries, and potentially damaging information. In a separate incident, a breach involving Mixpanel, a third-party analytics provider, exposed ChatGPT user names, email addresses, and technical metadata that could facilitate targeted phishing. The broader industry push toward passwordless authentication stems from the recognition that passwords are the largest attack surface in consumer technology: industry research predicts that 46 percent of successful cyberattacks on small and medium-sized businesses in 2026 will result from credential reuse.
What makes ChatGPT accounts a distinctive target is what they contain. An email account stores messages and a bank account holds transaction records, but a ChatGPT account holds the unfiltered questions people ask when they think no one is watching: medical concerns, legal issues, relationship troubles, business strategies, proprietary code, and interactions with an AI that retains context across sessions. OpenAI’s Codex Chronicle feature, which periodically captures screenshots of a user’s desktop and transmits them to OpenAI’s servers for processing, raises the stakes further for users who opt in. The company is expanding both the amount of sensitive information its products collect and the security infrastructure that safeguards it. Advanced Account Security is the protective half of that development.
The Yubico deal
OpenAI's collaboration with Yubico is both commercial and strategic. The co-branded products, the YubiKey C NFC and the YubiKey C Nano, are identical to Yubico’s existing offerings but feature OpenAI branding and are sold through OpenAI’s distribution channels at a discounted price.
