OpenAI Launches $25,000 Bug Bounty for GPT-5.5 Jailbreaks

Synopsis

OpenAI is offering $25,000 to security researchers who can bypass the safety guardrails of its new AI model, GPT-5.5, through a "bio bug bounty" programme. This initiative invites vetted experts to find universal "jailbreak" prompts, marking a significant step in external adversarial testing for AI safety.

OpenAI has invited security researchers to try to break its newest AI model and will pay them to do so. The company has announced a Bio Bug Bounty programme for GPT-5.5, offering cash rewards to researchers who can bypass the model’s biological safety guardrails.

The move comes amid growing concerns over AI safety and marks one of the first instances of a major AI company paying external experts to stress-test its systems in this way.

The programme, which opened for applications on April 23, challenges participants to find a single universal jailbreak prompt capable of getting the model to answer all five questions in a biosafety challenge without triggering any moderation response.

The task must be completed from a clean chat session, meaning no prior conversation or context that could influence the model. GPT-5.5, accessible only through Codex Desktop, is the only model in scope.
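The protocol described above can be sketched in code. The snippet below is a hypothetical tester's harness, not anything OpenAI provides: `query_model` and `refused` are stand-in functions for whatever interface and moderation check a participant actually uses. It illustrates the two key rules of the challenge: one fixed candidate prompt is reused for all five questions, and each question is asked in a fresh, empty session.

```python
# Hypothetical sketch of the challenge protocol: a single "universal"
# candidate prompt must get answers to all five biosafety questions,
# each asked from a clean session with no prior context.
# `query_model` and `refused` are illustrative stand-ins, not real APIs.

from typing import Callable, List, Dict

def evaluate_universal_prompt(
    candidate: str,
    questions: List[str],
    query_model: Callable[[List[Dict[str, str]]], str],
    refused: Callable[[str], bool],
) -> bool:
    """Return True only if every question is answered without a
    moderation refusal, each from a clean (empty-history) session."""
    for question in questions:
        # Clean session: the message history starts empty every time,
        # so no earlier exchange can influence the model.
        messages = [
            {"role": "user", "content": f"{candidate}\n\n{question}"},
        ]
        reply = query_model(messages)
        if refused(reply):
            # One refusal means only a partial success; the full
            # $25,000 bounty requires all five questions to pass.
            return False
    return True
```

A partial success (some but not all questions answered) would fail this check, matching the programme's distinction between the full bounty and discretionary partial rewards.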

The financial incentive is significant. OpenAI is offering $25,000 to the first researcher who achieves a complete universal jailbreak across all five questions. Partial successes may also be rewarded at the company’s discretion, though amounts have not been specified.

Applications close on June 22, 2026, and will be reviewed on a rolling basis. Testing will run from April 28 to July 27. Access is not open to all. OpenAI said it will invite a vetted group of trusted biosecurity red teamers, while also reviewing applications from researchers with relevant experience in AI red teaming, security or biosecurity.

All findings, prompts and communications will be covered by a non-disclosure agreement, so participants cannot publicly disclose their results, a common arrangement in commercial security research.

The announcement comes amid a broader trend in the AI industry towards structured adversarial testing, or “red teaming”, as part of safety development processes.

This editorial summary reflects ET Tech and other public reporting on OpenAI Launches $25,000 Bug Bounty for GPT-5.5 Jailbreaks.