Today, OpenAI is launching a public Safety Bug Bounty program focused on identifying AI abuse and safety risks across our products. As AI technology rapidly evolves, so do the potential ways it can be misused. Our goal is to ensure our systems remain safe and secure against misuse or abuse that could lead to tangible harm.
This new program will complement OpenAI’s Security Bug Bounty by accepting issues that pose meaningful abuse and safety risks, even if they don’t meet the criteria for a security vulnerability. We look forward to continuing to partner with safety and security researchers to identify and address issues that fall outside conventional security vulnerabilities but still pose real risks. Submissions will be triaged by OpenAI’s Safety and Security Bug Bounty teams and may be rerouted between the two programs depending on scope and ownership.
The new Safety Bug Bounty program focuses on the AI-specific safety scenarios listed below:

- Agentic Risks, including MCP
- OpenAI Proprietary Information
- Account and Platform Integrity
While jailbreaks are out of scope for this program, we periodically run private bug bounty campaigns focused on certain harm types, such as biorisk content issues in ChatGPT Agent and GPT‑5. We invite interested researchers to apply to these programs when they arise.
Outside of the categories listed above, flaws that demonstrate a direct path to user harm and come with actionable, discrete remediation steps may be considered in scope for rewards on a case-by-case basis. General content-policy bypasses without demonstrable safety or abuse impact are out of scope for this program. For example, “jailbreaks” that result in the model using rude language or returning information that is easily found via a search engine are out of scope.
## How to participate
Researchers interested in participating can apply through our Safety Bug Bounty program. We look forward to working alongside researchers, ethical hackers, and the safety and security community in the pursuit of a secure AI ecosystem.