OpenAI Launches $25K Bio Bug Bounty for GPT-5.5
OpenAI has announced a Bio Bug Bounty program for GPT-5.5, offering a $25,000 reward for the first universal jailbreak prompt that defeats its five-question bio safety challenge. The program targets researchers with expertise in AI red teaming, security, or biosecurity; testing is limited to GPT-5.5 in Codex Desktop. Applications open April 23, 2026, with rolling acceptances until June 22, 2026, and testing runs from April 28 to July 27, 2026. Selected participants must sign a non-disclosure agreement covering all prompts, completions, and findings. Smaller awards may be granted for partial successes. The initiative aims to strengthen safeguards against biorisks in advanced AI.
Key facts
- OpenAI launches Bio Bug Bounty for GPT-5.5
- Reward: $25,000 for first universal jailbreak
- Challenge: defeat a five-question bio safety test
- Model in scope: GPT-5.5 in Codex Desktop
- Applications open April 23, 2026
- Application deadline: June 22, 2026
- Testing period: April 28 to July 27, 2026
- Participants must sign NDA
Entities
Institutions
- OpenAI