OpenAI Announces Coordinated Vulnerability Disclosure Policy for Third-Party Software
On September 22, 2025, OpenAI unveiled a new policy governing how it reports security vulnerabilities it discovers in third-party software. The policy lays out a detailed disclosure process that begins with identifying and validating an issue, then documenting it with an impact summary and proof-of-concept examples; security engineers peer-review each submission before it is sent. Preferred reporting channels include vendor security emails and private GitHub reporting, avoiding public issue trackers and bug bounty programs. Initial reports remain confidential, and public disclosure normally occurs only with vendor approval, unless the vulnerability is under active exploitation or disclosure is legally required. Detection techniques include AI-driven analysis and audits. The policy emphasizes collaboration and integrity, credits discoveries to OpenAI Security Research - Aardvark, and may be revised as AI technology evolves.
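The disclosure workflow described above (validated issue, impact summary, proof of concept, private-by-default publication with stated exceptions) can be sketched as a data model. This is purely illustrative; the class, field, and parameter names are hypothetical and are not OpenAI's actual tooling or schema:

```python
from dataclasses import dataclass
from enum import Enum


class Channel(Enum):
    """Preferred private reporting channels named in the policy."""
    VENDOR_SECURITY_EMAIL = "vendor security email"
    PRIVATE_GITHUB_REPORT = "private GitHub report"


@dataclass
class DisclosureReport:
    """Hypothetical structure mirroring the policy's required elements."""
    component: str                # affected third-party software
    impact_summary: str           # required impact summary
    proof_of_concept: str         # required PoC example
    channel: Channel
    peer_reviewed: bool = False   # peer review by security engineers
    vendor_consented_to_publication: bool = False

    def may_publish(self,
                    actively_exploited: bool = False,
                    vendor_unresponsive: bool = False,
                    legally_required: bool = False) -> bool:
        # Private by default: publish only with vendor consent,
        # or under one of the policy's stated exceptions.
        return (self.vendor_consented_to_publication
                or actively_exploited
                or vendor_unresponsive
                or legally_required)
```

For example, a freshly filed report with no vendor consent and no exception would stay private (`may_publish()` returns `False`), while the same report flips to publishable if active exploitation is observed.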
Key facts
- OpenAI released its outbound coordinated vulnerability disclosure policy on September 22, 2025
- The policy governs how OpenAI reports vulnerabilities discovered in third-party software to vendors and open-source maintainers
- Detection methods include AI-powered application security analysis, security research, audits, and fuzzing
- Disclosures undergo internal peer review by security engineers before release
- Initial disclosures are private by default, with public disclosure typically requiring vendor consent
- Exceptions to private disclosure include active exploitation, unresponsive vendors, or legal requirements
- Vulnerabilities are credited to OpenAI Security Research - Aardvark
- The policy may change as security research becomes increasingly automated with AI advances
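To make the fuzzing detection method listed above concrete, here is a minimal sketch of a mutation-based fuzzer. Everything here is hypothetical: `parse_header` is a toy target with a planted bug, and this is not OpenAI's actual detection pipeline, which per the announcement also involves AI-powered analysis and audits:

```python
import random


def parse_header(data: bytes) -> str:
    """Toy target: a hypothetical parser with a planted crash condition."""
    if len(data) < 4:
        raise ValueError("too short")
    if data[:2] == b"\xff\xfe" and data[2] == 0:
        raise IndexError("simulated out-of-bounds read")  # the planted "bug"
    return data.decode("latin-1")


def mutate(seed: bytes) -> bytes:
    """Apply a few random byte flips, inserts, or deletes to the seed."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = random.randrange(len(data))
            data[i] ^= random.randrange(1, 256)
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and len(data) > 1:
            del data[random.randrange(len(data))]
    return bytes(data)


def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Feed mutated inputs to the target and collect crashing inputs."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_header(candidate)
        except Exception:
            crashes.append(candidate)  # each crash is a disclosure candidate
    return crashes
```

In a real workflow, each collected crashing input would then be triaged and validated, and the minimized reproducer would serve as the proof-of-concept attached to the vendor report.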
Entities
Institutions
- OpenAI
- CERTs
- CISA
Locations
- United States