The EU’s GPAI Code of Practice Highlights a Security-Centered Approach to AI
The European Commission has released the final version of the General-Purpose AI (GPAI) Code of Practice, a voluntary framework designed to help developers of general-purpose AI systems meet their obligations under the EU AI Act ahead of its enforcement beginning in August 2025. Developed by 13 independent experts with input from more than 1,000 stakeholders, the Code offers actionable guidance on transparency, copyright, and systemic risk mitigation.
HackerOne contributed comments during the drafting of the Code, emphasizing the importance of integrating proven security practices such as red teaming, vulnerability disclosure, and bug bounty programs. Many of these recommendations are reflected in the final text, reinforcing that securing AI systems is foundational to building trust and ensuring responsible deployment.
Applicability and Benefits of Adopting the Code
The General-Purpose AI Code of Practice is a voluntary set of guidelines to help general-purpose AI developers improve transparency, safety, security, and risk management. It applies to “general-purpose AI” models: models trained on large amounts of data that can serve many different applications (with an exception for models used only for research or testing before release).
Although voluntary, signing up to the Code offers clear advantages. The Code helps organizations prepare for the upcoming AI Act by promoting best practices in transparency, security, and risk management. It also signals a strong commitment to responsible AI, building trust with users, partners, and regulators. Additionally, signatories can join a community shaping future AI governance standards. By signing up to the Code, GPAI providers will be presumed to conform with the GPAI obligations under the AI Act (Article 53 and, for models posing systemic risk, Article 55), helping them demonstrate legal compliance more easily, an advantage not available to non-signatories.
The AI Office will soon provide providers of general-purpose AI models operating in the EU with details on how to participate.
Understanding the Structure of the Code
The Code is divided into three core chapters, each addressing a key dimension of GPAI governance:
Transparency
This chapter requires providers to disclose relevant information about their models in a structured, accessible way. A standardized Model Documentation Form is included to support this. It aligns closely with HackerOne’s focus on system accountability and auditability, key pillars for assessing AI behavior, understanding risk exposure, and ensuring traceability when models are deployed or integrated downstream.
Copyright
This chapter offers practical guidance for ensuring that model training and usage comply with EU copyright law. It underscores the broader importance of responsible AI development, including clear documentation of where training data comes from, how it’s used, and whether it’s properly licensed. These issues are closely tied to building trust and transparency in AI systems.
Safety and Security
This chapter applies specifically to developers of the most advanced GPAI models: those that may pose systemic risks, meaning risks that could have widespread or significant impacts across sectors or society. The chapter outlines requirements for evaluating, mitigating, and managing these risks using state-of-the-art methods, including:
- Frequent red teaming: HackerOne supports leading AI developers such as Anthropic and Snap in conducting red teaming exercises that simulate adversarial threats, uncovering model weaknesses before they can be exploited.
- Secure third-party vulnerability reporting: HackerOne’s Vulnerability Disclosure Program (VDP) solutions enable AI companies to receive, triage, and respond to security vulnerabilities submitted by external researchers, ensuring that security issues are discovered and resolved quickly and responsibly.
- Bug bounty programs: HackerOne’s platform powers competitive public and private bug bounty programs that incentivize the discovery of security issues in AI systems. These programs are already used by organizations like Adobe, Cloudflare, and Zoom to test and strengthen their AI deployments.
- Whistleblower protections: While HackerOne does not directly manage internal reporting processes, our approach to coordinated disclosure reinforces a culture of openness and accountability, essential components of any secure AI development lifecycle.
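A common first step in standing up third-party vulnerability reporting of the kind the Code envisions is publishing a machine-readable disclosure policy. One widely adopted mechanism is a security.txt file as defined in RFC 9116, served at /.well-known/security.txt. The sketch below is illustrative only; the domain and addresses are placeholders, not real endpoints:

```text
# Illustrative /.well-known/security.txt (RFC 9116); example.com is a placeholder
Contact: mailto:security@example.com
Expires: 2026-08-01T00:00:00.000Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

Per RFC 9116, `Contact` and `Expires` are required fields; `Policy` points researchers to the rules of engagement and safe-harbor terms, which a VDP platform can then triage against.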
The inclusion of these measures shows that strong, proactive security practices such as red teaming, coordinated vulnerability disclosure, and bug bounty programs are no longer optional for advanced AI systems. They are a core part of responsible development and a necessary step toward managing systemic risk.
Putting the Code of Practice into Action
The General-Purpose AI Code of Practice sets out expectations for how GPAI systems should be built, tested, and deployed in the EU, with a strong focus on transparency, compliance, and risk management.
By including measures like red teaming, vulnerability disclosure, and bug bounty programs, the Code makes clear that strong security practices are essential, not optional, for managing the risks of powerful AI models.
HackerOne is proud to help leading AI organizations put these principles into action. Contact us to learn how we can support your organization in aligning with the Code and building secure, resilient AI systems.