- Alignment with EU Principles and Values
- Alignment with AI Act and International Approaches
- Proportionality to Risks
- Future-Proof
- Proportionality to the size of the general-purpose AI model provider
- Support and growth of the AI safety ecosystem
The Code covers three broad areas concerning general-purpose AI models' transparency and risk mitigation as they are built, trained, and deployed or integrated into AI systems. The work is structured into working groups that produce sets of measures and sub-measures, as well as KPIs and open questions, for the consultation and feedback rounds running until April 2025. Although this is the most important step since the AI Act's adoption this summer, intended to help with its implementation, it also gives a sense of the current state of progress and of the future policy outlook, which remains very much open to input from stakeholders across business, academia, and civil society (often described as red-teaming), showing a high degree of flexibility at this stage. A multi-stakeholder consultation has already gathered nearly 430 submissions.
Transparency and copyright-related rules
In terms of copyright, Signatories of the Code will draw up and implement an internal policy to comply with Union law on copyright and related rights, in line with this Chapter of the Code, including a downstream and an upstream policy. In a nutshell, it is a commitment to ensure that GPAI models have lawful access to copyright-protected content and to identify and comply with rights reservations expressed pursuant to Article 4(3) of Directive (EU) 2019/790.
The transparency documentation covers the model's intended tasks and the type and nature of AI systems into which it can be integrated, as well as an Acceptable Use Policy (AUP), defined as a set of rules that outline how a service or technology may be used. It is a document that gives users guidance on what is and isn't acceptable behavior. The AUP should be consistent with the Signatories' materials that describe the uses and capabilities of their general-purpose AI model.
Risk identification and assessment for systemic risk
The Code also draws some clear lines on what systemic risk means in terms of capabilities and propensities.
Dangerous model capabilities — these are model capabilities that may cause systemic risk. Signatories recognise that many of these capabilities are also important for beneficial uses.
These include:
- Cyber-offensive capabilities
- Chemical, Biological, Radiological and Nuclear (CBRN) capabilities
- Weapon acquisition or proliferation capabilities
- Autonomy, scalability, and adaptability to learn new tasks
- Self-replication, self-improvement, and the ability to train other models
- Persuasion, manipulation, and deception
- Long-horizon planning, forecasting, and strategising
- Situational awareness
Dangerous model propensities — these are model characteristics beyond capabilities that may cause systemic risk. They include:
- Misalignment with human intent and/or values
- Tendency to deceive
- Bias
- Confabulation
- Lack of reliability and security
- "Goal-pursuing", resistance to goal modification, and "power-seeking"
- "Colluding" with other AI models/systems to do so
Technical risk mitigation for systemic risk
The most important part is the continuous commitment to risk identification as part of a Safety and Security Framework. Signatories, whether providers of GPAI models or deployers of AI systems, commit to continuously and thoroughly identifying systemic risks that may stem from their general-purpose AI models with systemic risk.
They will do this using a range of methods, from forecasting to best-in-class evaluations, to investigate the capabilities, propensities, and other effects specified in measures that will be detailed further. Signatories commit to identifying and keeping track of serious incidents, as far as they originate from their general-purpose AI models with systemic risk. There is an important emphasis on documentation and reporting to the AI Office and national competent authorities, including possible corrective measures on top of the continuous mitigation efforts.
Governance risk mitigation for systemic risk
The governance component highlights the need for resource allocation at the board and executive level of providers of GPAI models with systemic risk, as well as the need to enable meaningful independent expert risk and mitigation assessment throughout the models' lifecycle, as appropriate, especially for high severity tiers. Such independent expert assessment may involve independent testing of model capabilities, reviews of the evidence collected, of systemic risks, and of the adequacy of mitigations.
The Code is drafted around determining risk thresholds and risk tolerance, forecasting, continuous monitoring for the emergence of risks, and determining the effectiveness of risk mitigation measures; these are the lines along which builders and deployers of GPAI models need to demonstrate that a model does not exceed maximum risk thresholds. The Code also focuses on access control to tools and on levels of model autonomy.
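As a rough illustration of what demonstrating that a model stays below maximum risk thresholds could look like in practice, here is a minimal sketch; the capability names, threshold values, and evaluation scores are hypothetical and are not taken from the Code or the AI Act.

```python
# Purely illustrative sketch: hypothetical capability names, thresholds, and
# evaluation scores, not actual figures from the Code of Practice or the AI Act.

# Hypothetical maximum risk thresholds per capability (assumed 0.0-1.0 scale).
RISK_THRESHOLDS = {
    "cyber_offense": 0.30,
    "cbrn": 0.10,
    "persuasion_manipulation": 0.40,
}

def exceeds_threshold(capability: str, eval_score: float) -> bool:
    """Return True if an evaluation score breaches the maximum risk threshold
    (i.e. the risk tolerance) set for that capability."""
    return eval_score > RISK_THRESHOLDS[capability]

def monitor(eval_results: dict[str, float]) -> list[str]:
    """Continuous-monitoring step: flag every capability whose latest
    evaluation exceeds its maximum risk threshold, for reporting/escalation."""
    return [cap for cap, score in eval_results.items() if exceeds_threshold(cap, score)]

if __name__ == "__main__":
    # Hypothetical latest evaluation round for a model.
    latest_evals = {"cyber_offense": 0.22, "cbrn": 0.03, "persuasion_manipulation": 0.55}
    print("Capabilities exceeding maximum risk thresholds:", monitor(latest_evals))
```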
For context:
A range of academics, from Turing award-winner Yoshua Bengio to PhD candidates, have been named chairs and vice-chairs of working groups that will draft a Code of Practice on general-purpose artificial intelligence (GPAI), according to a Monday (30 September) Commission press release.
For providers of general-purpose AI systems like ChatGPT, the AI Act relies heavily on the Code of Practice, which will detail what the Act’s risk management and transparency requirements would entail in practice until standards are finalised, sometime in 2026.
“Chairs and vice-chairs play pivotal roles in shaping the first general-purpose AI Code of Practice,” the Commission said in a press release.