AI Policy & Governance

CDT Europe Statement on the Third General Purpose AI Code of Practice Draft 

Yesterday, the European AI Office unveiled the third draft of the Code of Practice on general purpose AI (GPAI) models. The Code, due to be finalised in May, will play a complementary role to the AI Act by setting out key commitments and measures for GPAI model providers to follow in order to comply with their corresponding obligations under the Act. The Centre for Democracy and Technology Europe (CDT Europe) regrets that this latest draft, which is to be put to multi-stakeholder consultation, all but removes fundamental rights from the scope of mandatory risk assessments.

One of the core elements underpinning the Code of Practice is the systemic risk taxonomy, which outlines the specific risks that GPAI model providers must proactively assess and mitigate. Alongside many others, CDT Europe stressed repeatedly that the taxonomy could be improved to robustly reflect known risks arising from GPAI models, including discrimination, privacy risks, and the prevalence of child sexual abuse material and non-consensual intimate imagery. Despite extensive advocacy, all of these fundamental rights risks have been confined to a subsidiary list of risks optional for GPAI models to consider, with the main risk taxonomy almost entirely focussing on existential risks, such as loss of control and chemical, biological, radiological and nuclear risks.

“The removal of discrimination from the selected systemic risk list is a significant regression in the drafting process and an alarming step backwards for the protection of fundamental rights. We emphasised in each round of feedback the importance of preserving and strengthening the discrimination risk, as well as including privacy risks, child sexual abuse material and non-consensual intimate imagery in the list,” said Laura Lazaro Cabrera, CDT Europe’s Counsel and Director of the Equity and Data Programme.

“Instead, the third draft confirms what many of us had feared – that consideration and mitigation of the most serious fundamental rights risks would remain optional for general purpose AI model providers. Fundamental rights are not ‘add-ons’. They are a cornerstone of the European approach to AI regulation.”

CDT Europe further notes with concern that the third Code of Practice draft actively dissuades providers from assessing optional fundamental rights risks, by instructing them to consider these risks only where they are reasonably foreseeable, and to “select” them for further assessment only if they are “specific to the high impact capabilities” of GPAI models with systemic risk. Through these changes, the Code has removed all incentives for providers to account for risks to fundamental rights, leaving it to industry to decide to what extent they assess those risks, if at all.

“It is not too late for the drafters to course-correct. But this draft is the closest to the final product – and foreshadows a significant erosion of fundamental rights in the AI landscape,” commented Lazaro Cabrera.