European Privacy Law Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/privacy-data/european-privacy-law/

EU Tech Policy Brief: May 2025 (7 May 2025)
https://cdt.org/insights/eu-tech-policy-brief-may-2025/

Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT's perspective on their impact on digital rights. To sign up for CDT Europe's AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Building Global Spyware Standards with the Pall Mall Process

As international attention focuses on misuses of commercial spyware, the Pall Mall Process continues to gather momentum. This joint initiative, led by France and the United Kingdom, seeks to establish international guiding principles for the development, sale, and use of commercial cyber intrusion capabilities (CCICs). 

At the Process’s second conference in Paris earlier this month, Programme Director Silvia Lorenzo Perez joined global stakeholders as the process concluded with the adoption of a Pall Mall Code of Practice for States. The Code has been endorsed by 25 countries to date, including 18 EU Member States. It sets out commitments for state action regarding the development, facilitation, acquisition, and deployment of CCICs. It also outlines good practices and regulatory recommendations to promote responsible state conduct in the use of CCICs. 

Pall Mall Process annual event in Paris.

CDT Europe will soon publish a comprehensive assessment of the official document to provide deeper insights into its implications. In parallel, and as part of our ongoing work to advance spyware regulation within the EU, CDT Europe is leading preparation of the sixth edition of the civil society roundtable series, “Lifting the Veil – Advancing Spyware Regulation in the EU,” on 13 May. Stakeholders will discuss what meaningful action should look like in the EU, following the political commitments made by the Member States that endorsed the Pall Mall Code of Practice.

CSOs Urge Swedish Parliament to Reject Legislation Undermining Encryption

CDT Europe joined a coalition of civil society organisations, including members of the Global Encryption Coalition, in an open letter urging the Swedish Parliament to reject proposed legislation that would weaken encryption. This legislation, if enacted, would greatly undermine the security and privacy of Swedish citizens, companies, and institutions. Despite its intention to combat serious crime, the legislation's dangerous approach would instead create vulnerabilities that criminals and other malicious actors could readily exploit. Compromising encryption would leave Sweden's citizens and institutions less safe than before. The proposed legislation would particularly harm those who rely on encryption the most, including journalists, activists, survivors of domestic violence, and marginalised communities. Human rights organisations have consistently highlighted encryption's critical role in safeguarding privacy and free expression. Weakening encryption would also pose a national security threat, as even the Swedish Armed Forces rely on encrypted tools like Signal for secure communication.

Recommended read: Ofcom, Global Titles and Mobile Network Security, Measures to Address Misuse of Global Titles

 💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Meets with the ODS Bodies Network

Earlier this month, the DSA Civil Society Coordination Group met with the Out-of-Court Dispute Settlement (ODS) Bodies Network for the first time to explore ways to collaborate. Under Article 21 of the Digital Services Act (DSA), ODS Bodies are to provide independent resolution of disputes between users and online platforms. As these bodies start forming and seeking certification, their role in helping users access redress and offering insights into platform compliance is becoming more important.

The meeting introduced the ODS Network’s mission: to encourage cooperation among certified bodies, promote best practices for data-sharing, and engage with platforms and regulators. Civil society organisations, which often support users who have faced harms on platforms, discussed how they could help identify cases that could be referred to ODS Bodies. In return, records from ODS Bodies could become a valuable resource for tracking systemic risks and holding platforms accountable under the DSA.

The discussion further focused on how to raise user awareness of redress options, make ODS procedures more accessible, and strengthen data reporting practices. Participants also outlined next steps for working more closely together, particularly around identifying the types of data that could best support civil society’s efforts to monitor risks and support enforcement actions by the European Commission.

Asha Allen Joins Euphoria Podcast to Discuss Civil Society in the EU

Civil society is under pressure, and now more than ever, solidarity and resilience are vital. These are the resounding conclusions of the latest episode of the podcast Euphoria, featuring CDT Europe’s Secretary General Asha Allen. Asha joined Arianna and Federico from EU&U to unpack the current state of human rights and the growing threats faced by civil society in Europe and beyond. With key EU legislation like the AI Act and Digital Services Act becoming increasingly politicised, they explored how to defend democracy, safeguard fundamental rights, and shape a digital future that truly serves its citizens. Listen now to discover how cross-movement collaboration and rights-based tech policy can help counter rising authoritarianism.

CDT Europe Secretary General Asha Allen speaking with podcasters Federico Terreni and Arianna Labasin from EU&U at the Euphoria Podcast recording.

Recommended read: FEPs, Silenced, censored, resisting: feminist struggles in the digital age

⚖ Equity and Data

EU AI Act Explainer — AI at Work

In the fourth part of our series on the AI Act and its implications for human rights, we examine the deployment of AI systems in the workplace and the AI Act's specific obligations aimed at ensuring the protection of workers. In particular, we assess which of the prohibited AI practices could become relevant for the workplace and where potential loopholes and gaps lie. We also focus on the obligations of providers and deployers of high-risk AI systems, which could increase protection of workers from harms caused by automated monitoring and decision-making systems. Finally, we examine to what extent the remedies and enforcement mechanisms foreseen by the AI Act can be a useful tool for workers and their representatives to claim their rights. Overall, we find that the AI Act's approach of allowing more favourable legislation in the employment sector to apply is a positive step. Nevertheless, the regulation itself has only limited potential to protect workers' rights.

CSOs Express Concern with Withdrawal of AI Liability Directive

CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework. 

Programme Director Laura Lazaro Cabrera also participated in a working lunch hosted by The Nine to discuss the latest trends and developments in AI policy following the Paris AI Summit. Among other aspects, Laura tackled the deregulatory approach taken by the European Commission, the importance of countering industry narratives, and the fundamental rights concerns underlying some of the key features of the AI Act.

Equity and Data Programme Director Laura Lazaro Cabrera speaking on a panel at the “Post-Paris AI Summit: Key Trends and Policies” event hosted by The Nine.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 New Team Member!

Marcel Mir Teijeiro, AI Policy Fellow in CDT Europe’s Equity and Data programme.

CDT Europe's team keeps growing! At the beginning of April, we welcomed Marcel Mir Teijeiro as the Equity and Data programme's new AI Policy Fellow. He'll work on the implementation of the AI Act and CDT Europe's advocacy to protect the right to effective remedy for AI-induced harms. Previously, Marcel participated in the Code of Practice multistakeholder process for General-Purpose AI Models, advising rights-holder groups across the cultural and creative industries on transparency and intellectual property aspects. A Spanish-qualified lawyer, he also helped develop a hash-based technical solution for training dataset disclosure shared with the AI Office, the U.S. National Institute of Standards and Technology, and the UK AI Safety Institute. We are excited to have him on board, and look forward to working with him!

🗞 In the Press

⏫ Upcoming Events

Tech Policy in 2025: Where Does Europe Stand?: On 15 May, CDT Europe and Tech Policy Press are co-hosting an evening of drinks and informal discussion, “Tech Policy in 2025: Where Does Europe Stand?”. It will be an opportunity to connect with fellow tech policy enthusiasts, share ideas, and figure out what the future holds for tech regulation in Europe. The event is currently sold out, but you can still join the waitlist in case some spots open up!

Lifting the Veil – Advancing Spyware Regulation in the EU: CDT Europe, together with the Open Government Partnership, is hosting the sixth edition of the Civil Society Roundtable Series: “Lifting the Veil – Advancing Spyware Regulation in the EU.” The roundtable will gather representatives from EU Member States, EU institutions, and international bodies alongside civil society organisations, technologists, legal scholars, and human rights defenders for an in-depth exchange on the future of spyware regulation. Participation is invitation-only, so if you think you can contribute to the conversation, feel free to reach out at eu@cdt.org.

CPDP.ai 2025: From 21 to 23 May, CDT Europe will participate in the 18th edition of the CPDP.ai International Conference. Each year, CPDP gathers academics, lawyers, practitioners, policymakers, industry, and civil society from all over the world in Brussels, offering them an arena to exchange ideas and discuss the latest emerging issues and trends. This year, CDT Europe will host two workshops on AI and spyware, and our Secretary General Asha Allen will speak on a panel on the intersection of the DSA and online gender-based violence. You can still register to attend the conference.

EU AI Act Brief – Pt. 4, AI at Work (14 April 2025)
https://cdt.org/insights/eu-ai-act-brief-pt-4-ai-at-work/


AI ACT SERIES: CDT Europe has been publishing a series of blog posts and briefing papers on the EU AI Act and what it means for human rights. To receive these briefings in your inbox, don’t forget to subscribe to our AI Bulletin here. Below is the fourth post of the series where we examine the deployment of AI systems in the workplace and the EU AI Act’s specific obligations aimed at ensuring the protection of workers.

[ PDF version ]

***

In recent years, the use of algorithmic management and decision-making systems in the workplace has become increasingly widespread: a recent OECD survey found that over 70% of managers consulted reported that their firms used at least one automated tool to instruct, monitor or evaluate employees. This increase in use is understandably being met with apprehension. A survey conducted this year by the European Commission underscores workers' overwhelming support for rules regulating the use of AI in the workplace, lending weight to the European Trade Union Confederation's earlier calls for a Directive on algorithmic systems in the workplace that would specifically tackle some of the emerging challenges.

The EU's AI Act, the first cross-cutting landmark regulation of AI, recognises the risks involved in the deployment of AI systems in the workplace and creates specific obligations aimed at ensuring the protection of workers through prohibitions and increased safeguards, with varying levels of success.

Building on the previous explainers in this series, this brief zooms in on the specific aspects of the AI Act that are most relevant in the context of employment and the rights of workers in light of existing EU legislation on the protection of workers. 

This explainer will focus on the obligations of employers using AI systems in the workplace. Under the AI Act taxonomy, employers using AI will qualify as deployers of an AI system, regardless of whether an AI system is developed in-house – in which case they could be considered to be both providers and deployers – or acquired for use in the workplace.

Prohibited AI systems: red lines in the employment context

In line with its risk-based approach, the AI Act prohibits a number of AI practices that it considers to pose an unacceptable risk – several of which are directly or indirectly relevant to the workplace. While only the prohibition on the use of emotion recognition systems in the workplace explicitly relates to the employment context, several other prohibited AI systems, such as biometric categorisation systems or social scoring systems, have the potential to adversely impact the rights of workers. We explore the prohibitions with the most salient impacts on the workforce below, in order of strength.

Biometric categorisation – entirely prohibited

The Act prohibits AI systems which categorise individuals based on their biometric data to deduce or infer a series of attributes, including race, political opinions, and trade union membership among others (Article 5(1)(g)). This prohibition captures an employer relying on biometric categorisation to find out whether an individual belongs to a specific trade union, which could lead to negative consequences for that individual worker. This prohibition could similarly be relevant in the context of recruitment, for example if a job advertisement is only shown to certain groups of people based on their prior categorisation.

Emotion recognition – (Mostly) prohibited in employment settings

Acknowledging the well-established unreliability of emotion recognition systems (Recital 44), the AI Act prohibits the placing on the market and use of AI systems that infer emotions from individuals in the workplace, except when such systems are put in place for medical or safety reasons (Article 5(1)(f)). Emotion recognition under the Act is defined not in terms of an AI system's capability, but in terms of its purpose, namely "identifying or inferring emotions or intentions of natural persons on the basis of their biometric data". The Act excludes from the definition systems used to recognise physical states, such as pain or fatigue (Recital 18), which are otherwise permitted.

The guidelines on prohibited AI practices issued by the EU AI Office provide key clarifications on the scope of the prohibition. First, the guidelines apply a broad interpretation of “workplace”, clarifying that the prohibition extends to the recruitment process – in other words, job applicants or candidates are protected even in the absence of a formal employment or contractual relationship. Second, the guidelines clarify that the exception for medical and safety reasons should be interpreted narrowly, with any proposed interventions being required to be (i) responsive to an explicit need, (ii) limited to what is “strictly necessary”, including limits in time, personal application and scale, and (iii) accompanied by sufficient safeguards. Consequently, the guidelines specify that the “medical reasons” exception cannot be relied upon to legitimise  the detection of general aspects of wellbeing, including monitoring of stress levels. Likewise, “safety reasons” pertain only to the protection of life and health, and cannot be relied upon to legitimise the use of emotion recognition for the purposes of protecting property interests, for example to protect against theft or fraud. 

Despite the welcome clarifications above, the guidelines introduce carve-outs not foreseen in the text of the prohibition itself. Notably, they exclude systems deployed for personal training purposes, as long as the results are not shared with persons responsible for human resources and cannot impact the work relationship of the person trained or their professional progression. This carve-out enables employers to require workers to undergo emotion recognition for training purposes – even if the results are not shared, a third-party company contracted to provide such training could inform the employer whether such training was undertaken or not. Moreover, the guidelines state that crowd-control measures in public spaces continue to be allowed even if this means that employees present in the area will be subject to emotion recognition, given that this is not the primary aim of the measure. Consequently, employees working, for example, at a sports stadium could still lawfully be subject to emotion recognition according to the guidelines.

Social scoring – prohibited on a case-by-case basis

Furthermore, the AI Act prohibits systems used for social scoring of individuals or groups based on their social behaviour or known or inferred characteristics whenever the score leads to detrimental treatment in an unrelated context, or to detrimental treatment disproportionate to the social behaviour or its gravity (Article 5(1)(c)). In the workplace context, the latter is likely to be more relevant, and could include situations where a worker is fired or demoted based on their behaviour and inferred personality traits – such as perceived introversion or aloofness – such that the treatment is unjustified or disproportionate to the social behaviour itself or its gravity. However, whether a poor social score results in disproportionate treatment will likely ultimately turn on the facts of the specific case at hand. In this regard, it is crucial to note that the Act itself states that the social scoring prohibition does not apply to lawful evaluation practices carried out for a specific purpose (Recital 31), and the guidelines on prohibited practices specifically cite employee evaluations as an example of such lawful practices, noting that "they are not per se prohibited, if lawful and undertaken in line with the AI Act and other applicable Union law and national law". The guidelines therefore signal that the use of social scoring in worker evaluations is not de facto prohibited, while cautioning that it could fall foul of the AI Act if all elements of the prohibition were met.

Real-time biometric identification – permitted

Finally, the AI Act prohibits real-time remote biometric identification only in the context of law enforcement (Article 5(1)(h)), implicitly accepting the lawfulness of its use for other purposes. Such systems can therefore potentially be lawfully introduced and used by an employer to surveil workers under the AI Act, even as they might be subject to restrictions under the General Data Protection Regulation or other laws.

Limited protections from high-risk systems

The bulk of the AI Act is dedicated to regulating the development and deployment of high-risk AI systems, which are overall permitted but subject to safeguards, ranging from general notice requirements to the availability of effective remedies. 

An AI system can be considered high-risk under the Act if it is listed in Annex III of the Act. This includes systems deployed in employment and self-employment, in particular i) recruitment and selection, ii) promotions and termination, iii) allocation of tasks and monitoring and iv) evaluation of performance (Annex III 4(a)).

As we have commented numerous times, one of the key shortcomings of the Act is that it allows the possibility for an AI system deployed in any of the settings described in Annex III – including those set out above – to escape the high-risk classification if it is considered that a given system does not pose a significant risk of harm to the health, safety or fundamental rights of individuals (Article 6(3)). If a system is not recognised as being high-risk by a provider, most of the AI Act obligations are inapplicable – including those pertaining to deployers. Nevertheless, providers deeming an AI system not to be high-risk despite being covered by Annex III are asked to document this assessment (Article 6(4)), and register their system in a publicly available database (Article 49(2)). The AI Act further requires deployers who are public authorities not to use a high-risk AI system if it has not been listed by a provider in the publicly available database, creating an additional safeguard for their employees (Article 26(8)), but no similar restriction operates for private sector employees.

The high-risk classification is essential for key fundamental rights protections to kick in. High-risk systems are subject to risk management obligations, which include the identification of risks that the high-risk AI system can pose to health, safety or fundamental rights, transparency obligations towards deployers, and guarantees relative to human oversight, among others. 

Deployers of a high-risk AI system – which includes employers – specifically have several key obligations enabling the transparency and accountability of the use of AI systems in the workplace. These obligations vary based on the identity of the deployer.

Obligations applying to all deployers

The AI Act imposes general obligations on deployers, including ensuring some level of human oversight and monitoring the functioning of an AI system. 

Where the workplace is concerned, the AI Act creates a concrete notice obligation, requiring deployers of high-risk AI systems to inform workers' representatives and affected workers that they will be subject to an AI system prior to putting such a system in place (Article 26(7)). The recitals leave the door open to go beyond mere notice requirements, noting that the Act is without prejudice to worker consultation procedures laid down in EU law (Recital 92) – however, existing laws cover consultation procedures in a patchwork manner. The Workers' Safety and Health Directive requires consultation with workers and/or their representatives on the planning and introduction of new technologies, specifically regarding the consequences of the choice of equipment, the working conditions and the working environment for the safety and health of workers (Article 6(3)(c)). The Directive on informing and consulting employees obliges employers beyond a given size to consult with their employees on decisions likely to lead to substantial changes in work organisation, while leaving the regulation of the practical arrangements to the Member States (Article 4(2)(c)). Consequently, this Directive has the potential to cover a wider scope of AI systems with implications for workers' rights, beyond their safety and health. Nevertheless, it is unclear whether the introduction of AI would fall within Member States' definition of "substantial changes".

The consultation obligation set out in Directive 2002/14/EC has been interpreted by the recently adopted Platform Work Directive to include "decisions likely to lead to the introduction of or to substantial changes in the use of automated monitoring systems or automated decision-making systems" (Article 13(2)). This Directive also regulates in detail the information digital labour platforms need to provide to platform workers, their representatives and national competent authorities in the context of automated monitoring and decision-making systems (Article 9). It is, however, important to keep in mind that this Directive only applies to work organised through a digital labour platform (Article 2(1)(a) and (b)). This includes work performed completely online, including professional tasks such as software development or translation services, or in a hybrid manner combining online communication with a real-world activity, for instance the provision of transportation services or food delivery (see Recital 5). It therefore remains to be seen to what extent the obligation to consult under Directive 2002/14/EC also applies to regular workplaces.

From a rights perspective, consultations are only the starting point – how they are conducted, and the extent to which the results are taken on board, are crucial to ensuring their effectiveness. The AI Act leaves open the possibility of more favourable legislation for workers at Union or Member State level (Article 2(11)). Consequently, for instance, whether workers or their representatives have a veto over the introduction of AI systems depends on the national law and collective agreements in place.

Obligations applying to deployers who are public authorities or perform public services

The AI Act creates additional obligations for deployers who are public authorities, which are held to a higher standard. As already explored above, public authorities cannot deploy a high-risk AI system that has not been previously identified and registered as such by a provider in a public database. Further, the Act requires public authorities to conduct a fundamental rights impact assessment (FRIA) prior to the deployment of an AI system identified as high-risk in Annex III (Article 27), and to register any high-risk AI system being used in a publicly available database (Article 26(8)). While these obligations are crucial in ensuring the transparency and accountability of the use of an AI system in the workplace, there are important nuances to be taken into account.

The obligation to conduct a FRIA applies not only to entities governed by public law, but also – crucially – to private entities performing public services, which the AI Act considers to cover entities providing services "linked to tasks in the public interest", such as in the areas of education, healthcare, social services, housing, and the administration of justice (Recital 96). The list provided is non-exhaustive, opening up the possibility for entities performing other functions to be covered. FRIAs are a unique feature and perhaps the most positive aspect of the AI Act. Unfortunately, however, this obligation only applies in the narrow circumstances identified above, meaning that the majority of private employers are not required to assess the impact of the AI system's use on the fundamental rights of their employees before deployment. Once a FRIA is conducted, there is no obligation on the employer to disclose its full results beyond notifying the national regulator of the outcome, limiting the potential for employee awareness and oversight.

Beyond conducting a FRIA, the AI Act requires public sector deployers or any entity acting on their behalf to register any high-risk AI systems used in a public database, providing basic information on the AI system in an accessible manner (Article 71), and specifically including a summary of the fundamental rights impact assessment and data protection impact assessment (Annex VIII Section C). On this basis, workers could expect to see a brief summary of any anticipated fundamental rights impacts, as well as any mitigations undertaken by their employer.

Remedies, enforcement and governance

As explained in a previous blog post, the AI Act contains only a limited number of remedies, which are solely available for individuals having been subjected to a high-risk AI system within the meaning of Annex III. These remedies consist of the right to an explanation for a decision taken based on the output of a high-risk AI system, as well as the right to lodge a complaint. 

The AI Act gives individuals subject to a decision based on a high-risk system's output the right to a clear and meaningful explanation by the deployer of the system (Article 86), building on the right not to be subjected to automated decision-making (ADM) with legal or similar effects on individuals, laid down in the General Data Protection Regulation (GDPR). The GDPR further requires the data controller to inform individuals about the existence of automated decision-making, the logic involved, as well as the significance and consequences of such processing (Articles 13(2)(f) and 14(2)(g)). Where the GDPR creates a base layer of protection shielding individuals from the serious consequences of automation, the AI Act introduces an additional dimension of protection by entitling individuals to information about consequential decisions taken not solely through automated means, but nonetheless relying on AI support.

The right to a clear and meaningful explanation can be a useful tool for employees to open up the “black box” of an algorithmic management or decision-making system and understand its logic, potentially enabling them to assess whether they have been adversely affected. However, the Act is not clear whether the explanation is to be provided proactively or whether individuals are entitled to receive it only upon request. In the latter case, the burden would be on employees to remain alert to any decisions likely taken with the support of AI systems. Further, as most employers will probably struggle to fully comprehend the logic of the AI system themselves, such explanations may be inaccurate or incomplete and will therefore not always contribute to a better understanding of the situation. Lastly, the explanation – if meaningfully given – is no guarantee of corrective action, which will have to be sought outside of the scope of the AI Act. 

The AI Act creates the right for any individual to lodge a complaint before a national market surveillance authority if they consider any part of the AI Act has been infringed, regardless of whether they have been personally affected or not (Article 85).

For example, an employee could bring a complaint if:

  • They did not receive an explanation for a decision taken based on the output of a performance-monitoring AI system at work;
  • Their public sector employer deployed a high-risk AI system at the workplace without disclosing it in the public database of AI systems; or
  • Their private sector employer failed to give prior notice to the workforce about a high-risk AI system being rolled out at work.

As we have previously analysed, the right to lodge a complaint is limited, as it does not include an obligation for a national authority to investigate or to respond. Nevertheless, it is an additional regulatory avenue for individuals suspecting a violation of the AI Act.

The AI Act creates several oversight mechanisms to bring sector-specific expertise into its enforcement. Notably, it provides for the designation of fundamental rights authorities at national level, who may request and access documentation created in observance of the obligations of the AI Act, in accessible language and format, in order to exercise their mandate (Article 77(1)). In some Member States, those authorities include institutions active in the context of workers' rights and labour law, such as labour inspectorates or occupational health and safety institutions. These authorities can therefore ask for the necessary information on the deployed AI system to facilitate the exercise of their mandate and protect the rights of workers.

Finally, the AI Act establishes an Advisory Forum to provide technical expertise and advice, with a balanced membership drawn from industry, start-ups, SMEs, civil society and academia. While social partners are not explicitly represented on it, the Forum could provide an important platform for stakeholders to specifically bring in the perspectives of workers and their rights.

Conclusion

In conclusion, while the AI Act’s minimum harmonisation approach in the context of employment is a positive step, allowing more favourable laws to apply, the regulation itself has only limited potential to protect workers’ rights – with its main contributions being the restriction of the use of emotion recognition in the workplace, creation of notice obligations and explanation mechanisms. In particular, the obligations of employers deploying high-risk systems come with significant loopholes and flaws. Likewise, workers and their representatives have limited remedies available in the case of AI-induced harm. Potential secondary legislation could strengthen workers’ rights to be meaningfully consulted before the introduction of algorithmic management and decision-making tools. It should furthermore require all employers to consider the fundamental rights impact of those systems and ensure their transparency and explainability to workers and their representatives.

As the AI Act is gradually implemented, important aspects to monitor are the use of notice and  – where applicable under existing EU or national law – consultation mechanisms at the worker level, as well as the interpretation and operationalisation of the right to obtain an explanation. Another crucial area of inquiry will be the extent to which private entities can be considered to be providing public services on a case-by-case basis. It is therefore vital that CSOs and workers’ rights organisations are meaningfully engaged in the AI Act’s implementation and enforcement processes.

Read the PDF version.

Joint Civil Society Open Letter on the Withdrawal of the AI Liability Directive (7 April 2025)
https://cdt.org/insights/joint-civil-society-open-letter-on-the-withdrawal-of-the-ai-liability-directive/

On 7 April 2025, CDT Europe joined a coalition of civil society organisations in sending an open letter to Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD). While acknowledging that the proposal has room for improvement, we stress the urgent need to immediately begin preparatory work on a new, robust liability framework. 

In its original proposal, the European Commission stressed that "safety and liability are two sides of the same coin". We agree. A liability framework is essential to ensure that individuals harmed by AI systems—particularly consumers and vulnerable citizens—can effectively seek compensation without facing insurmountable legal barriers, thereby increasing public trust in AI-powered products.

The letter highlights the critical importance of non-fault based liability, pointing out that it is often impossible for affected individuals to prove that a specific AI system caused harm. In addition, it underscores the insufficiency of the revised Product Liability Directive, which fails to address key gaps such as deployer accountability and harms like discrimination. 

The withdrawal of the AILD risks creating a fragmented legal landscape across EU Member States, undermining both consumer protection and legal certainty for businesses. We call for harmonised, ambitious EU-wide AI liability rules to build trust in AI technologies, promote fairness, and foster sustainable innovation.

Read the full letter.

Joint Civil Society Letter Urging the EU Institutions to Protect Fundamental Rights in the Code of Practice for General Purpose AI Final Draft (28 March 2025)
https://cdt.org/insights/joint-civil-society-letter-urging-the-eu-institutions-to-protect-fundamental-rights-in-the-code-of-practice-for-general-purpose-ai-final-draft/

A broad coalition of civil society organisations, including CDT Europe, has raised alarm over the latest draft of the EU’s Code of Practice for General Purpose AI (GPAI). In a joint letter addressed to Executive Vice-President Virkkunen, the group expresses deep concern that crucial protections for fundamental rights have been downgraded to mere voluntary suggestions. These changes risk undermining the AI Act’s intended framework and eroding accountability for upstream model providers.

The coalition warns that the third draft radically weakens the Code’s approach to systemic risks by shifting responsibility away from model developers and making critical risk categories optional. This not only contradicts the AI Act’s principles but also stands at odds with international consensus on AI safety. With privacy and democratic freedoms at stake, the letter calls for urgent revisions to ensure the Code aligns with the AI Act’s purpose: upholding fundamental rights and ensuring robust protections across the AI lifecycle.

Read the full letter.

Third Draft of the General-Purpose AI Code of Practice Misses the Mark on Fundamental Rights (18 March 2025)
https://cdt.org/insights/third-draft-of-the-general-purpose-ai-code-of-practice-misses-the-mark-on-fundamental-rights/

The third draft of the General-Purpose AI Code of Practice was published last week. 

As we expressed in our recent statement, CDT Europe is disappointed about the recent changes made in the third draft Code of Practice to the systemic risk taxonomy. 

Under the AI Act, two sets of obligations apply: one to all general-purpose AI (GPAI) models, and an additional set to those that pose systemic risks. Models will be understood to pose a systemic risk either on a case-by-case basis by reference to predetermined criteria, or will be presumed to do so where they exceed a specified benchmark in terms of training compute. If a GPAI model poses systemic risks on either basis, additional risk assessment and mitigation obligations apply to the GPAI model's providers. However, the Act does not itself specify the risks that providers should assess and mitigate for – instead, it leaves this important task to the Code of Practice, which sets out to define these precise risks in its systemic risk taxonomy.
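For readers who want to see the two routes to systemic-risk status laid out explicitly, below is a minimal illustrative sketch. It assumes the training-compute presumption threshold of 10^25 floating-point operations set out in Article 51(2) of the AI Act, and collapses the case-by-case criteria into a single designation flag; it is a simplification for illustration, not a description of how the AI Office applies the Act.

```python
# Illustrative sketch only: a simplified reading of the AI Act's two routes to
# "systemic risk" status for a general-purpose AI (GPAI) model.
# Assumptions (flagged in the text above): the 1e25 FLOP presumption threshold
# from Article 51(2); the case-by-case criteria reduced to one boolean flag.

TRAINING_COMPUTE_THRESHOLD_FLOP = 1e25  # presumption threshold under Article 51(2)


def poses_systemic_risk(training_compute_flop: float, designated_case_by_case: bool) -> bool:
    """Return True if a GPAI model would be treated as posing systemic risk.

    Route 1: presumption based on cumulative training compute.
    Route 2: case-by-case designation against predetermined criteria
             (collapsed to a single flag for illustration).
    """
    return training_compute_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP or designated_case_by_case


# A model above the compute threshold attracts the additional risk assessment
# and mitigation obligations even without an individual designation, and vice versa.
print(poses_systemic_risk(3e25, designated_case_by_case=False))  # True
print(poses_systemic_risk(5e23, designated_case_by_case=True))   # True
print(poses_systemic_risk(5e23, designated_case_by_case=False))  # False
```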

Inclusion of fundamental rights risks in the Code of Practice's systemic risk taxonomy is crucial to compel GPAI model providers to assess and mitigate risks to fundamental rights that their models may pose. Since its first draft, the Code has followed a two-tiered approach to systemic risks, operating a list of "selected systemic risks" – which providers must assess – in Appendix 1.1 and a list of optional risks "for potential consideration" in Appendix 1.2. Most fundamental rights risks are included in the optional list under Appendix 1.2, and the novelty of the third draft is that the risk of illegal, large-scale discrimination has been added to this list.

This analysis specifically considers the implications of the systemic risk taxonomy as currently scoped, and addresses some of the arguments raised in the text of the Code to justify the latest version of the taxonomy.

Read CDT Europe’s full analysis.

CDT Europe Statement on the Third General Purpose AI Code of Practice Draft (12 March 2025)
https://cdt.org/insights/cdt-europe-statement-on-the-third-general-purpose-ai-code-of-practice-draft/

Yesterday, the European AI Office unveiled the third draft of the Code of Practice on general purpose AI (GPAI) models. The Code, due to be finalised in May, will play a complementary role to the AI Act by setting out key commitments and measures for GPAI model providers to follow in order to comply with their corresponding obligations under the Act. The Centre for Democracy and Technology Europe (CDT Europe) regrets that this final draft, which is to be put to multi-stakeholder consultation, all but removes fundamental rights from the scope of mandatory risk assessments. 

One of the core elements underpinning the Code of Practice is the systemic risk taxonomy, which outlines the specific risks that GPAI model providers must proactively assess and mitigate. Alongside many others, CDT Europe stressed repeatedly that the taxonomy could be improved to robustly reflect known risks arising from GPAI models, including discrimination, privacy risks, and the prevalence of child sexual abuse material and non-consensual intimate imagery. Despite extensive advocacy, all of these fundamental rights risks have been confined to a subsidiary list of risks optional for GPAI models to consider, with the main risk taxonomy almost entirely focussing on existential risks, such as loss of control and chemical, biological, radiological and nuclear risks.

“The removal of discrimination from the selected systemic risk list is a significant regression in the drafting process, and an alarming step backwards for the protection of fundamental rights. We emphasised in each round of feedback the importance of preserving and strengthening the discrimination risk, as well as including privacy risks, child sexual abuse material and non-consensual intimate imagery in the list,” said Laura Lazaro Cabrera, CDT Europe’s Counsel and Director of the Equity and Data Programme.

“Instead, the third draft confirms what many of us had feared – that consideration and mitigation of the most serious fundamental rights risks would remain optional for general-purpose AI model providers. Fundamental rights are not ‘add-ons’. They are a cornerstone of the European approach to AI regulation.”

CDT Europe further notes with concern that the third Code of Practice draft actively dissuades providers from assessing optional fundamental rights risks, by instructing them to consider these risks where they are reasonably foreseeable, and to “select” them for further assessment only if they are “specific to the high impact capabilities” of GPAI models with systemic risk. Through these changes, the Code has removed all incentives for providers to account for risks to fundamental rights, leaving it to industry to decide to what extent they assess those risks, if at all. 

“It is not too late for the drafters to course-correct. But this draft is the closest to the final product – and foreshadows a significant erosion of fundamental rights in the AI landscape,” commented Lazaro Cabrera.

CDT Europe’s AI Bulletin: February 2025 (28 February 2025)
https://cdt.org/insights/cdt-europes-ai-bulletin-february-2025/

Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, you can sign up here.

EU Mantra at the French AI Summit: Innovation and Deregulation

The third global summit on AI — and the first since the Artificial Intelligence Act entered into force — presented an ideal opportunity for the European Union to promote its hard-fought regulatory framework before a global audience. 

Disappointingly, the opposite happened. The European Commission's statements at the Summit emphasised innovation and deregulation, as opposed to robust implementation and enforcement of the AI Act. Introducing a panel tackling the Code of Practice process, Commissioner Henna Virkkunen — responsible for tech policy — promised an “innovation-friendly” implementation of the AI Act. These remarks were consistent with European Commission President Ursula von der Leyen's later speech to the AI Summit, which emphasised innovation in making the case for European leadership in the global AI race. The AI Act was only mentioned in passing as von der Leyen enumerated the EU's strengths enabling AI development, while promising to cut red tape for companies.

In that speech, von der Leyen also announced €200 billion for AI investment, including €20 billion for AI gigafactories that would provide the infrastructure for training large AI models. 

The French AI summit statements have been followed by further indications of the EU bloc’s shifting approach to AI. This week, von der Leyen announced plans to boost defence spending for targeted European capability areas, including military uses of AI. 

First AI Act Implementation Guidelines Published

Early February saw two separate AI Act implementation milestones: the publication of the guidelines outlining prohibited AI practices, and guidelines defining AI systems under the AI Act. 

The prohibited AI practices guidelines build on and further interpret the prohibitions set out in Article 5 of the AI Act. The guidelines provide several examples of practices likely to fall within the Article 5 prohibitions, as well as practices falling outside of their scope. The guidelines overall apply a robust interpretation of the AI Act's prohibitions, and clarify the interplay between the AI Act and existing legal frameworks such as the General Data Protection Regulation, the Law Enforcement Directive, the Unfair Commercial Practices Directive, and the Digital Services Act.

The guidelines defining AI systems clarify which types of AI systems come within the scope of the AI Act. The guidelines exclude four types of systems, notably systems for improving mathematical optimisation, basic data processing systems, systems based on classical heuristics, and simple prediction systems. Early reactions have criticised the AI systems guidelines for their lack of clarity, and raised questions as to the extent of the exclusion. 

While neither set of guidelines is binding, they will likely steer the interpretation of the AI Act by regulators and courts.

AI Liability Directive To Be Withdrawn

On 11 February, hot on the heels of the French AI Summit, the European Commission announced in an annex to its 2025 work programme the intended withdrawal of its proposal for an AI Liability Directive (AILD), stating that there was no foreseeable agreement on the file. As we’ve explained, the proposal aimed to address the difficulties individuals face in making liability claims for AI-induced harms. Its withdrawal inevitably delays development of robust avenues enabling the right to an effective remedy. 

The largely unexpected withdrawal was announced just as European Parliament discussions on the file resumed, and after the file’s rapporteur launched a public consultation to collect multistakeholder input. Even more glaringly, Commissioner Michael McGrath defended the AI Liability Directive to MEPs the same day the withdrawal was announced, raising questions as to the level of internal communication on the merits and viability of the file.  

The withdrawal is not yet final: under the interinstitutional agreement for better law-making, co-legislators could still ask the Commission to revisit its decision or reissue a proposal. As a first step in this direction, lawmakers have invited Commissioner Virkkunen to explain the withdrawal before Parliament. Further, as the withdrawal notice stated, the Commission will also assess whether another proposal should be tabled or another type of approach should be chosen. 

Third Code of Practice Draft Delayed

The publication of the third draft of the Code of Practice, set to take place last week, was delayed following the AI Office's announcement that it had approved the drafters' request for additional time to ensure that the third draft — the final draft to be put to public consultation — reflected stakeholder groups' comprehensive feedback. The AI Office is yet to announce a potential publication week for the third draft, but the CoP process's official timeline suggests that it could be published as late as March. Despite this timeline shift, the deadline for finalising and publishing the Code of Practice is still 2 May 2025 — a deadline set by the AI Act itself.

Postponement of the draft’s publication comes as industry players such as Meta and Google expressed discontent with the draft Code of Practice, and civil society organisations threatened to withdraw from the process altogether. 

In Other ‘AI & EU’ News 

  • The AI Act’s impact on businesses will be assessed as part of the European Commission’s simplification agenda. Among other simplification proposals, the European Commission will undertake a broader assessment of the “digital acquis” — which includes several laws beyond the AI Act, including the General Data Protection Regulation — to establish whether it “adequately reflects the needs and constraints of businesses” such as small and medium enterprises, and others. 
  • The European Parliament Research Service published a short brief on the interplay between the AI Act and the GDPR on the subject of algorithmic discrimination, noting the possible legal bases that could be leveraged under GDPR to enable the processing of sensitive data for the purposes of detecting and addressing discrimination — itself an objective under the AI Act.
  • The Italian data protection authority, the Garante, ordered DeepSeek to block its chatbot in the country two days after requesting information from the relevant companies, which reportedly argued that they had no obligation to provide information to the Garante as they were not subject to its jurisdiction. The Garante cited the “entirely unsatisfactory” response from the companies in the order requiring DeepSeek to block the chatbot, noting that it would open a formal investigation. Several other data protection authorities are already investigating DeepSeek.
  • The French data protection authority, the CNIL, published new guidance on how the GDPR applies to AI models, focusing on the exercise of data subject rights.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  

Press Release: Withdrawal of the AI Liability Directive Proposal Raises Concerns Over Justice for AI Victims (12 February 2025)
https://cdt.org/insights/press-release-withdrawal-of-the-ai-liability-directive-proposal-raises-concerns-over-justice-for-ai-victims/

(BRUSSELS) Yesterday, the European Commission announced the withdrawal of its proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). The proposal aimed to address the difficulty individuals face when having to identify the liable entity and proving the requirements for a successful liability claim in the context of an opaque AI system. This decision is the latest development in an ongoing drive by the European Commission to cut “red tape” for the private sector in a misguided effort to live up to its ambition to prioritise competitiveness and innovation.

The Centre for Democracy & Technology Europe (CDT Europe) is deeply disappointed by this development, which represents a significant setback for the protection of the right of victims of AI-induced harms to an effective remedy.

“The AI Liability Directive was set to put forward a framework to ease the burden on individuals to pursue justice when wronged by an AI system. Its withdrawal is a departure from European values of transparency and accountability as well as fundamental rights, sending an alarming message that even the most basic procedural safeguards are fair game in the rush to embrace innovation”, said Laura Lazaro Cabrera, CDT Europe’s Counsel and Director of the Equity and Data Programme.

“Harms caused by AI systems and models are notoriously difficult to prove, owing to their complexity and lack of transparency. This leaves individuals with limited avenues to seek redress when they suffer harms induced by AI”, she further explained. 

While CDT Europe acknowledges the limits of the proposal in its current form, as well as its significant potential for improvement, we nevertheless stress that it is vital to have rules in place that tackle the specific barriers individuals face in the context of AI-induced harms. This is especially important in light of the limited remedies available to individuals under the AI Act, which creates a complaints mechanism but imposes no obligation on the relevant authorities to follow up.

We were encouraged by recent signs that the European Parliament intended to work on the file, following a report by the European Parliamentary Research Service recommending that the proposal be taken forward. It is both disconcerting and troubling to see the proposal withdrawn at a time when discussions in the Parliament had restarted, and before the conclusion of a public consultation on the text launched by the file’s rapporteur. CDT Europe will continue to advocate for the preservation of fundamental rights, including effective redress, in connection with harms caused by AI.

CDT Europe’s AI Bulletin: January 2025 https://cdt.org/insights/cdt-europes-ai-bulletin-january-2025/ Thu, 30 Jan 2025 07:30:00 +0000

Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, you can sign up here.

Second Draft of General-Purpose AI Code of Practice Published

In December, the European Commission published the second draft of the General-Purpose AI Code of Practice (CoP) and opened the document for feedback. This draft substantially elaborates on the first, notably adding substantive detail to the documentation sections and guidance for assigning a risk level to an AI system within the taxonomy of systemic risks. That taxonomy remains largely unchanged, and still excludes some well-evidenced risks, such as risks to privacy and the prevalence of non-consensual intimate imagery and child sexual abuse material. Some of those risks are set to be discussed in a closed workshop, open to a subset of CoP participants, on 30 January.

Other aspects of the CoP have sparked concern among organisations representing rightsholders. A joint letter, led by the European Publishers Council, calls for critical revisions to the draft, arguing that it risks eroding EU copyright standards by failing to require strict compliance with existing EU law.

You can read our comments on how the second draft’s systemic risk taxonomy approaches fundamental rights, as well as our comments on the first draft, on our website.

According to the public timeline for the Code of Practice process, the next draft is expected to be released and simultaneously made available for comments to CoP participants on 17 February. 

Template for Training Data Transparency Opened for Targeted Feedback

In a Code of Practice working group meeting, the European AI Office outlined the template for providing a detailed summary of data used to train a GPAI model, a transparency measure mandated under the AI Act. While the AI Office’s presentation is now public, only CoP participants will be able to provide feedback, despite the fact that the AI Act’s text does not suggest that the transparency template should be prepared within the CoP process.

Code of Practice participants seeking to provide feedback will have to do so by 31 January 2025. 

Guidelines Forthcoming on Definition of AI Systems and Prohibited Practices

Guidelines defining an AI system and prohibited practices under the AI Act are set to be published by the end of this week, ahead of Chapters I and II of the AI Act becoming applicable on 2 February 2025. The guidelines, which the AI Act mandated the European Commission to develop, were preceded by a December 2024 consultation. In a departure from established practice, the consultation did not include a version of the draft guidelines for respondents to comment on, but instead consisted of a questionnaire.

Civil society organisations stressed in a joint statement that the definition of AI systems should remain broad and flexible, so the AI Act’s scope would not exclude harmful AI systems deemed “simple”, flagging the ease with which developers could avoid liability for AI-fuelled harms.

The guidelines will have to answer key questions about the Act’s prohibitions, including what level of human assessment is required to overcome the ban on criminal profiling, which types of offences the bans will cover, what types of scoring the social scoring ban will capture, and the level of targeting required for the web scraping ban to apply. On the whole, respondents to the consultation argued for a narrow interpretation of the exceptions and loopholes included in the Act.

You can read our response to the consultation here.

Timeline Revealed for the AI Liability Directive

A timeline for the AI Liability Directive (AILD) – proposed rules for non-contractual civil liability in connection with artificial intelligence systems – has been formally announced by the office of the file’s rapporteur, MEP Axel Voss. According to the timeline shared, the first of three open consultations on the file will launch on 3 February 2025 and remain open for six weeks. The draft Directive is scheduled for a vote in the European Parliament’s plenary by January 2026.

AILD negotiations were put on hold during the final stages of the AI Act negotiations. Following the publication of an assessment of the AILD’s impact by the European Parliamentary Research Service in September 2024, Member States in the Council of the European Union resumed conversations on the AILD, steered by the Hungarian presidency as part of its programme. In our blog post on the AILD, we discuss the impact assessment and the AILD’s key provisions.

In Other ‘AI & EU’ News 

  • The European Data Protection Board published two papers intended to help data protection authorities reflect on questions around evaluating bias in AI tools and effectively implementing data subject rights in connection with AI models. The papers consider sources of bias and methodologies for detecting it, as well as the practical implementation of data subjects’ rights to erasure and rectification as enshrined in the General Data Protection Regulation (GDPR).
  • The European Data Protection Board published its opinion on AI models and the GDPR in December, asserting that the GDPR allows AI models to process data lawfully. The opinion, issued at the request of the Irish data protection authority, held that models trained on personal data could be considered anonymous for GDPR purposes if the likelihood of extracting or obtaining personal data using reasonable means was insignificant.
  • This week, the Italian data protection authority, the Garante, asked the companies behind the DeepSeek chatbot to provide information on the personal data the chatbot collects. The request follows the Garante’s 15 million euro fine against OpenAI in December 2024 over GDPR breaches by ChatGPT. Among the breaches the Garante identified were OpenAI’s failure to identify an appropriate legal basis for processing the personal data used to train ChatGPT, and its failure to provide adequate transparency to users.
  • The European Commission will host an open, livestreamed webinar on 20 February on the subject of AI literacy – itself an obligation under the AI Act – in the context of the Commission’s AI Pact efforts.   
  • The European Commission revealed the seven locations of the forthcoming European AI factories. The factories, an initiative spearheaded by the Commission to bring the EU closer to its goal of becoming the “AI continent”, will deploy AI-optimised supercomputers and upgrade existing systems. 

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  

CDT Europe’s Second Contribution to the Code of Practice Process on GPAI Models https://cdt.org/insights/cdt-europes-second-contribution-to-the-code-of-practice-process-on-gpai-models/ Mon, 27 Jan 2025 15:33:46 +0000

The Centre for Democracy & Technology Europe has provided feedback on the second draft of the Code of Practice for General-Purpose AI (GPAI) Models, prepared by the AI Office. This round of feedback, submitted by CDT Europe through a closed survey as a participant in the Code of Practice process, follows our first set of comments and responds to the second of at least three drafts to be produced in the coming months. The final version of the Code is expected to be announced in May 2025.

In this round of feedback, we focussed our comments on the systemic risk taxonomy offered by the draft code. We stressed the following points: 

  • The addition of several new considerations underlying the identification of systemic risks can easily cause confusion, not least because they depart from the systemic risk definition set in the AI Act and may be read as an exhaustive list of considerations.  The draft should state that the listed elements are merely indicative and that a risk can be considered systemic within the meaning of the AI Act for reasons not listed, and even if it does not satisfy the listed considerations. 
  • The scoping of the risk of “large-scale” and “illegal” discrimination is unduly narrow. The notion of “large-scale” discrimination runs counter to the rationale underlying anti-discrimination law, which seeks to protect minority groups, and the focus on “illegal” discrimination fails to capture the full breadth of characteristics that lead to actual discrimination.
  • The “large-scale, harmful manipulation” systemic risk continues to be broadly scoped and raises significant freedom of expression concerns. For instance, the example provided of “coordinated and sophisticated manipulation campaigns leading to harmful distortions” could be interpreted broadly and legitimise censorship. Every political or advertising campaign is an attempt to persuade; yet the risk as drafted gives developers undue latitude to decide what constitutes manipulation or a harmful distortion, enabling unwarranted restrictions on the right to freedom of expression.
  • Privacy and data protection risks should be included in the mandatory “selected systemic risks” category rather than listed in the optional “additional risks” category. Privacy and data protection risks appear in the risk taxonomies of multiple global AI governance instruments, and their regulatory relevance in the AI model context was underscored in the European Data Protection Board’s opinion on AI models.
