EU Tech Policy Brief: May 2025 (https://cdt.org/insights/eu-tech-policy-brief-may-2025/, 7 May 2025)

The post EU Tech Policy Brief: May 2025 appeared first on Center for Democracy and Technology.

Welcome back to the Centre for Democracy & Technology Europe’s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on their impact on digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Building Global Spyware Standards with the Pall Mall Process

As international attention focuses on misuses of commercial spyware, the Pall Mall Process continues to gather momentum. This joint initiative, led by France and the United Kingdom, seeks to establish international guiding principles for the development, sale, and use of commercial cyber intrusion capabilities (CCICs). 

At the Process’s second conference in Paris earlier this month, Programme Director Silvia Lorenzo Perez joined global stakeholders as the process concluded with the adoption of a Pall Mall Code of Practice for States. The Code has been endorsed by 25 countries to date, including 18 EU Member States. It sets out commitments for state action regarding the development, facilitation, acquisition, and deployment of CCICs. It also outlines good practices and regulatory recommendations to promote responsible state conduct in the use of CCICs. 

Pall Mall Process annual event in Paris.

CDT Europe will soon publish a comprehensive assessment of the official document to provide deeper insights into its implications. In parallel, and as part of our ongoing work to advance spyware regulation within the EU, CDT Europe is leading preparation of the sixth edition of the civil society roundtable series, “Lifting the Veil – Advancing Spyware Regulation in the EU,” on 13 May. Stakeholders will discuss what meaningful action should look like in the EU, following the political commitments made by the Member States that endorsed the Pall Mall Code of Practice.

CSOs Urge Swedish Parliament to Reject Legislation Undermining Encryption

CDT Europe joined a coalition of civil society organisations, including members of the Global Encryption Coalition, in an open letter urging the Swedish Parliament to reject proposed legislation that would weaken encryption. If enacted, the legislation would greatly undermine the security and privacy of Swedish citizens, companies, and institutions. Though intended to combat serious crime, its dangerous approach would instead create vulnerabilities that criminals and other malicious actors could readily exploit, leaving Sweden’s citizens and institutions less safe than before. The proposed legislation would particularly harm those who rely on encryption the most, including journalists, activists, survivors of domestic violence, and marginalised communities. Human rights organisations have consistently highlighted encryption’s critical role in safeguarding privacy and free expression. Weakening encryption would also pose a national security threat, as even the Swedish Armed Forces rely on encrypted tools like Signal for secure communication.
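The technical core of this objection can be illustrated with a toy model (a one-time pad built on Python’s standard library; the scenario and names are illustrative, not drawn from the Swedish proposal): any "exceptional access" mechanism amounts to a second copy of the decryption key, and whoever obtains that copy can read every message.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: the same function encrypts and decrypts,
    because XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet the source at 19:00"
key = secrets.token_bytes(len(message))   # known only to sender and recipient

ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message  # the intended recipient can read it

# A mandated access scheme is, in effect, an extra copy of the key:
escrowed_key = key
# Anyone who breaches the escrow -- not only the authority it was built for --
# recovers the plaintext exactly as easily as the recipient does:
assert xor_cipher(ciphertext, escrowed_key) == message
```

The asymmetry the letter points to falls out of the model: the escrow adds nothing for legitimate users, but creates a single high-value target whose compromise exposes all communications at once.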

Recommended read: Ofcom, Global Titles and Mobile Network Security, Measures to Address Misuse of Global Titles

💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Meets with the ODS Bodies Network

Earlier this month, the DSA Civil Society Coordination Group met with the Out-of-Court Dispute Settlement (ODS) Bodies Network for the first time to explore ways to collaborate. Under Article 21 of the Digital Services Act (DSA), ODS Bodies are to provide independent resolution of disputes between users and online platforms. As these bodies start forming and seeking certification, their role in helping users access redress and offering insights into platform compliance is becoming more important.

The meeting introduced the ODS Network’s mission: to encourage cooperation among certified bodies, promote best practices for data-sharing, and engage with platforms and regulators. Civil society organisations, which often support users who have faced harms on platforms, discussed how they could help identify cases that could be referred to ODS Bodies. In return, records from ODS Bodies could become a valuable resource for tracking systemic risks and holding platforms accountable under the DSA.

The discussion further focused on how to raise user awareness of redress options, make ODS procedures more accessible, and strengthen data reporting practices. Participants also outlined next steps for working more closely together, particularly around identifying the types of data that could best support civil society’s efforts to monitor risks and support enforcement actions by the European Commission.

Asha Allen Joins Euphoria Podcast to Discuss Civil Society in the EU

Civil society is under pressure, and now more than ever, solidarity and resilience are vital. These are the resounding conclusions of the latest episode of the podcast Euphoria, featuring CDT Europe’s Secretary General Asha Allen. Asha joined Arianna and Federico from EU&U to unpack the current state of human rights and the growing threats faced by civil society in Europe and beyond. With key EU legislation like the AI Act and Digital Services Act becoming increasingly politicised, they explored how to defend democracy, safeguard fundamental rights, and shape a digital future that truly serves its citizens. Listen now to discover how cross-movement collaboration and rights-based tech policy can help counter rising authoritarianism.

CDT Europe Secretary General Asha Allen speaking with podcasters Federico Terreni and Arianna Labasin from EU&U at the Euphoria Podcast recording.

Recommended read: FEPS, Silenced, censored, resisting: feminist struggles in the digital age

⚖ Equity and Data

EU AI Act Explainer — AI at Work

In the fourth part of our series on the AI Act and its implications for human rights, we examine the deployment of AI systems in the workplace and the AI Act’s specific obligations aimed at ensuring the protection of workers. In particular, we assess which of the prohibited AI practices could become relevant for the workplace and where potential loopholes and gaps lie. We also focus on the obligations of providers and deployers of high-risk AI systems, which could strengthen protection of workers against harms caused by automated monitoring and decision-making systems. Finally, we examine to what extent the remedies and enforcement mechanisms foreseen by the AI Act can be a useful tool for workers and their representatives to claim their rights. Overall, we find that the AI Act’s approach of allowing more favourable legislation in the employment sector to apply is a positive step. Nevertheless, the regulation itself has only limited potential to protect workers’ rights.

CSOs Express Concern with Withdrawal of AI Liability Directive

CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework. 

Programme Director Laura Lazaro Cabrera also participated in a working lunch hosted by The Nine to discuss the latest trends and developments in AI policy following the Paris AI Summit. Among other aspects, Laura tackled the deregulatory approach taken by the European Commission, the importance of countering industry narratives, and the fundamental rights concerns underlying some of the key features of the AI Act.

Equity and Data Programme Director Laura Lazaro Cabrera speaking on a panel at the “Post-Paris AI Summit: Key Trends and Policies” event hosted by The Nine.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 New Team Member!

Marcel Mir Teijeiro, AI Policy Fellow in CDT Europe’s Equity and Data programme.

CDT Europe’s team keeps growing! At the beginning of April, we welcomed Marcel Mir Teijeiro as the Equity and Data programme’s new AI Policy Fellow. He’ll work on the implementation of the AI Act and CDT Europe’s advocacy to protect the right to effective remedy for AI-induced harms. Previously, Marcel participated in the Code of Practice multistakeholder process for General-Purpose AI Models, advising rights-holder groups across the cultural and creative industries on transparency and intellectual property aspects. A Spanish-qualified lawyer, he also helped develop a hash-based technical solution for training dataset disclosure shared with the AI Office, U.S. National Institute of Standards and Technology, and the UK AI Safety Institute. We are excited to have him on board, and look forward to working with him!
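The newsletter doesn’t describe the design Marcel worked on, but the general idea behind hash-based dataset disclosure can be sketched in a few lines (all names and data here are hypothetical): the developer publishes cryptographic digests of training items rather than the items themselves, so a rights-holder can check whether a specific work was used without the full dataset being revealed.

```python
import hashlib

def fingerprint(work: bytes) -> str:
    """Return the SHA-256 hex digest of a work's raw bytes."""
    return hashlib.sha256(work).hexdigest()

# The model developer discloses digests of training items, not the items:
training_corpus = [b"full text of work A ...", b"full text of work B ..."]
disclosed = {fingerprint(item) for item in training_corpus}

# A rights-holder who holds their own copy of a work can test for inclusion
# without the developer revealing the corpus:
my_work = b"full text of work A ..."
assert fingerprint(my_work) in disclosed

other_work = b"full text of work C ..."
assert fingerprint(other_work) not in disclosed
```

A real scheme would need canonicalisation (consistent encoding, whitespace handling, chunking of long works) so that trivially different copies of the same work produce the same digest; this sketch only shows the exact-match case.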

🗞 In the Press

⏫ Upcoming Events

Tech Policy in 2025: Where Does Europe Stand?: On May 15, CDT Europe and Tech Policy Press are co-hosting an evening of drinks and informal discussion, “Tech Policy in 2025: Where Does Europe Stand?”. It will be an opportunity to connect with fellow tech policy enthusiasts, share ideas, and figure out what the future holds for tech regulation in Europe. The event is currently sold out, but you can still join the waitlist in case some spots open up! 

Lifting the Veil – Advancing Spyware Regulation in the EU: CDT Europe, together with the Open Government Partnership, is hosting the sixth edition of the Civil Society Roundtable Series: “Lifting the Veil – Advancing Spyware Regulation in the EU.” The roundtable will gather representatives from EU Member States, EU institutions, and international bodies alongside civil society organisations, technologists, legal scholars, and human rights defenders for an in-depth exchange on the future of spyware regulation. Participation is invitation-only, so if you think you can contribute to the conversation, feel free to reach out at eu@cdt.org.

CPDP.ai 2025: From 21 to 23 May, CDT Europe will participate in the 18th edition of the CPDP.ai International Conference. Each year, CPDP gathers academics, lawyers, practitioners, policymakers, industry, and civil society from all over the world in Brussels, offering them an arena to exchange ideas and discuss the latest emerging issues and trends. This year, CDT Europe will host two workshops on AI and spyware, and our Secretary General Asha Allen will speak on a panel on the intersection of the DSA and online gender-based violence. You can still register to attend the conference.

CDT Europe’s AI Bulletin: April 2025 (https://cdt.org/insights/cdt-europes-ai-bulletin-april-2025/, 29 April 2025)

The post CDT Europe’s AI Bulletin: April 2025 appeared first on Center for Democracy and Technology.

AILD Withdrawal Maintained Despite Concerns from Civil Society and Lawmakers

On 7 April, CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework.  

In a scheduled hearing before the European Parliament’s JURI Committee, Commissioner Virkkunen defended the withdrawal, restating the need to reduce overlapping obligations and ensure simpler compliance with the digital acquis for businesses. Crucially, she suggested fully implementing and enforcing the AI Act before any new legislation would be proposed.

Following the hearing, the Rapporteur of the Directive, Axel Voss, as well as the Rapporteur of the AI Act, Brando Benifei, sent a joint letter to the European Commission expressing their concern over the proposed withdrawal. They recalled that several key proposals of the European Parliament were withdrawn during the AI Act negotiations based on the promise that the AILD would address those concerns. They also noted the persisting gaps for victims of AI-specific harms, and suggested that the Commission include an updated proposal as part of the upcoming Digital Omnibus Package. 

AI Continent Plan Unveiled by the European Commission

The European Commission published the AI Continent Action Plan on 9 April, outlining its strategy to support AI scale-up in the EU through five distinct pillars, including computing infrastructure, data, regulatory simplification, and attracting talent. The most notable proposals are a Data Union Strategy and regulatory simplification measures, both aimed at reducing compliance burdens and removing structural bottlenecks for AI developers and deployers.

The Data Union Strategy, set for release in Q3 2025, is designed to improve access to and use of high-quality and sector-specific data across the EU by improving cross-border data availability, including by reducing the legal and technical conditions for data-sharing. In this regard, the Plan announces a public consultation set to open in May 2025, where stakeholders will be asked to describe current barriers to accessing data and how to simplify compliance with EU data rules.

The Action Plan similarly considers regulatory simplification in connection with the AI Act, announcing as a first step the July 2025 establishment of an AI Act Service Desk to provide practical compliance guidance, interactive tools, and direct support for startups and SMEs. However, in a public consultation launched simultaneously, the European Commission prompts stakeholders to identify regulatory challenges and recommend further measures to facilitate compliance and possible simplification of the AI Act, paving the way for further deregulatory efforts. 

Finally, the plan includes a proposal for a Cloud and AI Development Act, expected by early 2026, to fast-track environmental permits for data centres, enable a common EU cloud services marketplace, and scale the EU’s computing infrastructure, explicitly seeking to triple EU data centre capacity by 2035.

The Commission’s AI Continent Action Plan sets out a roadmap for five consultative processes in total:

  1. A call for evidence for a European Strategy for AI in science, with a submission deadline of 5 June 2025
  2. A call for evidence and public consultation on the Apply AI Strategy, with a submission deadline of 4 June 2025
  3. A public consultation on the Data Union Strategy, expected to open in May 2025
  4. A call for evidence and public consultation on the Cloud and AI Development Act, with a submission deadline of 4 June 2025
  5. A call for interest on AI GigaFactories, with a submission deadline of 20 June 2025

Public Consultation on Guidelines for General-Purpose AI Models Opened

The European Commission opened a public consultation seeking input that will feed into the upcoming guidelines under the AI Act on general-purpose AI (GPAI) models, which are distinct from the ongoing Code of Practice process. These guidelines aim to provide more clarity on various issues, including the definition of GPAI models; the definition of providers along the value chain; the clarification of what ‘placing on the market’ entails; and specifications regarding the exemption for open-source models. They will also provide more detail on the enforcement approach taken by the AI Office.

The guidelines will complement the Code of Practice on GPAI by explaining what signing and adhering to the Code of Practice means for companies. While the Code of Practice addresses GPAI model providers’ obligations, the guidelines clarify to whom and when those obligations apply. According to the consultation, both the guidelines and the final Code of Practice are expected to be published before August 2025. The consultation is open for all interested stakeholders until 22 May. 

In Other ‘AI & EU’ News 

  • The deadline for the final draft of the Code of Practice on general-purpose AI models to be published is 2 May. However, the latest consultation by the European Commission on GPAI models suggests that the publication may take place in either May or June this year.  
  • The Irish Data Protection Commission (DPC) opened an investigation into the Grok AI model developed by xAI. In particular, the DPC will examine whether training the model on publicly-accessible posts by EU users on the platform X is compliant with xAI’s obligations under the General Data Protection Regulation.
  • Following Meta’s announcement that it would train its AI using public content shared by adults on its products in the EU, several data protection authorities — including those from France, Belgium, the Netherlands, and Hamburg — notified EU residents that they can take steps to object to the processing. Users wishing to object will have to do so before 27 May.
  • 30 MEPs warned the European Commission against watering down its definition of open-source AI. The letter’s signatories asked the Commission to clarify that certain models, such as those in Meta’s Llama series, are not considered open-source under the AI Act, given that Meta does not share the training code of its models and prohibits the use of its models to train other AI systems. They therefore asked the Commission to consider developing guidance on the meaning of open-source for the purpose of enforcing the AI Act, taking into account international standards including the Open Source Initiative.
  • Spain’s AI draft bill has come under fire from academics and civil society organisations for a provision that exempts public authorities from administrative fines. Critics argue that the exemption could weaken enforcement of AI safeguards and dilute protection of individual rights. For example, misuse of prohibited technologies, such as real-time remote biometric identification, by public bodies would result only in a warning and cessation of the activity. Civil society is calling for removal of the exemption, as well as introduction of disciplinary measures for officials, including disqualification from public office.
  • The next public webinar in the AI Pact series, which aims to promote knowledge-sharing and provide participants with a better understanding of the AI Act and its implementation, will be held on 27 May. You can find more information, as well as recordings of the past events, here.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  

EU Tech Policy Brief: April 2025 (https://cdt.org/insights/eu-tech-policy-brief-april-2025/, 1 April 2025)

The post EU Tech Policy Brief: April 2025 appeared first on Center for Democracy and Technology.

Welcome back to the Centre for Democracy & Technology Europe’s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on their impact on digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Citizen Lab Unveils Surveillance Abuses in Europe and Beyond

The recent Citizen Lab report regarding deployment of Paragon spyware in EU Member States, particularly in Italy and allegedly in Cyprus and Denmark, highlights a concerning trend of surveillance targeting journalists, government opponents, and human rights defenders. Invasive monitoring of journalist Francesco Cancellato, members of the NGO Mediterranea Saving Humans, and human rights activist Yambio raises serious concerns about press freedom, fundamental rights, and the broader implications for democracy and rule of law in the EU.

The Italian government’s denial that it authorised surveillance, even as reports indicate otherwise, reflects a lack of transparency and accountability. Reportedly, the Undersecretary to the Presidency of the Council of Ministers admitted that Italian intelligence services used Paragon spyware against Mediterranea activists, citing national security justifications. This admission highlights the urgent need for transparent oversight mechanisms and robust legal frameworks to prevent misuse of surveillance technologies.

Graphic for Citizen Lab report, which reads, “Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Options”. Graphic has a yellow background, and a grayscale hand reaching through message bubbles.

Lack of decisive action at the European level in response to these findings is alarming. Efforts to initiate a plenary debate within the European Parliament have stalled due to insufficient political support, reflecting a broader pattern of inaction that threatens civic space and fundamental rights across the EU. This inertia is particularly concerning given parallel developments in France, Germany, and Austria, where legislative measures are being considered to legalise use of surveillance technologies. In light of the European Parliament’s PEGA Committee findings on Pegasus and equivalent spyware, it is imperative that EU institutions and Member States establish clear, rights-respecting policies governing the use of surveillance tools. Normalisation of intrusive surveillance without adequate safeguards poses a direct challenge to democratic principles and the protection of human rights within the EU.

Recommended read: Amnesty International, Serbia: Technical Briefing: Journalists targeted with Pegasus spyware

💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Publishes Analysis on DSA Risk Assessment Reports

Key elements of the Digital Services Act’s (DSA) due diligence obligations for Very Large Online Platforms and Search Engines (VLOPs/VLOSEs) are the provisions on risk assessment and mitigation. Last November, VLOPs and VLOSEs published their first risk assessment reports, which the DSA Civil Society Coordination Group, convened and coordinated by CDT Europe, took the opportunity to jointly assess. We identified both promising practices to adopt and critical gaps to address in order to improve future iterations of these reports and ensure meaningful DSA compliance.

Our analysis zooms in on key topics like online protection of minors, media pluralism, electoral integrity, and online gender-based violence. Importantly, we found that platforms have overwhelmingly concentrated on identifying and mitigating user-generated risks, and have paid correspondingly less attention to risks stemming from the design of their services. In addition, platforms do not provide sufficient metrics and data to assess the effectiveness of the mitigation measures they employ. In our analysis, we describe what data and metrics future reports could reasonably include to achieve more meaningful transparency.

Graphic with a blue background, with logo for the DSA Civil Society Coordination Group featuring members’ logos. In black text, graphic reads, “Initial Analysis on the First Round of Risk Assessments Reports under the EU Digital Services Act”.

CDT Europe’s David Klotsonis, lead author of the analysis, commented, “As the first attempt at DSA Risk Assessments, we didn’t expect perfection — but we did expect substance. Instead, these reports fall short as transparency tools, offering little new data on mitigation effectiveness or meaningful engagement with experts and affected communities. This is a chance for platforms to prove they take user safety seriously. To meet the DSA’s promise, they must provide real transparency and make civil society a key part of the risk assessment process. We are committed to providing constructive feedback and to fostering an ongoing dialogue.”

Recommended read: Tech Policy Press, A New Framework for Understanding Algorithmic Feeds and How to Fix Them 

⚖ Equity and Data

Code of Practice on General-Purpose AI Final Draft Falls Short

Following CDT Europe’s initial reaction to the release of the third Draft Code of Practice on General-Purpose AI (GPAI), we published a full analysis highlighting key concerns. One major issue is the Code’s narrow interpretation of the AI Act, which excludes fundamental rights risks from the list of selected risks that GPAI model providers must assess. Instead, assessing these risks is left as an option, and is only required if such risks are created by a model’s high-impact capabilities.

This approach stands in contrast to the growing international consensus, including the 2025 International AI Safety Report, which acknowledges the fundamental rights risks posed by GPAI. The Code also argues that existing legislation can better address these risks, but we push back on this claim. Laws like the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act lack the necessary tools to fully tackle these challenges.

Moreover, by making it optional to assess fundamental rights risks, the Code weakens some of its more promising provisions, such as requirements for external risk assessments and clear definitions of unacceptable risk tiers. 

In response to these concerns, we joined a coalition of civil society organisations in calling for a revised draft that explicitly includes fundamental rights risks in its risk taxonomy.

Global AI Standards Hub Summit 

At the inaugural Global AI Standards Hub Summit, co-organised by the Alan Turing Institute, CDT Europe’s Laura Lazaro Cabrera spoke at a session exploring the role of fundamental rights in the development of international AI standards. Laura highlighted the importance of integrating sociotechnical expertise and of meaningfully involving civil society actors to strengthen AI standards from a fundamental rights perspective. She emphasised the need to create dedicated spaces for civil society to participate in standards processes, tailored to the diversity of their contributions and to their resource limitations.

Image featuring Programme Director for Equity and Data Laura Lazaro Cabrera speaking on a panel with three other panelists on the role of fundamental rights in standardisation, at the Global AI Standards Hub Summit.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 Job Opportunities in Brussels: Join Our EU Team

We’re looking for two motivated individuals to join our Brussels office and support our mission to promote human rights in the digital age. 

The Operations & Finance Officer will play a key role in keeping our EU office running smoothly—managing budgets, coordinating logistics, and ensuring strong operational foundations for our advocacy work. 

We’re also seeking an EU Advocacy Intern to support our policy and advocacy efforts, offering hands-on experience in research, event planning, and stakeholder engagement.

Apply before 23 April 2025 by sending your cover letter and CV to hr@cdt.org. For more information, visit our website.

🗞 In the Press

⏫ Upcoming Event

Pall Mall Process Conference: On 3 and 4 April, our Director for Security and Surveillance Silvia Lorenzo Perez will participate in the annual Pall Mall Process Conference in Paris. 

CDT Europe’s AI Bulletin: March 2025 (https://cdt.org/insights/cdt-europes-ai-bulletin-march-2025/, 26 March 2025)

The post CDT Europe’s AI Bulletin: March 2025 appeared first on Center for Democracy and Technology.

Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, you can sign up here.

Third GPAI Code of Practice Draft Excludes Discrimination 

The third version of the General-Purpose AI (GPAI) Code of Practice – and the last to be put to multistakeholder consultation – was published on 11 March, alongside an accompanying FAQ page. The draft is now split into four parts dealing with commitments, transparency, copyright, and safety and security, respectively. The last of these addresses obligations relating to risk assessment and mitigation, and has undergone significant changes that weaken the draft’s fundamental rights protections. 

As we covered in our initial reaction to the draft, the list of risks that are mandatory to assess — also known as the “selected” systemic risks taxonomy — now excludes discrimination and is largely focused on existential risks. Discrimination was moved to the list of risks that are optional to assess, joining other risks to fundamental rights such as privacy harms and the increased spread of child sexual abuse material or non-consensual intimate imagery. The draft cautions GPAI model providers to assess these risks only when they are specific to models’ high-impact capabilities. 

As we explained in fuller comments on the draft, the justifications given – that fundamental rights risks don’t arise from high-impact capabilities, and that the EU digital rulebook better accounts for these risks – do not stand up to scrutiny and fail to justify the changes. A wide range of organisations have reacted critically to the changes in the systemic risk taxonomy, while also acknowledging some positives: the draft’s provisions on external assessment were strengthened, and it now requires model providers to give greater consideration to the acceptability of risks.

This draft Code of Practice will undergo one final round of review before the final version is presented and published by 2 May. The AI Office and the AI Board will then review the draft and publish their assessment, but the decision to go forward with the Code ultimately rests with the European Commission. The Commission can choose either to approve the Code through an implementing act or – if the Code is not finalised or is deemed inadequate – to provide common rules for how GPAI model providers should comply with their obligations by 2 August, the same date those obligations become applicable. Independently of this process, the European Commission can request standardisation of the rules for GPAI models. Once those standards are finalised, covered providers of GPAI models will be presumed to comply with their obligations under the AI Act. 

Spain Takes a Robust Approach to Prohibited AI Practices

The Spanish government approved a bill implementing the AI Act at the national level, marking the first step towards its formal adoption. Notably, the bill sets out narrow conditions under which remote biometric identification (RBI) may be lawfully used for law enforcement purposes. The practice is in principle prohibited by the AI Act, but technically allowed for three law enforcement purposes – search of missing persons and victims of specified crimes, prevention of threats or terrorist attacks, and identification of suspects of specified criminal offences. It can only be lawfully carried out in a member state where it is explicitly authorised by implementing national legislation, which can be stricter – but not broader – than the terms set by the Act. The Spanish bill as written only authorises RBI use for one of the three purposes the AI Act lists, namely to locate and identify individuals suspected of committing criminal offences of a given degree of seriousness, as specified in Annex II of the Act.

The bill builds on the AI Act by classifying infringements into three categories: minor, severe, and very severe. Any use of an AI practice the law prohibits, including RBI use outside of the draft law’s sole exception, is deemed very severe. Failure to notify users when they directly interact with an AI system, or to label AI-generated content in line with the AI Act’s requirements, will constitute a “severe” infringement.

Italian Draft Law Aspires to Set Limits on AI Uses in Critical Sectors 

An Italian government law decree approved by the Senate sets general conditions for the use of AI, delineating and limiting the uses of AI in critical sectors. 

The law specifies that minors under 14 may access AI systems only with parental consent. It identifies key areas that stand to benefit from AI – such as the healthcare sector and the working environment – and also emphasises key safeguards, such as creating an AI Observatory within the Ministry of Labor and limiting uses of AI systems in the judicial sector to administrative purposes, specifically excluding legal research. 

Further, the law amends aspects of Italian criminal law to cover the use of AI in committing criminal offences. Notably, it introduces a new offence for the dissemination of AI-generated content – ostensibly including deepfakes – without a person’s consent where it results in unjust damage, punishable by imprisonment of one to five years.

The law is in draft form and will need to be approved by the Italian Chamber of Deputies.

AI Identified as a Key Priority in Europe’s Defence Strategy 

A joint white paper on AI in European defence, released last week by the European Commission and the High Representative for Foreign Affairs and Security Policy, identified AI as a priority defence capability, noting that new ecosystems and value chains for cutting-edge technologies such as AI “can feed into civilian and military applications”. The paper highlights AI-powered robots as a concrete area of opportunity. 

The white paper announces a strategic dialogue with the defence industry to identify regulatory hurdles and address challenges ahead of presenting a dedicated Defence Omnibus Simplification proposal by June 2025. This proposal adds to the five recently announced simplification initiatives — reviews of legislation from the digital, agricultural, and other domains — outlined in the Commission’s communication on simplification.

The paper further announces a forthcoming European Armament Technological Roadmap to be published this year — “leveraging investment into dual use advanced technological capabilities at EU, national and private level” — that will focus on AI and quantum in an initial phase.

In Other ‘AI & EU’ News 

  • Digital rights NGO noyb filed a second complaint against OpenAI after a Norwegian user queried ChatGPT for information related to his name, and the chatbot inaccurately responded that the individual by that name was a convicted murderer. The complaint, filed with the Norwegian data protection authority Datatilsynet, argues that OpenAI violates GDPR’s data accuracy principle by allowing ChatGPT to create defamatory outputs about users.
  • A proposed amendment to the Hungarian Child Protection Act seeks to allow using facial recognition to identify Pride protest attendees, and to ban Pride events. The proposal would likely be precluded by the AI Act’s prohibition on conducting remote biometric identification for law enforcement purposes in publicly accessible spaces, which became applicable in February this year.  
  • The European Commission is building a network of model evaluators to define how general-purpose AI models with systemic risk should be evaluated in accordance with the legal requirements of the AI Act and the GPAI code of practice. 

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  

The post CDT Europe’s AI Bulletin: March 2025 appeared first on Center for Democracy and Technology.

Civil Society Responds to DSA Risk Assessment Reports: An Initial Feedback Brief https://cdt.org/insights/dsa-civil-society-coordination-group-publishes-an-initial-analysis-of-the-major-online-platforms-risks-analysis-reports/ Mon, 17 Mar 2025 10:00:50 +0000

The DSA Civil Society Coordination Group, in collaboration with the Recommender Systems Taskforce and People vs Big Tech, has released an initial analysis of the first Risk Assessment Reports submitted by major platforms under Article 42 of the DSA. This analysis identifies both promising practices and critical gaps, offering recommendations to improve future iterations of these reports and ensure meaningful compliance with the DSA.

The Digital Services Act (DSA) represents a landmark effort to create a safer and more transparent online environment. Central to this framework are yearly risk assessments required under Articles 34 and 35, which mandate Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to identify, assess, and mitigate systemic risks posed by their services.

Identifying Useful Practices

The first round of RA Reports showcased varying approaches to risk identification and mitigation, as well as differing formats for presenting information. While reports across platforms and services will inevitably differ to some extent, by identifying the practices from each platform that were most conducive to meaningful transparency, we aim to set a baseline for future iterations. To that end, we zoom in on key topics – the online protection of minors, media pluralism, and online gender-based violence – and explore features from different reporting formats that we found compelling.

The Crucial Role of Platform Design

A recurring theme in the analysis of the RA Reports is the underrepresentation of design-related risks. While platforms occasionally acknowledged the role of their systems — such as recommender algorithms — in amplifying harmful content, these references were often indirect or insufficiently explored. Design choices, particularly those driven by engagement metrics, can significantly contribute to systemic risks, including mental health issues, political polarisation, and the spread of harmful content. Despite this, many reports focused primarily on content moderation rather than addressing how platform design itself might be a root cause of harm. Future RA Reports must prioritise assessing design-related risks, ensuring that mitigation measures target not only user-generated risks but also the systemic risks embedded in platform architecture. By doing so, platforms can better align with the DSA’s objectives and create safer digital environments for all users.

Transparency Builds Trust

Trust with users and regulators can only be fostered through transparency. Many RA Reports lacked verifiable data to substantiate claims about the effectiveness of mitigation measures. For instance, a number of reports referenced existing policies and data without providing new, DSA-specific assessments. Platforms must disclose quantitative and qualitative data, such as metrics on exposure to harmful content and user engagement with control tools, to demonstrate compliance and build trust. The brief includes a detailed table setting out the minimum level of disclosure necessary to assess the effectiveness of mitigation measures, which we believe could be made public without posing a risk to trade secrets.

The Need for Meaningful Stakeholder Engagement

Finally, meaningful consultation with civil society, researchers, and impacted communities is essential to identifying and mitigating systemic risks. Yet none of the RA Reports analysed details how external expertise was incorporated into the assessments. Platforms must engage stakeholders systematically, reflecting their insights in risk assessments and mitigation strategies. This approach not only ensures compliance with DSA Recital 90 but also strengthens the credibility of the reports.

Recommendations

The first round of RA Reports under the DSA marks an important step toward greater accountability. However, significant gaps remain. To advance user safety and foster trust, platforms must:

  1. Focus on design-related risks, particularly those tied to recommender systems.
  2. Enhance transparency by providing verifiable data on mitigation measures.
  3. Engage meaningfully with stakeholders to ensure risk assessments reflect real-world harms.

By addressing these gaps, VLOPs and VLOSEs can align with the DSA’s objectives, contribute to a safer digital environment, and rebuild trust with users and regulators. Civil society remains committed to supporting this process through ongoing analysis and collaboration. Together, we can ensure that the DSA’s promise of a safer online space becomes a reality.

Read the full report.

The DSA CSO Coordination Group, convened and coordinated by CDT Europe, is an informal coalition of civil society organisations, academics, and public interest technologists that advocates for the protection of human rights in the implementation and enforcement of the EU Digital Services Act.

The post Civil Society Responds to DSA Risk Assessment Reports: An Initial Feedback Brief appeared first on Center for Democracy and Technology.

EU Tech Policy Brief: March 2025 https://cdt.org/insights/eu-tech-policy-brief-march-2025/ Wed, 12 Mar 2025 21:57:53 +0000

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

CDT Europe at RightsCon 2025

CDT US and CDT Europe took part in the 13th edition of RightsCon, held in Taipei from 24-27 February. CDT Europe’s Silvia Lorenzo Perez participated in several sessions addressing the abuse of commercial spyware in the EU and beyond, covering investor accountability, the complexities of defining spyware, financial tracking (“follow the money” reporting), litigation strategies, global regulatory efforts, and the geopolitical factors shaping policy responses.

Silvia spoke on two key panels addressing spyware regulation and global accountability. The first, hosted by the Human Rights Center at the University of Minnesota, explored the challenges and lessons of spyware regulation, using the European Parliament’s PEGA Committee Report as a starting point. It incorporated insights from UN Special Procedures, observations from UNODC and OHCHR, and civil society perspectives on ground-level realities and necessary reforms. The discussion also considered the EU’s leadership role in global spyware regulation, particularly where multilateral efforts have struggled. 

CDT Europe and CDT US Security and Surveillance teams in front of the official RightsCon sign.

The second panel, organised by the Spyware Accountability Initiative, convened global experts to assess progress in spyware accountability, covering research, litigation, state regulation, and investor engagement. Silvia reflected on developments over the past year and expectations for the year ahead. 

CDT Europe also took advantage of the presence of key European partners at RightsCon to host a strategic brainstorming session, identifying challenges and opportunities to advance regulatory and litigation efforts in the region.

Focus on the Pall Mall Process for the Future of Spyware Regulation

Spyware and cyber threats were also a major topic at CyberNext Brussels 2025 on 5 March, where CDT Europe’s Silvia Lorenzo Perez appeared on a panel to discuss commercial spyware’s growing impact in Europe and its risks to cybersecurity, national security, and human rights. In response to concerns about the lack of oversight of states’ use of spyware, Silvia emphasised the need for governments to limit their use of spyware and to ensure that national security justifications are necessary, proportionate, clearly defined by law, and respectful of the essence of fundamental rights. Such claims should also be subject to strong oversight, in line with the standards set by the European Court of Human Rights and the Court of Justice of the European Union.

Silvia’s panel also discussed the Pall Mall Process, a joint initiative by France and the UK to establish guiding principles for development, sale, and use of commercial spyware. While the process offers an important multistakeholder platform for dialogue, robust safeguards are still crucial to prevent spyware misuse and protect fundamental rights. Panelists also emphasised the importance of EU-level action in addressing the proliferation of commercial spyware.                                               

Recommended read: Council of Europe, Europe Press Freedom Report – 2024: Confronting Political Pressure, Disinformation, and the Erosion of Media Independence 

💬 Online Expression & Civic Space

International Women’s Day 2025

To mark International Women’s Day 2025, our Secretary General Asha Allen participated in two important occasions to discuss the state of play for gender equality in Europe.

During a conversation with Laeticia Thissen for the FEPS Talks podcast, Asha reflected on the impact of online gender-based violence on democratic and civic space participation. The two also discussed how harmful content moderation practices result in self-censorship of feminist and LGBTQI+ voices, an effect widely criticised by advocates. 

CDT Europe Secretary General Asha Allen speaking at a panel for the “Equality in Digital” event at the European Parliament, organised by MEP Elena Sancho Murillo.

At the “Equality in Digital” event at the European Parliament, organised by Member of the European Parliament Elena Sancho Murillo, Asha joined a panel to discuss how to implement regulations such as the DSA, the Directive on Violence against Women, and the AI Act to effectively address the harms that AI systems can cause for victims.

Addressing Online Threats Against Journalists and Fact-Checkers

On 11 March, CDT Europe’s David Klotsonis took part in a thematic roundtable on journalistic protections, organised by the OSCE Representative on Freedom of the Media. The discussion focused on strengthening Big Tech accountability and engagement to enhance journalists’ safety, with key topics including safety-by-design measures and escalation channels.

David emphasised the need for safety-by-design approaches to mitigate online risks for journalists. While platforms often rely on ex-post moderation, their engagement-driven models can amplify online violence. The Digital Services Act (DSA) includes critical provisions, such as Article 22 on Trusted Flaggers, to help identify harmful content, including threats against journalists. However, their effectiveness is undermined by resource constraints, low adoption rates, and public scepticism. Additionally, attacks seeking to delegitimise the media have increasingly targeted fact-checkers and Trusted Flaggers themselves. To ensure these tools fulfil their purpose, national regulators (Digital Services Coordinators) must prioritise adequate resourcing and clear communication to build public trust in mechanisms designed to protect journalists and uphold media integrity.

Recommended read: Tech Policy Press, A New Framework for Understanding Algorithmic Feeds and How to Fix Them

⚖ Equity and Data

Global AI Action Summit: A Missed Opportunity

CDT Europe’s Laura Lazaro Cabrera attended the French AI Summit, held in Paris on 10 and 11 February, which was largely a missed opportunity for EU policymakers to defend and promote the bloc’s Artificial Intelligence Act on a global stage. Instead, representatives promised an innovation-friendly implementation of the AI Act and committed to cutting red tape for companies. While the Summit itself created limited space for frank and meaningful discussion of the risks posed by AI, several side events explored these questions in depth. Many were organised by civil society organisations critically reflecting on the global approach to AI governance and devising strategies to move forward. 

Group picture with Programme Director Laura Lazaro Cabrera, CDT CEO Alexandra Reeve Givens, and civil society representatives at the French AI Summit.

One of these events, organised by Renaissance Numérique, notably explored how to meaningfully leverage civil society participation in global AI governance processes. Civil society representatives took stock of the French AI Summit and brainstormed crucial changes needed to build on lessons learned. At the event, Laura delivered a summary of panelists’ and discussants’ inputs, and set the scene for the concluding remarks. 

Third Draft Code of Practice on General-Purpose AI Released 

On 11 March, the European Commission unveiled the third draft of the Code of Practice on general-purpose AI (GPAI) – the last draft to be put to final consultation. This latest draft splits the Code of Practice into four parts, dealing with commitments, transparency, copyright, and safety and security respectively. The latter section addresses obligations relating to risk assessment and mitigation, and has been subject to changes amounting to a significant regression in terms of fundamental rights protections. The list of mandatory risks to be assessed — also known as the “selected” systemic risks taxonomy — has been substantially watered down, and now excludes discrimination, which has been relegated to the list of optional risks alongside CSAM/NCII and risks to privacy. The draft moreover dissuades GPAI model providers from considering these risks, cautioning them to assess these risks only if they are specific to the high-impact capabilities of GPAI models. Read our initial analysis on our website.

Recommended read: Reuters, Spain to impose massive fines for not labelling AI-generated content 

⏫ Upcoming Events 

AI Standards Hub Global Summit

On Monday 17 March, our Equity & Data programme director Laura Lazaro Cabrera is speaking at the AI Standards Hub Global Summit 2025 on a panel with CSO representatives and experts. They’ll explore the role of civil society and human rights expertise in shaping AI standards, and the challenges in effectively integrating fundamental rights considerations into current standards-setting processes. You can register to attend the event either in person or online.

The post EU Tech Policy Brief: March 2025 appeared first on Center for Democracy and Technology.

Press Release: Withdrawal of the AI Liability Directive Proposal Raises Concerns Over Justice for AI Victims https://cdt.org/insights/press-release-withdrawal-of-the-ai-liability-directive-proposal-raises-concerns-over-justice-for-ai-victims-2/ Wed, 12 Feb 2025 16:32:45 +0000

(BRUSSELS) – Yesterday, the European Commission announced the withdrawal of its proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). The proposal aimed to address the difficulty individuals face in identifying the liable entity and proving the requirements for a successful liability claim in the context of an opaque AI system. This decision is the latest development in an ongoing drive by the European Commission to cut “red tape” for the private sector in a misguided effort to live up to its ambition to prioritise competitiveness and innovation.

The Centre for Democracy & Technology Europe (CDT Europe) is deeply disappointed by this development, which represents a significant setback for the protection of the right to an effective remedy of victims of AI-induced harms.

“The AI Liability Directive was set to put forward a framework to ease the burden on individuals to pursue justice when wronged by an AI system. Its withdrawal is a departure from European values of transparency and accountability as well as fundamental rights, sending an alarming message that even the most basic procedural safeguards are fair game in the rush to embrace innovation”, said Laura Lazaro Cabrera, CDT Europe’s Counsel and Director of the Equity and Data Programme.

“Harms caused by AI systems and models are notoriously difficult to prove, owing to their complexity and lack of transparency. This leaves individuals with limited avenues to seek redress when they suffer harms induced by AI”, she further explained. 

While CDT Europe acknowledges the limits of the proposal in its current form as well as its significant potential for improvement, we nevertheless stress that it is vital to have rules in place which tackle the specific barriers individuals face in the context of AI-induced harms. This is especially important in light of the limited remedies available to individuals under the AI Act, which creates a complaints mechanism but sets no obligations for relevant authorities to follow through. 

We were encouraged by recent signs of willingness in the European Parliament to work on the file, following a report by the European Parliamentary Research Service which recommended taking the proposal forward. It is both disconcerting and troubling to see the proposal withdrawn at a time when discussions in the Parliament had restarted, and before the conclusion of a public consultation on the text launched by the file’s rapporteur. CDT Europe will continue to advocate for the preservation of fundamental rights, including effective redress, in connection with harms caused by AI. 

The post Press Release: Withdrawal of the AI Liability Directive Proposal Raises Concerns Over Justice for AI Victims appeared first on Center for Democracy and Technology.

EU Tech Policy Brief: January 2025 https://cdt.org/insights/eu-tech-policy-brief-january-2024/ Wed, 05 Feb 2025 00:45:21 +0000

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief, where we highlight some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and give CDT’s perspective on the impact to digital rights. To sign up for this newsletter, or CDT Europe’s AI newsletter, please visit our website.

📢 2025 Team Update 

CDT Europe’s team is back together! We’re thrilled to kick off the new year with the full team back in action. This January, we welcomed two new team members: Joanna Tricoli, who joins the Security, Surveillance and Human Rights Programme as a Policy and Research Officer, and Magdalena Maier, who joins the Equity and Data Programme as a Legal and Advocacy Officer. Plus, our Secretary General, Asha Allen, has returned to the office – we’re so glad to have her back!

Full CDT Europe team pictured at CDT Europe’s office in Brussels.

👁 Security, Surveillance & Human Rights

PCLOB Dismissals Put EU-U.S. Data Transfers At Risk

On 27 January, the Trump Administration dismissed three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an independent government entity that facilitates transparency and accountability in U.S. surveillance. The dismissals deprived the body of its quorum, preventing it from commencing investigations or issuing reports on intelligence community activities that may threaten civil liberties. It is unclear when replacements will be appointed and operations will resume, but past instances suggest the process is likely to take a long time. 

The PCLOB plays a crucial role in protecting privacy rights and keeping intelligence agencies in check. It is also a key part of the EU-U.S. Data Privacy Framework (DPF), established in 2023 after years of negotiations following the Court of Justice of the EU’s invalidation of Privacy Shield. The DPF provides EU citizens with rights to access, correct, or delete their data, and offers redress mechanisms including independent dispute resolution and arbitration. Under the Framework, the PCLOB is responsible for overseeing and ensuring that U.S. intelligence follows key privacy and procedural safeguards. As we pointed out in a Lawfare piece, weakening this oversight board raises serious concerns about the DPF’s validity, since the EU now faces greater challenges in ensuring that the U.S. upholds its commitments — with the entire DPF and transatlantic data flows at risk. 

Venice Commission Asks for Strict Spyware Regulations   

In its long-awaited report released last December, the Venice Commission addressed growing concerns about spyware use and the existing legislative frameworks regulating the technology across Council of Europe Member States. The report examines whether those laws provide enough oversight to protect fundamental rights, and was prepared in response to a request from the Parliamentary Assembly of the Council of Europe following revelations about concerning uses of Pegasus spyware.

In the report, the Commission emphasised the need for clear and strict regulation, given spyware’s unprecedented intrusiveness and its capacity to interfere with the most intimate aspects of our daily lives. To prevent misuse, it laid out clear guidelines for when and how governments can use such surveillance tools, ensuring that privacy rights are respected and abuse is prevented.

Recommended read: The Guardian, WhatsApp says journalists and civil society members were targets of Israeli spyware 

💬 Online Expression & Civic Space

Civil Society Aligns Priorities on DSA Implementation

Last Wednesday, CDT Europe hosted the annual DSA Civil Society Coordination Group in-person meeting at its office, bringing together 36 participants from across Europe to strategise and plan for 2025 on several aspects of Digital Services Act (DSA) enforcement. 

 DSA Coordination Group Meeting, hosted by CDT Europe in Brussels.

The day began with a focused workshop by the Recommender Systems Task Force on the role of recommender systems in annual DSA Risk Assessment reports, which Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) must complete to assess and mitigate the systemic risks posed by their services. The workshop addressed key challenges in interpreting these reports, particularly in the absence of data to substantiate claims about the effectiveness of mitigations. 

That session was followed by a broader workshop on DSA Risk Assessments. With the first round of Risk Assessment and Audit reports now published, constructive civil society feedback on those reports can help improve each iteration, pushing towards the ultimate goal of meaningful transparency that better protects consumers and society at large.

Transparency and Accountability Are Needed from Online Platforms

Recently, at a multistakeholder event on DSA Risk Assessments, CDT Europe’s David Klotsonis facilitated a session on Recommender Systems. With the first round of Risk Assessment reports widely considered unsatisfactory by civil society, much of the conversation focused on how to foster greater and more meaningful transparency through these assessments. Participants highlighted that, without data to underpin the risk assessments, robust and informed evaluation by the public is impossible. Even in the absence of such data, however, the discussion underscored that consistent and meaningful engagement with relevant stakeholders — including those from digital rights organisations in the EU — remains crucial. Civil society input is key to making these reports more useful, and to driving the transparency and accountability necessary for better platform safety.

Recommended read: Tech Policy Press, Free Speech Was Never the Goal of Tech Billionaires. Power Was.

⚖ Equity and Data

CDT Europe Responds to EC Questionnaire on Prohibited AI Practices

CDT Europe participated in the public stakeholder consultation on which practices the AI Act prohibits, to inform the European Commission’s development of guidelines for practically implementing those prohibitions (which will apply beginning 2 February 2025). In our response, we highlighted that the prohibitions — as set out in the final AI Act text — should be further clarified to cover all potential scenarios where fundamental rights may be impacted. We also argued that exceptions to these prohibitions must be interpreted narrowly. 

Second Draft of the General-Purpose AI Code of Practice Raises Concerns

In December, the European Commission published the second draft of the General-Purpose AI (GPAI) Code of Practice (CoP). Despite significant changes and some improvements, several aspects of the draft continue to raise concerns among civil society. The systemic risk taxonomy, a key part of the draft that sets out the risks GPAI model providers must assess and mitigate, remains substantially unchanged. 

In earlier feedback, CDT Europe suggested key amendments to bring the draft in line with fundamental rights, such as including the risk to privacy or the prevalence of non-consensual intimate imagery and child sexual abuse material. On a different front, organisations representing rights-holders have called for critical revisions to the draft to avoid eroding EU copyright standards, noting that the CoP in its current form fails to require strict compliance with existing EU laws. 

Our comments on the second draft’s systemic risk taxonomy and its approach to fundamental rights are available on our website. CDT Europe will continue to engage with the process; the next draft is expected to be released, and simultaneously opened for comments from CoP participants, on 17 February.

EDPB Opinion on Personal Data and AI Models: How Consequential Is It?

In an early January IAPP panel, our Equity & Data Programme Director Laura Lazaro Cabrera discussed the role of the latest EDPB opinion on AI models and the General Data Protection Regulation (GDPR) in settling a long-running debate: does the tokenisation process underlying AI models prevent data processing, in the traditional sense, from taking place? If so, this line of reasoning would take AI models entirely outside the GDPR’s scope.

Equity & Data Programme Director Laura Lazaro Cabrera speaking at IAPP’s online panel on the latest EDPB Opinion on Personal Data and AI Models.

The panel unpacked the opinion’s nuances, noting that it allowed for situations where a model could be considered legally anonymous — and thereby outside the GDPR’s scope — even when personal data could be extracted, if the likelihood of doing so using “reasonable means” was “insignificant”. As the panel highlighted, the opinion is strictly based on the GDPR and did not refer to the AI Act, but will inevitably inform how regulators approach data protection risks in the AI field. Those risks are currently under discussion in several AI Act implementation processes, such as those for the GPAI Code of Practice and the forthcoming template for reporting on a model’s training data.

Recommended read: POLITICO, The EU’s AI bans come with big loopholes for police

🦋 Bluesky

We are on Bluesky! As more users join the platform (including tech policy thought leaders), we’re finding more exciting content, and we want you to be part of the conversation. Be sure to follow us at @cdteu.bsky.social, and follow our team here. We also created a starter pack of 30+ EU tech journalists, to catch the latest digital news in the bubble. 

🗞 In the Press

⏫ Upcoming Events 

AI Summit: On 10 and 11 February, France will host the Artificial Intelligence Action Summit, gathering heads of state and government, leaders of international organisations, CEOs, academics, NGOs, artists, and members of civil society to discuss the development of AI technologies across the world and their implications for human rights. CDT President Alexandra Reeve Givens and CDT Europe Programme Director Laura Lazaro Cabrera will attend the conference, and Laura will deliver the closing remarks at an official side event to the Summit hosted by Renaissance Numérique. Registration is open here.

RightsCon: Our Security, Surveillance and Human Rights Programme Director Silvia Lorenzo Perez will participate in a panel discussion on spyware at the 2025 RightsCon Edition, taking place from 24 to 27 February in Taipei. Each year, RightsCon convenes business leaders, policy makers, government representatives, technology experts, academics, journalists, and human rights advocates from around the world to tackle pressing issues at the intersection of human rights and technology.

The post EU Tech Policy Brief: January 2025 appeared first on Center for Democracy and Technology.

CDT Europe Response to the Consultation on Data Access in the DSA https://cdt.org/insights/cdt-europe-response-to-the-consultation-on-data-access-in-the-dsa/ Tue, 17 Dec 2024 12:28:02 +0000 https://cdt.org/?post_type=insight&p=106786

The post CDT Europe Response to the Consultation on Data Access in the DSA appeared first on Center for Democracy and Technology.

CDT Europe welcomes the European Commission’s initiative to provide independent researchers with access to platforms’ data, initially through Article 40 of the Digital Services Act and now with further specifications outlined in the Draft Delegated Act. Several Very Large Online Platforms (VLOPs) have made access to their data more difficult in the last year, but transparency that allows for research remains the primary tool for understanding how online services contribute to systemic risks to society and the best avenues to mitigate them.

The draft delegated act clarifies many important details, and much of the feedback provided by stakeholders during the call for evidence (including CDT Europe’s feedback) has been incorporated into the text. In what follows, we build on this effort by suggesting areas that should be further detailed.

Overview of key recommendations:

– Expand the independence requirements for applicant researchers to prevent government overreach;
– Explicitly mention the role of CSOs, including CSOs outside the EU, as potential applicant researchers;
– Further detail the requirements for the data inventory to ensure completeness;
– Empower researchers to initiate the mediation process;
– Empower independent experts to assess the quality of data inventories and be part of the mediation process;
– Extend the timeline for Digital Services Coordinators (DSCs) to respond to data access applications.

Find our full consultation response online and as a PDF.

EU Tech Policy Brief: December 2024 https://cdt.org/insights/eu-tech-policy-brief-december-2024/ Mon, 09 Dec 2024 22:26:44 +0000 https://cdt.org/?post_type=insight&p=106702

The post EU Tech Policy Brief: December 2024 appeared first on Center for Democracy and Technology.

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief for the last edition of the year! This edition highlights some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website.

Please do not hesitate to contact our team in Brussels: Laura Lazaro Cabrera, Silvia Lorenzo Perez, Aimée Duprat-Macabies, David Klotsonis, and Giulia Papapietro.

👁 Security, Surveillance & Human Rights

Civil Society Strategises on Tackling Spyware

Spyware remains high on the EU agenda; in Poland, an arrest was recently made in connection with the government’s probe into the use of Pegasus. In this context, on 20 November, CDT Europe convened the Spyware Coordination Group to strategise on EU-level actions to tackle spyware. The discussion focused on key areas of regulation and advocacy, aiming to build consensus. Points of convergence included the need for definitions of key terms that can adapt to the rapid evolution of spyware technologies, and the strict prohibition of spyware use against journalists.

Photograph of Spyware Coordination Group Hybrid Workshop at CDT Europe’s Office.

Participants also explored the potential of internal market regulation as a legal basis for addressing the commercial spyware market and industry. Insights from the European Media Freedom Act (EMFA) and the EU Cybersecurity Framework informed discussions, particularly regarding litigation strategies to challenge Article 4 implementation and leverage cybersecurity policies to mitigate spyware threats. 

The workshop highlighted the shared urgency of curbing spyware misuse through coordinated, impactful advocacy and legal action.

All Eyes on Member States’ Actions on Spyware

At the various Pall Mall Process meetings that took place on the sidelines of the Paris Peace Forum, CDT Europe’s Silvia Lorenzo Perez engaged in critical discussions where she highlighted the urgent need for coordinated global action on spyware. 

At a panel held by the Swedish government and Access Now, victims shared powerful testimonies on the devastating impact of spyware abuse. These accounts underscored the urgent need for robust regulatory action to protect human rights defenders. This meeting was followed by a multistakeholder roundtable focused on combating the spread of commercial spyware. 

The day concluded with a Pall Mall Process meeting to review measures aimed at preventing spyware proliferation globally. While governments recognise the dangers of spyware, translating concerns into enforceable legal frameworks remains a challenge. The EU now has a unique opportunity to lead, with Member States at the table tasked with driving critical reforms. With the Polish Presidency of the Council of the EU at the helm, the time is now for bold leadership to address spyware abuse and protect both national security and individual rights.

Recommended read: The Guardian, Ronan Farrow on surveillance spyware: ‘It threatens democracy and freedom’ 

💬 Online Expression & Civic Space

Trusted Flaggers in the DSA: Challenges and Opportunities

Implementation of the Digital Services Act (DSA) is at a busy phase, with online platforms starting to release their first annual risk assessment and audit reports (CDT Europe and other CSOs published a joint letter on the process). Another crucial part of the regulation’s implementation rests with the Trusted Flagger Mechanism, which helps combat illegal content online by granting certified entities priority processing of flagged material.

CDT Europe and EU DisinfoLab organised a webinar on the topic on 21 November, where over 30 participants, including civil society organisations (CSOs), Digital Services Coordinators, and the European Commission, explored current challenges and opportunities. The system faces significant hurdles, including resource constraints for CSOs applying for certification, misinformation campaigns undermining public trust in Trusted Flaggers, and low uptake due to complex, burdensome processes and unclear benefits. With only 15 certifications granted so far, the mechanism is underutilised. 

Some key recommendations from the event include:

  • Ensuring sustainable funding for CSOs to meet Trusted Flagger obligations;
  • Developing proactive communication strategies to counter misinformation and clarify the role of Trusted Flaggers to the wider public; and
  • Establishing a working group to harmonise practices, support applicants, and address challenges like application complexity.

In our full outcomes report blog, we identify key opportunities for CSOs.

A Human Rights-Centered Application of the DSA

CDT Europe’s Research and Policy Officer David Klotsonis joined a workshop in Vienna, organised by the DSA Human Rights Alliance and hosted by the Organisation for Security and Co-operation in Europe (OSCE). The event focused on exploring principles for a global human rights-centered application of the Digital Services Act. Participants discussed lessons from other jurisdictions and conflict zones to shape thoughtful DSA implementation, while considering the risks of applying the law to different regulatory environments without accounting for unique vulnerabilities. As the “Brussels Effect” continues to generate buzz, it’s crucial to unpack its real-world implications: how can laws, when removed from their original institutional context, unintentionally — or deliberately — undermine human rights? The workshop offered a timely platform for reflection and yielded important insights.

Online Gender-Based Violence: What Now?

Graphic with purple background and white text reading, “Online Gender-Based Violence in the EU: What Now?” Graphic also depicts woman standing in front of a laptop emitting emojis.

Online gender-based violence (OGBV) continues to be a widespread and alarming issue, fuelled by misogynistic narratives, that affects women in Europe and around the world. On the International Day for the Elimination of Violence against Women, and in the context of the 16 Days of Activism against Gender-Based Violence, CDT Europe highlighted the EU’s progress on the issue, such as the Directive on combating violence against women and the Digital Services Act. Despite these advancements, problems persist in ensuring the online space is free of this gendered harm. In our blog, we explored the obstacles ahead, emphasising the need for cultural change and effective implementation. 

Recommended read: The Verge, Meta says it’s mistakenly moderating too much 

⚖ Equity and Data

An Ongoing Battle for Full Accountability for AI Harms

In our latest blog post, we reflected on persistent gaps in EU regulation that hinder accountability for AI-induced harms. Transparency, an inherent challenge for AI systems, is a crucial prerequisite to identifying harms. The AI Act goes some way towards ensuring a base level of transparency in some circumstances, but neglects the importance of procedural safeguards to ensure individuals’ legal access to remedies. This was never the AI Act’s intention, as it was conceptualised around the same time as the AI Liability Directive (AILD), a proposal that outlined basic steps towards easing procedural burdens for complainants in recognition of the hurdles posed by AI’s opaque functioning. Despite the AILD’s process-oriented nature and modest impositions, the draft law is struggling to get off the ground — even as the effective remedies issue in AI remains unaddressed. 

Making the Case for Robust European Regulation

Counsel and Programme Director for Equity and Data Laura Lazaro Cabrera speaks at Euronews’ Tech Summit.

In a debate hosted by Euronews as part of their Tech Summit on 4 December, CDT Europe’s Laura Lazaro Cabrera shared the stage with representatives from DG JUST and CEPS to discuss regulation for consumer protection in the digital age. In the discussion, Laura highlighted the importance of ensuring that laws regulating tech include both substantive and procedural safeguards to truly guarantee robust consumer protection, and of challenging the false dichotomy between innovation and regulation, underscoring the value of high product standards and their essential role in preserving health, safety, and fundamental rights. She also questioned the assumption that holding products to robust standards would leave Europeans missing out: rather, it is companies that would miss out on the European market should they fail to conform. 

Recommended read: The Guardian, Deus in machina: Swiss church installs AI-powered Jesus

🦋 Bluesky

We are on Bluesky! As more users join the platform (including tech policy thought leaders), we’re finding more exciting content, and we want you to be part of the conversation. Be sure to follow us at @cdteu.bsky.social! You can also follow our starter pack of EU tech journalists, to catch the latest digital news in the bubble. Find us also on Mastodon and LinkedIn.

⏫ Upcoming Events 

Liberal Forum Roundtable: On 10 December, our Equity and Data Programme Director Laura Lazaro Cabrera will participate in the European Liberal Forum’s conference on “The Era of AI: Harnessing AI for Humanity”, bringing together MEPs, APAs, political advisors, civil society, academia, and corporate sector representatives for discussions under the Chatham House Rule on the role of the EU in advancing AI over the next mandate. 

Kofi Annan Foundation: On 11 December, Laura will speak at the “Comparative lessons from the EU and the US elections in the age of Artificial Intelligence” event organised by Democracy Reporting International (DRI) and the Kofi Annan Foundation (KAF) to reflect upon the risks and challenges generative AI represents for European democracy.

The post EU Tech Policy Brief: December 2024 appeared first on Center for Democracy and Technology.
