Transparency & Accountability Archives - Center for Democracy and Technology https://cdt.org/area-of-focus/free-expression/transparency-accountability/

EU Tech Policy Brief: April 2025 https://cdt.org/insights/eu-tech-policy-brief-april-2025/ Tue, 01 Apr 2025 21:26:17 +0000

Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT's perspective on the impact on digital rights. To sign up for CDT Europe's AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Citizen Lab Unveils Surveillance Abuses in Europe and Beyond                                       

The recent Citizen Lab report on the deployment of Paragon spyware in EU Member States, particularly Italy and allegedly Cyprus and Denmark, highlights a concerning trend of surveillance targeting journalists, government opponents, and human rights defenders. Invasive monitoring of journalist Francesco Cancellato, members of the NGO Mediterranea Saving Humans, and human rights activist Yambio raises serious concerns about press freedom, fundamental rights, and the broader implications for democracy and the rule of law in the EU.

The Italian government's denial that it authorised the surveillance, despite reports indicating otherwise, points to a lack of transparency and accountability. The Undersecretary to the Presidency of the Council of Ministers reportedly admitted that Italian intelligence services used Paragon spyware against Mediterranea Saving Humans activists, citing national security justifications. This admission highlights the urgent need for transparent oversight mechanisms and robust legal frameworks to prevent misuse of surveillance technologies.

Graphic for Citizen Lab report, which reads, “Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Options”. Graphic has a yellow background, and a grayscale hand reaching through great message bubbles.

The lack of decisive action at the European level in response to these findings is alarming. Efforts to initiate a plenary debate within the European Parliament have stalled due to insufficient political support, reflecting a broader pattern of inaction that threatens civic space and fundamental rights across the EU. This inertia is particularly concerning given parallel developments in France, Germany, and Austria, where legislative measures are being considered to legalise the use of surveillance technologies. In light of the European Parliament’s PEGA Committee findings on Pegasus and equivalent spyware, it is imperative that EU institutions and Member States establish clear, rights-respecting policies governing the use of surveillance tools. Normalisation of intrusive surveillance without adequate safeguards poses a direct challenge to democratic principles and the protection of human rights within the EU.

Recommended read: Amnesty International, Serbia: Technical Briefing: Journalists targeted with Pegasus spyware

 💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Publishes Analysis on DSA Risk Assessment Reports

Key elements of the Digital Services Act’s (DSA) due diligence obligations for Very Large Online Platforms and Search Engines (VLOPs/VLOSEs) are the provisions on risk assessment and mitigation. Last November, VLOPs and VLOSEs published their first risk assessment reports, which the DSA Civil Society Coordination Group, convened and coordinated by CDT Europe, took the opportunity to jointly assess. We identified both promising practices to adopt and critical gaps to address in order to improve future iterations of these reports and ensure meaningful DSA compliance.

Our analysis zooms in on key topics like online protection of minors, media pluralism, electoral integrity, and online gender-based violence. Importantly, we found that platforms have overwhelmingly focused on identifying and mitigating user-generated risks, and as a result have paid less attention to risks stemming from the design of their services. In addition, platforms do not provide sufficient metrics and data to assess the effectiveness of the mitigation measures they employ. In our analysis, we describe what data and metrics future reports could reasonably include to achieve more meaningful transparency.

Graphic with a blue background, with logo for the DSA Civil Society Coordination Group featuring members’ logos. In black text, graphic reads, “Initial Analysis on the First Round of Risk Assessments Reports under the EU Digital Services Act”.

CDT Europe’s David Klotsonis, lead author of the analysis, commented, “As the first attempt at DSA Risk Assessments, we didn’t expect perfection — but we did expect substance. Instead, these reports fall short as transparency tools, offering little new data on mitigation effectiveness or meaningful engagement with experts and affected communities. This is a chance for platforms to prove they take user safety seriously. To meet the DSA’s promise, they must provide real transparency and make civil society a key part of the risk assessment process. We are committed to providing constructive feedback and to fostering an ongoing dialogue.”

Recommended read: Tech Policy Press, A New Framework for Understanding Algorithmic Feeds and How to Fix Them 

⚖ Equity and Data

Code of Practice on General-Purpose AI Final Draft Falls Short

Following CDT Europe’s initial reaction to the release of the third Draft Code of Practice on General-Purpose AI (GPAI), we published a full analysis highlighting key concerns. One major issue is the Code’s narrow interpretation of the AI Act, which excludes fundamental rights risks from the list of selected risks that GPAI model providers must assess. Instead, assessing these risks is left as an option, and is only required if such risks are created by a model’s high-impact capabilities.

This approach stands in contrast to the growing international consensus, including the 2025 International AI Safety Report, which acknowledges the fundamental rights risks posed by GPAI. The Code also argues that existing legislation can better address these risks, but we push back on this claim. Laws like the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act lack the necessary tools to fully tackle these challenges.

Moreover, by making it optional to assess fundamental rights risks, the Code weakens some of its more promising provisions, such as requirements for external risk assessments and clear definitions of unacceptable risk tiers. 

In response to these concerns, we joined a coalition of civil society organisations in calling for a revised draft that explicitly includes fundamental rights risks in its risk taxonomy.

Global AI Standards Hub Summit 

At the inaugural global AI Standards Hub Summit, co-organised by the Alan Turing Institute, CDT Europe’s Laura Lazaro Cabrera spoke at a session exploring the role of fundamental rights in the development of international AI standards. Laura highlighted the importance of integrating sociotechnical expertise and meaningfully involving civil society actors to strengthen AI standards from a fundamental rights perspective. Laura emphasised the need to create dedicated spaces for civil society to participate in standards processes, tailored to the diversity of their contributions and resource limitations.  

Image featuring Programme Director for Equity and Data Laura Lazaro Cabrera speaking at a panel with three other panelists on the role of fundamental rights in standardisation, at the Global AI Standard Hub Summit

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 Job Opportunities in Brussels: Join Our EU Team

We’re looking for two motivated individuals to join our Brussels office and support our mission to promote human rights in the digital age. 

The Operations & Finance Officer will play a key role in keeping our EU office running smoothly—managing budgets, coordinating logistics, and ensuring strong operational foundations for our advocacy work. 

We’re also seeking an EU Advocacy Intern to support our policy and advocacy efforts, with hands-on experience in research, event planning, and stakeholder engagement. 

Apply before 23 April 2025 by sending your cover letter and CV to hr@cdt.org. For more information, visit our website.

🗞 In the Press

⏫ Upcoming Event

Pall Mall Process Conference: On 3 and 4 April, our Director for Security and Surveillance Silvia Lorenzo Perez will participate in the annual Pall Mall Process Conference in Paris. 

Civil Society Responds to DSA Risk Assessment Reports: An Initial Feedback Brief https://cdt.org/insights/dsa-civil-society-coordination-group-publishes-an-initial-analysis-of-the-major-online-platforms-risks-analysis-reports/ Mon, 17 Mar 2025 10:00:50 +0000

The DSA Civil Society Coordination Group, in collaboration with the Recommender Systems Taskforce and People vs Big Tech, has released an initial analysis of the first Risk Assessment Reports submitted by major platforms under Article 42 of the DSA. This analysis identifies both promising practices and critical gaps, offering recommendations to improve future iterations of these reports and ensure meaningful compliance with the DSA.

The Digital Services Act (DSA) represents a landmark effort to create a safer and more transparent online environment. Central to this framework are yearly risk assessments required under Articles 34 and 35, which mandate Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to identify, assess, and mitigate systemic risks posed by their services.

Identifying Useful Practices

The first round of Risk Assessment (RA) Reports showcased varying approaches to risk identification and mitigation, as well as different formats for presenting information. While reports across platforms and services will inevitably differ to some extent, by identifying the practices from each platform that were most conducive to meaningful transparency, we aim to set a baseline for future iterations. To showcase this, we zoom in on key topics like the online protection of minors, media pluralism, and online gender-based violence, and explore features from different reporting formats that we found compelling.

The Crucial Role of Platform Design

A recurring theme in the analysis of the RA Reports is the underrepresentation of design-related risks. While platforms occasionally acknowledged the role of their systems — such as recommender algorithms — in amplifying harmful content, these references were often indirect or insufficiently explored. Design choices, particularly those driven by engagement metrics, can significantly contribute to systemic risks, including mental health issues, political polarisation, and the spread of harmful content. Despite this, many reports focused primarily on content moderation rather than addressing how platform design itself might be a root cause of harm. Future RA Reports must prioritise assessing design-related risks, ensuring that mitigation measures target not only user-generated risks but also the systemic risks embedded in platform architecture. By doing so, platforms can better align with the DSA’s objectives and create safer digital environments for all users.

Transparency Builds Trust

Trust with users and regulators can only be fostered through transparency. Many RA Reports lacked verifiable data to substantiate claims about the effectiveness of mitigation measures. For instance, a number of reports referenced existing policies and data without providing new, DSA-specific assessments. Platforms must disclose quantitative and qualitative data, such as metrics on exposure to harmful content and user engagement with control tools, to demonstrate compliance and build trust. The brief includes a detailed table with the minimum level of disclosure that would be necessary to assess the effectiveness of mitigation measures, which we believe could be made public without posing a risk to trade secrets.
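
As a purely illustrative sketch, the kind of quantitative disclosure the analysis calls for could be expressed as simply as the structure below; the field names and figures are hypothetical and are not taken from the brief's table or the DSA itself.

```python
# Hypothetical example of per-risk metrics a platform could disclose publicly.
example_mitigation_disclosure = {
    "risk": "online gender-based violence",
    "mitigation": "proactive detection of abusive direct messages",
    "measurement_period": "2024-H1",
    "metrics": {
        "prevalence_per_10k_views": 4.2,      # estimated exposure to violating content
        "proactive_detection_rate": 0.87,     # share of actioned content found before user reports
        "appeals_overturn_rate": 0.06,        # proxy for over-enforcement
        "users_enabling_filter_tools": 0.31,  # engagement with user-facing control tools
    },
}
```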

The Need for Meaningful Stakeholder Engagement

Finally, meaningful consultation with civil society, researchers, and impacted communities is essential to identifying and mitigating systemic risks. Yet, none of the RA Reports analysed detail how external expertise was incorporated into their assessments. Platforms must engage stakeholders systematically, reflecting their insights in risk assessments and mitigation strategies. This approach not only ensures compliance with DSA Recital 90 but also strengthens the credibility of the reports.

Recommendations

The first round of RA Reports under the DSA marks an important step toward greater accountability. However, significant gaps remain. To advance user safety and foster trust, platforms must:

  1. Focus on design-related risks, particularly those tied to recommender systems.
  2. Enhance transparency by providing verifiable data on mitigation measures.
  3. Engage meaningfully with stakeholders to ensure risk assessments reflect real-world harms.

By addressing these gaps, VLOPs and VLOSEs can align with the DSA’s objectives, contribute to a safer digital environment, and rebuild trust with users and regulators. Civil society remains committed to supporting this process through ongoing analysis and collaboration. Together, we can ensure that the DSA’s promise of a safer online space becomes a reality.

Read the full report.

The DSA CSO Coordination Group, convened and coordinated by CDT Europe, is an informal coalition of civil society organisations, academics and public interest technologists that advocates for the protection of human rights in the implementation and enforcement of the EU Digital Services Act.

The Kids are Online: Research-Driven Insights on Child Safety Policy https://cdt.org/insights/the-kids-are-online-research-driven-insights-on-child-safety-policy/ Fri, 14 Feb 2025 22:32:49 +0000

Graphic for CDT Research report, entitled “The Kids Are Online.” Grey background, with purple, orange, and blue gradient bars underlying black text.

Executive Summary

This report summarizes the key discussions and insights from an in-person symposium held in September 2024 on the topic of children’s online safety policy. The event convened academic researchers, policy experts, and civil society representatives to explore research-driven approaches to addressing critical issues impacting young users in digital environments. During the symposium, we attempted to foster meaningful dialogue, identify areas of consensus and disagreement, and chart actionable paths forward. The symposium included a range of perspectives, and thus the report reflects a synthesis of ideas rather than unanimous agreement.

The symposium brought together 23 participants for a day-long event conducted under the Chatham House Rule. Attendees engaged in two rounds of thematic roundtables covering four key topics related to child safety on online platforms: Connection, Content, Communication, and Characteristics. The event concluded with an all-participant session that summarized some of the main discussions and identified strategies and opportunities to integrate research into policy.

We lay out some of the cross-cutting themes that we identified across the conversations; these highlight the interconnectedness of issues surrounding youth safety online, and emphasize the need for evidence-based and youth-centric approaches, particularly along the following lines:

  • No one-size-fits-all approach fixes current issues. Researchers pointed to a range of ways for keeping young people safe online, yet most solutions raise thorny tradeoffs.
  • Experiences of all youth online should be examined, including youth from different backgrounds. Participants repeatedly raised that young users experience online environments differently based on factors like age, socioeconomic status, and identity. Tailored safety measures, they noted, may be essential to address these varied experiences effectively. Some said that additional aspects like access and digital literacy also require consideration, calling for tools that accommodate diverse user needs.
  • Consider the ecosystem of actors who are part of a young person’s life holistically. The discussions emphasized adopting a more holistic and collaborative approach to online child safety. Participants underscored the necessity of collective efforts that would involve parents, educators, platform designers, and policymakers. Collaboration across these groups was identified as crucial for reaching feasible and balanced actionable steps.
  • Limited researcher access to data impedes evidence-informed solutions. Researchers in the group agreed that a lack of access to comprehensive data impedes fully understanding online harms and prevents learning about the effectiveness of existing safety measures implemented by digital platforms. Most agreed that improved access to data is vital to develop evidence-informed policy.

Participants also proposed several practical steps with potential to enhance online safety for young people on digital platforms:

  • Establish default protections. Participants agreed that implementing safety settings by default, such as private accounts, can potentially keep young users and all users safer.
  • Empower users with the ability to customize their online experiences. According to participants, equipping youth — and all users — with features like customizable content filters and algorithm reset options could give them the reins to shape their own experiences online.
  • Provide researchers with privacy-preserving mechanisms to access data. Participants emphasized the importance of providing researchers with access to platform data, especially data related to safety mechanisms (e.g., the rate of users who use safety tools or how these tools are being used). They noted that this would allow researchers to better study online experiences and evaluate the effectiveness of safety measures.
  • Support digital literacy and onboarding. Participants recommended that platforms work towards supporting users’ development of skills to navigate digital spaces responsibly, rather than restricting young users’ access altogether. Leveraging peer-to-peer education, more collaborative onboarding processes, and norm setting can all help acquaint young users with online norms and safety practices.

The conversation underscored the complexity of creating safer online environments and the importance of engaging researchers, who can share data-driven knowledge on approaches that have the potential to work. Participants emphasized the need for ongoing dialogue and actionable processes — safer digital spaces require sustained efforts to bridge gaps between research, policy, and platform design. This report serves as a step towards creating this shared space that would support the creation of safer digital environments for young users while respecting their rights and agency.

Read the full report.

First Amendment Tech Transparency Roadmap https://cdt.org/insights/first-amendment-tech-transparency-roadmap/ Thu, 13 Feb 2025 18:23:05 +0000

First Amendment Tech Transparency Roadmap. White document on a grey background.

[ PDF version ]

Transparency is often considered the cornerstone of good technology governance and best industry practice – whether applied to social media platforms or AI developers and deployers. At the same time, when transparency mandates are imposed by the government, they can implicate the First Amendment. This guide intends to help policymakers effectively navigate rapidly developing and often contradictory First Amendment precedent to empower legislation that would mandate meaningful transparency about technology and the way it affects people’s rights and lives.

What is Compelled Speech?

Government requirements that individuals or entities “speak a particular message” are considered compelled speech. Compelled speech is generally subject to strict scrutiny – the most stringent form of First Amendment review. Courts recognize, however, that certain types of compelled speech are more justifiable than others – for example, relating to product disclosures. These more justifiable types of compelled speech, therefore, are subject to a lower standard of First Amendment review. Transparency mandates are one form of compelled speech.

Lawmakers should tailor transparency requirements to the appropriate level of First Amendment scrutiny to ensure the mandates stand on strong legal ground. Tech transparency requirements often fall into one of three categories, each with its own standard of First Amendment review:

  • Disclosures About Regulated Conduct. The government often compels regulated entities to provide information about compliance with regulatory requirements – e.g., SEC filings. The Supreme Court has long recognized that compelled speech can be justified as “part of a far broader regulatory system that does not principally concern speech.” These disclosures are subject to a lenient form of First Amendment review. If the underlying regulatory requirement relates to speech itself – including editorial decision-making – then it may be better understood to be “speech about speech.”
  • Factual & Uncontroversial Commercial Disclosures. Commercial speech is speech that “does no more than propose a commercial transaction” or that relates “solely to the economic interests of the speaker and its audience.” Common examples include advertising and product labels. Commercial speech is reviewed under an intermediate form of First Amendment review. One subset of commercial speech requirements, however – namely those that compel “factual and uncontroversial information about the terms under which . . . services will be available” – is subject to a lower standard of First Amendment review than other kinds of commercial speech.
  • Speech About Speech. “Speech about speech” is a helpful way to think about transparency requirements (i.e., compelled speech) that implicate and burden underlying protected expression. Speech about speech comes up frequently in tech policy due to the editorial decision-making inherent to the design of social media platforms and AI models. Where transparency mandates are “inextricably intertwined” with and burden underlying fully protected expression, such as editorial decision-making by platforms and AI developers, those mandated disclosures are best understood as “speech about speech.” These disclosures are likely to be subject to stringent First Amendment review, including strict scrutiny.

Explore the full roadmap.

EU Tech Policy Brief: January 2025 https://cdt.org/insights/eu-tech-policy-brief-january-2024/ Wed, 05 Feb 2025 00:45:21 +0000

Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief, where we highlight some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and give CDT's perspective on the impact on digital rights. To sign up for this newsletter, or CDT Europe's AI newsletter, please visit our website.

📢 2025 Team Update 

CDT Europe’s team is back together! We’re thrilled to kick off the new year with the full team back in action. This January, we welcomed two new team members: Joanna Tricoli, who joins the Security, Surveillance and Human Rights Programme as a Policy and Research Officer, and Magdalena Maier, who joins the Equity and Data Programme as a Legal and Advocacy Officer. Plus, our Secretary General, Asha Allen, has returned to the office – we’re so glad to have her back!

Full CDT Europe team is pictured at CDT Europe’s office in Brussels.

👁 Security, Surveillance & Human Rights

PCLOB Dismissals Put EU-U.S. Data Transfers At Risk

On 27 January, the Trump Administration dismissed three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an independent government entity that facilitates transparency and accountability in U.S. surveillance. The dismissals deprived the body of its quorum, preventing it from commencing investigations or issuing reports on intelligence community activities that may threaten civil liberties. It is unclear when replacements will be appointed and operations will resume, but based on past instances, the process is likely to take a long time.

The PCLOB plays a crucial role in protecting privacy rights and keeping intelligence agencies in check. It is also a key part of the EU-U.S. Data Privacy Framework (DPF), established in 2023 after years of negotiations following the Court of Justice of the EU’s invalidation of Privacy Shield. The DPF provides EU citizens with rights to access, correct, or delete their data, and offers redress mechanisms including independent dispute resolution and arbitration. Under the Framework, the PCLOB is responsible for overseeing and ensuring U.S. intelligence follows key privacy and procedural safeguards. As we pointed out in a Lawfare piece, weakening this Oversight Board raises serious concerns about DPF’s validity, since the EU now faces greater challenges in ensuring that the U.S. upholds its commitments — with the entire DPF and transatlantic data flows at risk. 

Venice Commission Asks for Strict Spyware Regulations   

In its long-awaited report released last December, the Venice Commission addressed growing concerns about spyware use and the existing legislative frameworks regulating the technology in all Council of Europe Member States. The report is based on the Commission’s examination of whether those laws provide enough oversight to protect fundamental rights, and was prepared in response to a request from the Parliamentary Assembly of the Council of Europe following revelations about concerning uses of Pegasus spyware.

In the report, the Commission emphasised the need for clear and strict regulations due to spyware’s unprecedented intrusiveness, which can interfere with the most intimate aspects of our daily lives. To prevent misuse, it laid out clear guidelines for when and how governments can use such surveillance tools, to ensure that privacy rights are respected and abuse is prevented.

Recommended read: The Guardian, WhatsApp says journalists and civil society members were targets of Israeli spyware 

💬 Online Expression & Civic Space

Civil Society Aligns Priorities on DSA Implementation

Last Wednesday, CDT Europe hosted the annual DSA Civil Society Coordination Group in-person meeting at its office, bringing together 36 participants from across Europe to strategise and plan for 2025 on topics including several aspects of Digital Services Act (DSA) enforcement. 

DSA Coordination Group Meeting, hosted by CDT Europe in Brussels.

The day began with a focused workshop by the Recommender Systems Task Force on the role of recommender systems in annual DSA Risk Assessment reports, which Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) must complete to assess and mitigate the systemic risks posed by their services. The workshop addressed key challenges in interpreting these reports, particularly in the absence of data to substantiate claims about how effective mitigations are. 

That session was followed by a broader workshop on DSA Risk Assessments. With the first round of Risk Assessment and Audit reports now published, constructive civil society feedback on those reports can help improve each iteration, pushing towards the ultimate goal of meaningful transparency that better protects consumers and society at large.

Transparency and Accountability Are Needed from Online Platforms

Recently, at a multistakeholder event on DSA Risk Assessments, CDT Europe’s David Klotsonis facilitated a session on Recommender Systems. With the first round of Risk Assessment reports widely considered unsatisfactory by civil society, much of the conversation focused on how to foster greater and more meaningful transparency through these assessments. Participants highlighted that, without data to underpin the risk assessments, robust and informed evaluation by the public is impossible. Even in the absence of such data, however, the discussion underscored that consistent and meaningful engagement with relevant stakeholders—including those from digital rights organisations in the EU—remains crucial. Civil society reflections are key to ensuring that these reports become more useful, and to driving the transparency and accountability necessary for better platform safety.

Recommended read: Tech Policy Press, Free Speech Was Never the Goal of Tech Billionaires. Power Was.

⚖ Equity and Data

CDT Europe Responds to EC Questionnaire on Prohibited AI Practices

CDT Europe participated in the public stakeholder consultation on which practices the AI Act prohibits, to inform the European Commission’s development of guidelines for practically implementing those prohibitions (which will apply beginning 2 February 2025). In our response, we highlighted that the prohibitions — as set out in the final AI Act text — should be further clarified to cover all potential scenarios where fundamental rights may be impacted. We also argued that exceptions to these prohibitions must be interpreted narrowly. 

Second Draft of the General-Purpose AI Code of Practice Raises Concerns

In December, the European Commission published the second draft of the General-Purpose AI (GPAI) Code of Practice (CoP). Despite significant changes and some improvements, several aspects of the draft continue to raise concerns among civil society. The systemic risk taxonomy, a key part of the draft that sets out the risks GPAI model providers must assess and mitigate, remains substantially unchanged. 

In earlier feedback, CDT Europe suggested key amendments to bring the draft in line with fundamental rights, such as including the risk to privacy or the prevalence of non-consensual intimate imagery and child sexual abuse material. On a different front, organisations representing rights-holders have called for critical revisions to the draft to avoid eroding EU copyright standards, noting that the CoP in its current form fails to require strict compliance with existing EU laws. 

Our comments on the second-draft systemic risk taxonomy’s approach to fundamental rights are available on our website. CDT Europe will continue to engage with the process, with the next draft expected to be released, and simultaneously made available for comment to CoP participants, on 17 February.

EDPB Opinion on Personal Data and AI Models: How Consequential Is It?

In an early January IAPP panel, our Equity & Data Programme Director Laura Lazaro Cabrera discussed the role of the latest EDPB opinion on AI models and the General Data Protection Regulation (GDPR) in closing a long-running debate: does the tokenisation process underlying AI models prevent data processing, in the traditional sense, from taking place? Ultimately, this line of reasoning would take AI models entirely outside the GDPR’s scope.

Equity & Data Programme Director Laura Lazaro Cabrera speaking at IAPP’s online panel on the latest EDPB Opinion on Personal Data and AI Models.

The panel unpacked the opinion’s nuances, noting that it allowed for situations where a model could be considered legally anonymous — and thereby outside the GDPR’s scope — even when personal data could be extracted, if the likelihood of doing so using “reasonable means” was “insignificant”. As the panel highlighted, the opinion is strictly based on the GDPR and did not refer to the AI Act, but will inevitably inform how regulators approach data protection risks in the AI field. Those risks are currently under discussion in several AI Act implementation processes, such as those for the GPAI Code of Practice and the forthcoming template for reporting on a model’s training data.
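
For readers less familiar with the technical step at issue, the toy sketch below (not any real model's tokeniser) shows why tokenisation, on its own, is an encoding rather than an anonymisation step: the token IDs map straight back to the original text. The legally decisive question raised by the opinion is therefore not the encoding step itself, but whether personal data can still be extracted from the trained model by "reasonable means".

```python
# Toy tokeniser: maps whitespace-separated words to integer IDs and back.
# The vocabulary and example sentence are invented purely for illustration.
vocab = {"maria": 101, "lives": 102, "in": 103, "ghent": 104}
inverse_vocab = {token_id: word for word, token_id in vocab.items()}

def tokenise(text: str) -> list[int]:
    """Encode text as a sequence of token IDs."""
    return [vocab[word] for word in text.lower().split()]

def detokenise(token_ids: list[int]) -> str:
    """Decode token IDs back into words."""
    return " ".join(inverse_vocab[token_id] for token_id in token_ids)

ids = tokenise("Maria lives in Ghent")
print(ids)              # [101, 102, 103, 104]
print(detokenise(ids))  # "maria lives in ghent" -- the personal data is fully recoverable
```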

Recommended read: POLITICO, The EU’s AI bans come with big loopholes for police

🦋 Bluesky

We are on Bluesky! As more users join the platform (including tech policy thought leaders), we’re finding more exciting content, and we want you to be part of the conversation. Be sure to follow us at @cdteu.bsky.social, and follow our team here. We also created a starter pack of 30+ EU tech journalists, to catch the latest digital news in the bubble. 

🗞 In the Press

⏫ Upcoming Events 

AI Summit: On 10 and 11 February, France will host the Artificial Intelligence Action Summit, gathering heads of State and Government, leaders of international organisations, CEOs, academia, NGOs, artists and members of civil society, to discuss the development of AI technologies across the world and their implications for human rights. CDT President Alexandra Reeve Givens and CDT Europe Programme Director Laura Lazaro Cabrera will attend the conference. Laura will be making the closing remarks at an official side event to the Summit hosted by Renaissance Numérique. Registration is open here.

RightsCon: Our Security, Surveillance and Human Rights Programme Director Silvia Lorenzo Perez will participate in a panel discussion on spyware at the 2025 RightsCon Edition, taking place from 24 to 27 February in Taipei. Each year, RightsCon convenes business leaders, policy makers, government representatives, technology experts, academics, journalists, and human rights advocates from around the world to tackle pressing issues at the intersection of human rights and technology.

CDT Europe Response to the Consultation on Data Access in the DSA https://cdt.org/insights/cdt-europe-response-to-the-consultation-on-data-access-in-the-dsa/ Tue, 17 Dec 2024 12:28:02 +0000

CDT Europe welcomes the European Commission’s initiative to provide independent researchers with access to platforms’ data, initially through Article 40 of the Digital Services Act and now with further specifications outlined in the Draft Delegated Act. Several VLOPs have made access to their data more difficult in the last year, but transparency that allows for research remains the primary tool for understanding how online services contribute to systemic risks to society and the best avenues to mitigate them.

The draft delegated act clarifies many important details, and much of the feedback that was provided from stakeholders during the call-for-evidence (including CDT Europe’s feedback) has been incorporated into the text. In what follows, we aim to build on this great effort by providing suggestions on what should be further detailed.

Overview of key recommendations:

– Expand the independence requirements for applicant researchers to prevent government overreach;
– Explicitly mention the role of CSOs, including CSOs outside the EU, as potential applicant researchers;
– Further detail the requirements for the data inventory to ensure completeness;
– Empower researchers to initiate the mediation process;
– Empower independent experts to assess the quality of data inventories and be part of the mediation process;
– Extend the timeline for Digital Services Coordinators (DSCs) to respond to data access applications.

Find our full consultation response online and as a PDF.

Moderating Kiswahili Content on Social Media https://cdt.org/insights/moderating-kiswahili-content-on-social-media/ Thu, 12 Dec 2024 05:01:00 +0000

CDT report, entitled “Moderating Kiswahili Content on Social Media.” Illustration of two Kiswahili speakers’ hands and forearms, crossed and palms closed, while wearing beaded bracelets with Kenyan and Tanzanian colors, and an ethernet cord intertwining their arms. Deep red patterned background.

[ PDF version ]

Introduction

Africa, a continent with over 2,000 languages and home to more than one-third of the world’s linguistic diversity, has many languages that remain beyond the reach of both automated and human content moderation (Shiundu, 2023). Social media platforms have a limited physical presence in Africa, operating only a few offices and employing minimal staff (De Gregorio & Stremlau, 2023). Despite this, these companies have heavily invested in outsourcing content moderation labor to the continent, hiring vendors to recruit moderators to review content from both Africa and beyond. One of the few African languages benefiting from human moderation is Kiswahili, a language that is spoken by over 100 million people in East and some parts of Central Africa. In this report, we investigate how the content moderation systems of select online platforms deal with user-generated content in Kiswahili. 

This report is part of a series that examines content moderation within low-resource and indigenous languages in the Global South. Low-resource describes languages that lack sufficient high-quality training data, making it difficult to develop automated content moderation systems (Nicholas & Bhatia, 2023). In our previous research, we found that content moderation in North Africa, especially in the Maghreb region, suffered from significant biases and systemic inadequacies (Elswah, 2024). We found that content moderation systems for Maghrebi Arabic dialects are impacted by inadequate training data, which fail to capture the rich linguistic diversity of the region. Additionally, content moderators, who work under challenging conditions and are tasked with overseeing content from across the Arab world, face several challenges in making accurate decisions regarding dialects they often do not understand. This results in inaccuracies and inconsistencies in moderation practices, highlighting the urgent need for more inclusive and representative approaches to the moderation of low-resource languages in the Global South. 

This report focuses on Kiswahili (also known as Swahili), a language that exists in many varieties in East Africa in Kenya, Tanzania, Uganda, parts of the Democratic Republic of Congo, Burundi, and Rwanda, as well as in some parts of Central Africa (Topan, 2008). This report specifically concentrates on Kenya and Tanzania. We chose Tanzania because it has the largest Kiswahili-speaking population and is the birthplace of Standard Swahili. We selected Kenya because it is home to a significant number of Kiswahili speakers, ranking second only to Tanzania (Dzahene-Quarshie, 2009). Additionally, Kenya is recognized as the “Silicon Savannah” of Africa, which refers to its advanced digital transformation, rapidly increasing internet connectivity, and being host to many companies and institutions involved in the development of digital technologies (Mwaura, 2023; Wahome, 2023).   

Using a mixed-method approach that combines an online survey of 143 frequent social media users in Kiswahili and 23 in-depth interviews with content moderators, creators, and digital rights advocates from Kenya and Tanzania, we found that: 

  1. According to our survey, Instagram is the most popular social media platform in Kenya and Tanzania. Additionally, TikTok’s popularity is rapidly growing in East Africa, surpassing that of Facebook. 
  2. The spread of misinformation and hate speech online is a significant issue within the Kiswahili online sphere. The majority of our survey participants expressed concerns about the proliferation of misleading and inciting content on social media platforms. 
  3. Popular social media platforms take three general approaches to Kiswahili content moderation: global, local, and multi-country. The global approach, exemplified by Meta, involves applying uniform policies to all Kiswahili users indiscriminately. Meta requires their Kiswahili moderators to review non-African English-language content from around the world. The local approach, employed by TikTok, tailors the enforcement of some of its policies to account for the diverse cultural contexts within East Africa. However, the variations in the Kiswahili language are overlooked because content moderation vendors hire primarily Kenyan moderators to review content from across East Africa. Many of these moderators may not be familiar with the specific contexts of other East African countries, which can lead to inadequate moderation. Lastly, the multi-country approach utilized by the local Tanzanian platform “JamiiForums” involves hiring native moderators from each Kiswahili-speaking country, who review content generated within their own nations. This ensures that the moderators understand the local context and cultural nuances, allowing them to provide more effective and relevant content moderation for users on JamiiForums.
  4. Content moderation vendors often downplay the harsh realities of the job by concealing the graphic content that moderators will encounter, avoiding any mention in job advertisements, interviews, and training sessions. Many moderators misunderstand the nature of the role, with some believing they will be content “creators.” Additionally, moderators are exposed to less graphic content during the short period of training, which fails to prepare them for the often distressing content they will encounter in their daily work.
  5. Much of the content moderation is conducted by third-party outsourced vendors who are contracted by social media platforms and hire moderators on the platforms’ behalf. Companies in Nairobi, Kenya, that provide Kiswahili content moderation services exclusively hire Kenyans to manage the diverse variations and contexts of Kiswahili content, which leads to many inaccuracies and inconsistencies in content evaluation.

Read the report.

Read the Kiswahili version of the report.

Real Time Threats: Analysis of Trust and Safety Practices for Child Sexual Exploitation and Abuse (CSEA) Prevention on Livestreaming Platforms https://cdt.org/insights/real-time-threats-analysis-of-trust-and-safety-practices-for-child-sexual-exploitation-and-abuse-csea-prevention-on-livestreaming-platforms/ Thu, 21 Nov 2024 05:01:00 +0000

This report is also authored by Robert Gorwa.

Graphic for CDT Research report, entitled “Real Time Threats.” Illustration of a smartphone showing a warped grid and a recording button; the phone is surrounded by a “LIVE” icon, a warning icon in front of a cloud of smoke, chat bubbles, image icons, an eye, a video, a voice note; Tetris-like blocks are interspersed between all the elements.

[ PDF version ]

Executive Summary 

In recent years, a range of new online services have emerged that facilitate the ‘livestreaming’ of real-time video and audio. Through these tools, users and content creators around the world can easily broadcast their activities to potentially large global audiences, facilitating participatory and generative forms of collaborative ‘live’ gaming, music making, discussion, and other interaction. The rise of these platforms, however, has not been seamless: these same tools are used to disseminate socially problematic and/or illegal content, from promotion of self-harm and violent extremism to child sexual exploitation and abuse (CSEA) materials. 

This report examines the range of trust and safety tools and practices that platforms and third-party vendors are developing and deploying to safeguard livestreaming services, with a special focus on CSEA prevention. Moderating real-time media is inherently technically difficult for firms seeking to intervene responsibly: much livestreaming content is “new”, produced on the spot, and thus by definition not “known” content that can be matched against previously identified harmful material through hash-based techniques. Firms seeking to analyze livestreams must instead rely on comparatively inefficient and potentially flawed predictive computer vision models, work creatively with the stream audio (e.g., through transcription and text classification), and/or use other emerging techniques, such as “signals”-oriented interventions based on the behavioral characteristics of suspicious user accounts.

Based on a review of publicly available documents of livestreaming platforms and vendors that offer content analysis services, as well as interviews with persons working on this problem in industry, civil society, and academia, we find that industry is taking three main approaches to address CSEA in livestreaming: 

  • Design based approaches — Steps taken before a user is able to stream, such as implementing friction and verification measures intended to make it more difficult for users, or suspicious users, to go live. For example, some platforms require a user to have a threshold number of followers or subscribers before they can livestream to prevent an actor from spontaneously creating an account and livestreaming harmful content. 
  • Content analysis approaches — Various forms of manual or automated content detection and analysis that can work on video, audio, and text as content is livestreamed. Examples include taking sample frames from livestreams and seeing if they match hashes of known CSEA material; using machine learning classifiers to detect child sexual abuse material (CSAM) on live video; and employing predictive analysis of text transcriptions of live audio or user chats in livestreams (a minimal sketch of the frame-hashing example follows this list).
  • Signal based approaches – Interventions based on the behavioral characteristics and metadata of user accounts. For example, platforms may share certain account metadata to help identify bad actors as they move from platform to platform or use signals to identify accounts engaged in potentially suspicious behavior that prompts further investigation.
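
To make the frame-hashing example above concrete, below is a minimal sketch written with the open-source OpenCV, Pillow, and imagehash libraries. It illustrates the general technique only: the hash value, match threshold, and sampling rate are placeholders, and production systems depend on curated, access-controlled industry hash lists and human review rather than a standalone script like this.

```python
# Minimal illustrative sketch of hash-based frame matching for a livestream.
# Assumes the opencv-python, Pillow, and imagehash packages are installed.
import cv2
import imagehash
from PIL import Image

# Hypothetical 64-bit perceptual hash of previously identified material;
# real systems use curated, access-controlled industry hash lists.
KNOWN_HASHES = {imagehash.hex_to_hash("e3c8c0c2c6cecccc")}
MATCH_THRESHOLD = 6          # max Hamming distance treated as a match (placeholder)
SAMPLE_EVERY_N_FRAMES = 150  # roughly one frame every five seconds at 30 fps

def stream_contains_known_material(stream_url: str) -> bool:
    """Sample frames from a stream and compare them against known perceptual hashes."""
    capture = cv2.VideoCapture(stream_url)
    frame_index = 0
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # stream ended or could not be read
            if frame_index % SAMPLE_EVERY_N_FRAMES == 0:
                # OpenCV yields BGR arrays; convert to an RGB PIL image for hashing.
                image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                frame_hash = imagehash.phash(image)
                if any(frame_hash - known <= MATCH_THRESHOLD for known in KNOWN_HASHES):
                    return True  # in practice, escalate to human review rather than act automatically
            frame_index += 1
    finally:
        capture.release()
    return False
```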

In part because of the challenges of livestream content detection, the way in which industry tackles the problem of CSEA and other harmful content is evolving. As one interviewee put it, the idea is for firms to engage more actively in reducing the ability to use their platforms for CSEA dissemination, moving beyond a detect and report mode and, aspirationally, towards a predict and disrupt model of trust and safety more akin to that used in areas such as cybersecurity and fraud.

Industry approaches to CSEA raise several concerns. First, there is a general trend to eschew transparency and clarity in how these systems operate and are deployed, ostensibly to prevent bad actors from circumventing them, but potentially to the detriment of victims, users, policymakers, and other stakeholders. Second, and related to the first point, it is almost impossible to determine how effective these approaches are, what gaps they leave, whether they result in overmoderation of legitimate content, and how well they serve the needs of all stakeholders. Third, these approaches introduce significant security, privacy, free speech, and other human rights risks that can undermine the safety of the minors that they are meant to protect as well as that of users in general.   

To help address these concerns, we highlight four areas for improvements: 

  1. Greater transparency is needed to help evaluate and improve efforts to address CSEA on livestreaming platforms. For example, there are currently no performance metrics that firms can use to test and compare the accuracy of the measures they take or that experts, policymakers, and researchers can use to gain a better understanding of their efficacy, as well as the extent of what is really possible. 
  2. Vendors and livestreaming platforms should be explicit about the limitations of automated approaches to detecting and addressing CSEA. In so doing, platforms can improve their trust and safety systems by ensuring human reviewers are appropriately involved and allowing them to make nuanced decisions based upon context and other information.
  3. Focus on design interventions that empower users including minors. The needs of streamers to protect themselves from being targeted with or being used to distribute CSEA are worthy of greater attention when it comes to design based solutions. For example, one design based approach that was not raised in our discussions with industry is to provide users, particularly minors, with the right set of tools and reporting mechanisms to help them protect themselves and others.
  4. Multistakeholder governance models can improve accountability of approaches to address CSEA on livestreaming. Best practice frameworks around the implementation of these systems could be developed not only through the continuing work of organizations like the Tech Coalition, but also through critical multistakeholder engagement in fora that not only involve child safety organizations, but also organizations actively engaged on a broader set of digital rights and civil liberties. 

Addressing the problem of CSEA in general and on livestreaming platforms is critically important given the impacts on children, parents, and their communities, so this is a hugely consequential and high-stakes area of platform governance. Vendors and industry alike are understandably eager to show that they are developing innovative new tools to address CSEA and other harmful content, but poor implementation (or poor design, with systems that are fundamentally flawed) will decrease, rather than increase, policymaker and public confidence in platforms’ trust and safety over the longer term. Better understanding of the measures platforms are taking on livestreaming platforms, along with increased multistakeholder engagement, will improve trust and safety systems in ways that minimize the risk of CSEA in livestreamed content, while also minimizing unintended impacts on ordinary users. 

Read the full report.

Read our report in Spanish.

Read the plain language report.

This project was funded by Safe Online.

The post Real Time Threats: Analysis of Trust and Safety Practices for Child Sexual Exploitation and Abuse (CSEA) Prevention on Livestreaming Platforms appeared first on Center for Democracy and Technology.

EU Tech Policy Brief: October 2024 https://cdt.org/insights/eu-tech-policy-brief-october-2024/ Mon, 04 Nov 2024 20:36:31 +0000

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief. This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels: Silvia Lorenzo Perez, Laura Lazaro Cabrera, Aimée Duprat-Macabies, David Klotsonis, and Giulia Papapietro.

👁 Security, Surveillance & Human Rights

CDT Europe Leads Coalition to Combat Spyware Abuse Across the EU 

On 1 October 2024, during the Tech and Society Summit (TSS), CDT Europe officially launched a Spyware Coordination Group composed of 16 leading civil society and journalist organisations from across the EU, focused on safeguarding democracy, transparency, and accountability in relation to spyware technologies. The initiative aims to combat the growing misuse of spyware in the EU and to advocate for stronger regulations that protect fundamental rights and ensure respect for the rule of law. United in their commitment to protecting democratic institutions and civil society, members of the Coordination Group will work to ensure that the new EU institutions take the measures necessary to regulate spyware technologies and prevent their abuse.

Photograph of members from the Spyware Coordination Group at the Tech and Society Summit in Brussels.

Strengthening Global Efforts Against Commercial Spyware

The issue of spyware is not only being debated at the EU level: on 8 October 2024, the U.S. Department of State hosted its first commercial spyware-focused Human Rights Council side event. CDT Europe’s Security, Surveillance and Human Rights Program Director Silvia Lorenzo Perez spoke at the event, emphasising that modern spyware is not just a tool for law enforcement, but represents a fundamental shift that undermines our democratic values and violates the very principles upon which the European Union is built. She also commended the U.S. Government’s leadership in combating the abuse of commercial spyware through diplomatic efforts such as the U.S.-led Joint Statement, and encouraged the U.S. to intensify diplomacy towards the EU institutions to secure commitments from the European Commission, Parliament, and Council.

Push for Stronger Spyware Oversight in Slovakia and Greece

CDT Europe, alongside 11 organisational members of the Spyware Coordination Group, addressed the European Parliament with serious concerns about the procurement, use, and regulation of spyware technologies in Slovakia and Greece. In a joint letter, the coalition highlights the alarming developments in both countries, where spyware tools like Pegasus and Predator have been linked to violations of privacy and fundamental rights. The letter urges the European Parliament to take immediate action to ensure transparency, accountability, and adherence to rule of law principles, emphasising the need for robust legislative frameworks to protect privacy and freedom of expression.

Recommended read: Human Rights Watch, UK Court Accepts Case About Saudi Spyware Use

💬 Online Expression & Civic Space

CDT Europe at the Tech and Society Summit

At the Tech and Society Summit, CDT Europe's Online Expression team played a key role in two critical discussions. First, our Secretary General Asha Allen participated in a roundtable, "Making EU laws work for people: best practices for engaging with civil society", emphasising the vital role of civil society in identifying harms and proposing actionable policy solutions. The session created an invaluable space for exchanging lessons learned and best practices related to civil society participation in the policymaking process and the enforcement of EU laws.

In a separate high-level roundtable, CDT Europe joined discussions on crafting an effective, rights-respecting EU digital enforcement strategy. Here, participants reached a consensus on the need to address pervasive digital harms by adopting a holistic, society-centred approach, rather than relying solely on individual regulations.

Enhancing Transparency with the Digital Services Act for Stronger Platform Accountability

Our Research and Policy Officer David Klotsonis recently shared key insights with Open Government Partnership (OGP) members on the Digital Services Act (DSA) and its role in promoting accountability in the digital space. David emphasised that annual risk assessments required of Very Large Online Platforms and Search Engines are essential to proactively identifying potential harms, and central to fostering transparency and safeguarding user trust. He also pointed to the importance of Digital Services Coordinators, whose timely appointment and adequate resourcing are vital for meaningful oversight and compliance at the national level. This dialogue with OGP members reinforced the value of collaboration in driving effective, accountable digital governance. You can watch the recording of the webinar on OGP’s YouTube channel.

Workshop on Prosocial Tech Design Governance

On 8 October, the Council on Technology and Social Cohesion and Search for Common Ground hosted a workshop that gathered policymakers, academics, and civil society leaders to examine technology’s role in supporting social cohesion and human rights. Key takeaways included the need for algorithmic accountability, with the DSA serving as a framework to mitigate harmful, profit-driven designs that amplify divisive content, in particular by leveraging risk assessments under the DSA’s Article 34 to address the monetisation of such content. Participants also discussed child protection efforts and the data privacy concerns around age verification, as the EU looks to further bolster the online protection of minors in the coming mandate.

Recommended read: Daphne Keller published an opinion piece in Lawfare, The Rise of the Compliant Speech Platform.

⚖ Equity and Data

Feedback to French Authority on GDPR Guidance for AI

CDT provided feedback to the French Data Protection Authority (Commission nationale de l’informatique et des libertés, or “CNIL”) on recently released factsheets that are intended to guide application of the EU’s General Data Protection Regulation (GDPR) to AI systems and models. We reiterated the limits of relying on “legitimate interests” as a valid legal basis for using data to train AI systems, particularly when conducting web scraping to source that data. CDT similarly called for protection of data subject rights in the AI ecosystem, highlighting the current obstacles individuals face in accessing sufficient information about the processing of their data and enforcement of their rights.

General Purpose AI Models and the Code of Practice Process

As part of our ongoing involvement in the Code of Practice process for general-purpose AI (GPAI) models — set to guide providers' compliance with the AI Act's rules governing GPAI models — we published a brief outlining the precedent-setting potential of the Code of Practice process, as well as the importance of civil society engagement and fundamental rights advocacy in the process. Active civil society participation will be crucial to ensure a robust interpretation of the GPAI rules in the AI Act, and to promote high levels of transparency in GPAI models through risk mapping and the development of robust mitigations and safeguards.

Addressing AI Governance Challenges in Democratic Elections

Photograph of Asha Allen, Secretary-General of CDT Europe, speaking at POLITICO Live's "AI & Elections: Are Democracies Ready?" event.

On 14 October, our Secretary General Asha Allen spoke at POLITICO Live's "AI & Elections: Are Democracies Ready?" event, where she shared insights on the state of AI governance and its implications for democratic processes. During the event, Asha and the other panellists emphasised that more research is essential to fully understand how AI-generated content might affect the online space and individuals' rights to participate in democratic debate without interference or discrimination. While the AI Act and DSA are a welcome step forward, the impact of these laws in mitigating the risks of AI-generated disinformation during elections is yet to be determined. Asha also highlighted the need for tech platforms to fulfil their due diligence obligations and to comply with the EU legislative framework. If you missed it, you can rewatch the panel on YouTube.

Recommended read: La Quadrature du Net, French Family Welfare Algorithm Challenged in Court by 15 Organisations.

📌 Hearings to confirm the incoming European Commissioners

From 4 November to 12 November, the European Parliament is holding hearings to confirm the incoming European Commissioners. CDT Europe is closely monitoring these proceedings and will publish analyses of the nominees’ responses regarding digital rights. As part of this process, nominees have submitted written responses outlining their visions, priorities, and approaches to the portfolios they are set to manage. These answers provide valuable insights into how the new Commission might address some of the most pressing issues facing the European Union today. While the written responses reflect promising commitments in some areas, there are still questions that the Parliament should raise during the hearings to ensure that the final agenda aligns with the EU’s values of privacy, democracy and fundamental rights. We have written an in-depth article outlining these questions and delving into the nominees’ commitments related to our three key programs: Security and Surveillance, Online Expression and Civic Space, and Equity and Data.

⏫ Upcoming Events

Democracy Alive Summit: On 6 November, the day after the U.S. elections, CDT Europe's Laura Lazaro Cabrera will participate in the Democracy Alive Summit organised by the European Movement International (EMI). Laura will discuss the challenges posed by AI during elections, and what can be done to combat disinformation and manipulation. If you wish to attend, you can register by filling out this form.

Paris Peace Forum: On 12 November, CDT Europe's Silvia Lorenzo Perez will attend spyware-focused sessions at this year's Paris Peace Forum. These include two multistakeholder meetings: one on the Pall Mall Process, organised by the French and UK governments, and another organised by Access Now, the CyberPeace Institute, Freedom House, and the Paris Peace Forum.

Webinar on Trusted Flaggers in the DSA: On 21 November, CDT Europe is co-organising a webinar on Trusted Flaggers. By bringing together institutions, regulators, and civil society organisations, we aim to deepen participants' understanding of the legal text and share insights on what the vetting process looks like in practice, what can realistically be expected, and what the potential benefits are for CSOs interested in applying. This is a closed-door event; however, if you believe your participation would add valuable insight to the discussion, or you are interested in applying to be a Trusted Flagger, please feel free to reach out to eu@cdt.org.

The post EU Tech Policy Brief: October 2024 appeared first on Center for Democracy and Technology.

Beyond English-Centric AI: Lessons on Community Participation from Non-English NLP Groups https://cdt.org/insights/beyond-english-centric-ai-lessons-on-community-participation-from-non-english-nlp-groups/ Mon, 21 Oct 2024 04:01:00 +0000

This report brief was authored by Evani Radiya-Dixit, CDT Summer Fellow for the CDT AI Governance Lab.

CDT brief, entitled "Beyond English-Centric AI: Lessons on Community Participation from Non-English NLP Groups." Black and white document on a grey background.

Many leading language models are trained on nearly a thousand times more English text than text in other languages. These disparities in large language models have real-world impacts, especially for racialized and marginalized communities. For example, they have resulted in inaccurate medical advice in Hindi, led to a wrongful arrest because of a mistranslation of Arabic, and been accused of fueling ethnic cleansing in Ethiopia due to poor moderation of speech that incites violence.
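
To make that skew concrete, below is a minimal sketch (our own illustration, not code from the brief) of how a researcher might estimate the language mix of a sampled training corpus; the corpus_sample strings are invented, and the langdetect library is simply one readily available language-identification tool.

```python
# Minimal sketch: estimate the language distribution of a sampled corpus.
# All sample documents are invented for illustration.
from collections import Counter
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make language detection deterministic across runs

corpus_sample = [
    "The quick brown fox jumps over the lazy dog.",
    "El zorro marrón salta sobre el perro perezoso.",
    "Le renard brun saute par-dessus le chien paresseux.",
    "The model was trained primarily on English web text.",
]

counts = Counter()
for doc in corpus_sample:
    try:
        counts[detect(doc)] += 1
    except Exception:
        counts["unknown"] += 1  # very short or ambiguous text

total = sum(counts.values())
for lang, n in counts.most_common():
    print(f"{lang}: {n}/{total} sampled documents")
```

Run over a representative sample of real training data, a tally like this is one simple way to surface how heavily a corpus leans towards English before a model is ever trained.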

These harms reflect the English-centric nature of natural language processing (NLP) tools, which prominent tech companies often develop without centering or even involving non-English-speaking communities. In response, region- and language-specific research groups, such as Masakhane and AmericasNLP, have emerged to counter English-centric NLP by empowering their communities to both contribute to and benefit from NLP tools developed in their languages. Based on our research and conversations with these collectives, we outline promising practices that companies and research groups can adopt to broaden community participation in multilingual AI development.

Read the full brief.

The post Beyond English-Centric AI: Lessons on Community Participation from Non-English NLP Groups appeared first on Center for Democracy and Technology.
