Free Expression Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/free-expression/

Moderating Tamil Content on Social Media
https://cdt.org/insights/moderating-tamil-content-on-social-media/
Wed, 14 May 2025

The post Moderating Tamil Content on Social Media appeared first on Center for Democracy and Technology.

Graphic for CDT Research report, entitled “Moderating Tamil Content on Social Media.” Illustration of a hand, with a variety of golden rings and bracelets on their wrist and fingers, seen pinching / holding on to a blue speech bubble with three dots indicating that someone is contemplating expressing themselves. A deep green background with a kolam pattern.

Tamil is a language with a long history. Spoken by over 80 million people worldwide, or over 1% of the world’s population, Tamil has early inscriptions dating back to the 5th century B.C.E. (Murugan & Visalakshi, 2024). The language is spoken widely in India (predominantly in Tamil Nadu and Puducherry), in Sri Lanka, and across diaspora communities in Malaysia, Thailand, Canada, the United Kingdom, the United States, and beyond. Despite the widespread use of the language, there remains limited understanding of how major social media platforms moderate content in Tamil. This report examines the online experiences of Tamil users and explores the challenges of applying consistent content moderation processes for this language.

This report is part of a series that examines content moderation within low-resource and indigenous languages in the Global South. Low-resource languages are languages in which sufficient high-quality data is not available to train models, making it difficult to develop robust content moderation systems, particularly automated systems (Nicholas & Bhatia, 2023). In previous case studies conducted in the series, we found that this lack of high-quality and native datasets impeded effective and accurate moderation of Maghrebi Arabic and Kiswahili content (Elswah, 2024a; Elswah, 2024b). Inconsistent and inaccurate content moderation results in lower trust among users in the Global South, and limits their ability to express themselves freely and access information. 

This report dives into Tamil speakers’ experiences on the web, particularly on popular social media platforms and online forums run by Western and Indian companies. We highlight the impact of Tamil speakers’ perception of poor content moderation, particularly against a backdrop of democratic backsliding and growing repression of speech and civic participation in India and Sri Lanka (Vesteinsson, 2024; Nadaradjane, 2022). Ultimately, what emerges in this case study is a fragmented information environment where Tamil speakers perceive over-moderation while simultaneously encountering under-moderated feeds full of hate speech.   

We used a mixed-method approach, which included an online survey of 147 frequent social media users in India and Sri Lanka; 17 in-depth interviews with content moderators, content creators, platforms’ Trust & Safety representatives, and digital rights advocates; and a roundtable discussion with Tamil machine learning and data experts. The methods are detailed in the report’s appendix.

Based on these methods, we found that: 

1. Tamil speakers use a range of Western-based social media platforms and Indian platforms. Our survey indicates that Western social media platforms are more popular among Tamil speakers, while local TikTok alternatives are gaining popularity due to India’s TikTok ban. Online, Tamil speakers use tactics to circumvent content moderation, employing “algospeak” or computer-mediated communication, and, at other times, code-mixed and transliterated Tamil using Latin script for ease and convenience. These tactics complicate moderation.

2. Tech companies pursue various approaches to moderate Tamil content online, but mostly adhere to global or localized approaches. The global approach employs the same policies for all users worldwide, and relies on moderators and policy members who are not hired based on linguistic or regional expertise. Moderators are assigned content from across the world. In contrast, the local approach tailors some policies to meet Tamil language-specific guidance, and relies on more Tamil speakers to moderate content. Some Indian companies employ a hybrid approach, often making occasional localized adjustments for Tamil speakers.

3. Tamil speakers, like others, routinely face inconsistent moderation, which they attribute to the fact that their primary language is not English. On the one hand, they encounter what they believe are under-moderated information environments, full of targeted abuse in Tamil. On the other hand, they encounter what they suspect is unfair over-moderation targeting Tamil speech in particular.

4. A majority of survey respondents are concerned about politically-motivated moderation and believe that content removals and restrictions are used to silence their voices online, particularly when they speak about politics. A few users also suspect that they experience “shadowbanning,” or a range of opaque, undisclosed moderation decisions by platforms, particularly when they use certain words or symbols commonly used by or associated with the Tamil community.

5. Despite a vibrant Tamil computing community, investment in automated moderation in Tamil still falls significantly short, owing to a lack of accessible resources, limited will to invest, and the financial constraints facing smaller social media companies.

Read the full report.

CDT Files Amicus Brief in Patterson v. Meta
https://cdt.org/insights/cdt-files-amicus-brief-in-patterson-v-meta/
Thu, 08 May 2025

The post CDT Files Amicus Brief in Patterson v. Meta appeared first on Center for Democracy and Technology.

On May 1, 2025, the Center for Democracy & Technology filed an amicus brief in the case of Patterson v. Meta. CDT filed this brief to bring to the court’s attention the broader impacts that weakening Section 230 will have on speech that is constitutionally protected, but controversial. The brief explains that Section 230’s liability protections are essential to enable free expression online, and that they extend to the use of automated systems to rank and order content as part of traditional publishing activities. It further argues that product liability claims do not automatically fall outside of Section 230’s ambit; courts must consider whether a particular product liability claim seeks to hold a service provider liable as a publisher of third-party content. Finally, the brief notes that livestreaming is a method of publication of third-party content that also receives Section 230’s protection.

Read the full brief.

EU Tech Policy Brief: May 2025
https://cdt.org/insights/eu-tech-policy-brief-may-2025/
Wed, 07 May 2025

The post EU Tech Policy Brief: May 2025 appeared first on Center for Democracy and Technology.

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Building Global Spyware Standards with the Pall Mall Process

As international attention focuses on misuses of commercial spyware, the Pall Mall Process continues to gather momentum. This joint initiative, led by France and the United Kingdom, seeks to establish international guiding principles for the development, sale, and use of commercial cyber intrusion capabilities (CCICs). 

At the Process’s second conference in Paris earlier this month, Programme Director Silvia Lorenzo Perez joined global stakeholders as the process concluded with the adoption of a Pall Mall Code of Practice for States. The Code has been endorsed by 25 countries to date, including 18 EU Member States. It sets out commitments for state action regarding the development, facilitation, acquisition, and deployment of CCICs. It also outlines good practices and regulatory recommendations to promote responsible state conduct in the use of CCICs. 

Pall Mall Process annual event in Paris.

CDT Europe will soon publish a comprehensive assessment of the official document to provide deeper insights into its implications. In parallel, and as part of our ongoing work to advance spyware regulation within the EU, CDT Europe is leading preparation of the sixth edition of the civil society roundtable series, “Lifting the Veil – Advancing Spyware Regulation in the EU,” on 13 May. Stakeholders will discuss what meaningful action should look like in the EU, following the political commitments made by the Member States that endorsed the Pall Mall Code of Practice.

CSOs Urge Swedish Parliament to Reject Legislation Undermining Encryption

CDT Europe joined a coalition of civil society organisations, including members of the Global Encryption Coalition, in an open letter urging the Swedish Parliament to reject proposed legislation that would weaken encryption. This legislation, if enacted, would greatly undermine the security and privacy of Swedish citizens, companies, and institutions. Despite its intention to combat serious crime, the legislation’s dangerous approach would instead create vulnerabilities that criminals and other malicious actors could readily exploit. Compromising encryption would leave Sweden’s citizens and institutions less safe than before. The proposed legislation would particularly harm those who rely on encryption the most, including journalists, activists, survivors of domestic violence, and marginalised communities. Human rights organisations have consistently highlighted encryption’s critical role in safeguarding privacy and free expression. Additionally, weakening encryption would also pose a national security threat, as even the Swedish Armed Forces rely on encrypted tools like Signal for secure communication. 

Recommended read: Ofcom, Global Titles and Mobile Network Security, Measures to Address Misuse of Global Titles

 💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Meets with the ODS Bodies Network

Earlier this month, the DSA Civil Society Coordination Group met with the Out-of-Court Dispute Settlement (ODS) Bodies Network for the first time to explore ways to collaborate. Under Article 21 of the Digital Services Act (DSA), ODS Bodies are to provide independent resolution of disputes between users and online platforms. As these bodies start forming and seeking certification, their role in helping users access redress and offering insights into platform compliance is becoming more important.

The meeting introduced the ODS Network’s mission: to encourage cooperation among certified bodies, promote best practices for data-sharing, and engage with platforms and regulators. Civil society organisations, which often support users who have faced harms on platforms, discussed how they could help identify cases that could be referred to ODS Bodies. In return, records from ODS Bodies could become a valuable resource for tracking systemic risks and holding platforms accountable under the DSA.

The discussion further focused on how to raise user awareness of redress options, make ODS procedures more accessible, and strengthen data reporting practices. Participants also outlined next steps for working more closely together, particularly around identifying the types of data that could best support civil society’s efforts to monitor risks and support enforcement actions by the European Commission.

Asha Allen Joins Euphoria Podcast to Discuss Civil Society in the EU

Civil society is under pressure, and now more than ever, solidarity and resilience are vital. These are the resounding conclusions of the latest episode of the podcast Euphoria, featuring CDT Europe’s Secretary General Asha Allen. Asha joined Arianna and Federico from EU&U to unpack the current state of human rights and the growing threats faced by civil society in Europe and beyond. With key EU legislation like the AI Act and Digital Services Act becoming increasingly politicised, they explored how to defend democracy, safeguard fundamental rights, and shape a digital future that truly serves its citizens. Listen now to discover how cross-movement collaboration and rights-based tech policy can help counter rising authoritarianism.

CDT Europe Secretary General Asha Allen speaking with podcasters Federico Terreni and Arianna Labasin from EU&U at the Euphoria Podcast recording.

Recommended read: FEPs, Silenced, censored, resisting: feminist struggles in the digital age

⚖ Equity and Data

EU AI Act Explainer — AI at Work

In the fourth part of our series on the AI Act and its implications for human rights, we examine the deployment of AI systems in the workplace and the AI Act’s specific obligations aimed at ensuring the protection of workers. In particular, we assess which of the prohibited AI practices could become relevant for the workplace and where potential loopholes and gaps lie. We also focus on the obligations of providers and deployers of high-risk AI systems, which could increase protection of workers from harms caused by automated monitoring and decision-making systems. Finally, we examine to what extent the remedies and enforcement mechanisms foreseen by the AI Act can be a useful tool for workers and their representatives to claim their rights. Overall, we find that the AI Act’s approach to allow more favourable legislation in the employment sector to apply is a positive step. Nevertheless, the regulation itself has only limited potential to protect workers’ rights.

CSOs Express Concern with Withdrawal of AI Liability Directive

CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework. 

Programme Director Laura Lazaro Cabrera also participated in a working lunch hosted by The Nine to discuss the latest trends and developments in AI policy following the Paris AI Summit. Among other aspects, Laura tackled the deregulatory approach taken by the European Commission, the importance of countering industry narratives, and the fundamental rights concerns underlying some of the key features of the AI Act.

Equity and Data Programme Director Laura Lazaro Cabrera speaking on a panel at the “Post-Paris AI Summit: Key Trends and Policies” event hosted by The Nine.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 New Team Member!

Marcel Mir Teijeiro, AI Policy Fellow in CDT Europe’s Equity and Data programme.

CDT Europe’s team keeps growing! At the beginning of April, we welcomed Marcel Mir Teijeiro as the Equity and Data programme’s new AI Policy Fellow. He’ll work on the implementation of the AI Act and CDT Europe’s advocacy to protect the right to effective remedy for AI-induced harms. Previously, Marcel participated in the Code of Practice multistakeholder process for General-Purpose AI Models, advising rights-holder groups across the cultural and creative industries on transparency and intellectual property aspects. A Spanish-qualified lawyer, he also helped develop a hash-based technical solution for training dataset disclosure, shared with the AI Office, the U.S. National Institute of Standards and Technology, and the UK AI Safety Institute. We are excited to have him on board, and look forward to working with him!

🗞 In the Press

⏫ Upcoming Events

Tech Policy in 2025: Where Does Europe Stand?: On May 15, CDT Europe and Tech Policy Press are co-hosting an evening of drinks and informal discussion, “Tech Policy in 2025: Where Does Europe Stand?”. It will be an opportunity to connect with fellow tech policy enthusiasts, share ideas, and figure out what the future holds for tech regulation in Europe. The event is currently sold out, but you can still join the waitlist in case some spots open up! 

Lifting the Veil – Advancing Spyware Regulation in the EU: CDT Europe, together with the Open Government Partnership, is hosting the sixth edition of the Civil Society Roundtable Series: “Lifting the Veil – Advancing Spyware Regulation in the EU.” The roundtable will gather representatives from EU Member States, EU institutions, and international bodies alongside civil society organisations, technologists, legal scholars, and human rights defenders for an in-depth exchange on the future of spyware regulation. The participation is invitation-only, so if you think you can contribute to the conversation, feel free to reach out at eu@cdt.org.

CPDP.ai 2025: From 21 to 23 May, CDT Europe will participate in CPDP.ai 18th International Conference. Each year, CPDP gathers academics, lawyers, practitioners, policymakers, industry, and civil society from all over the world in Brussels, offering them an arena to exchange ideas and discuss the latest emerging issues and trends. This year, CDT Europe will be hosting two workshops on AI and spyware, in addition to our Secretary General Asha Allen speaking on a panel on the intersection of the DSA and online gender-based violence. You can still register to attend the conference.

CDT Opposes Trump Administration Initiative to Routinely Collect Social Media Identifiers from Applicants for Immigration Benefits
https://cdt.org/insights/cdt-opposes-trump-administration-initiative-to-routinely-collect-social-media-identifiers-from-applicants-for-immigration-benefits/
Mon, 05 May 2025

The post CDT Opposes Trump Administration Initiative to Routinely Collect Social Media Identifiers from Applicants for Immigration Benefits appeared first on Center for Democracy and Technology.

These comments were co-authored by CDT Intern Jacob Smith. 

Today, CDT submitted comments opposing USCIS’ initiative to routinely collect social media identifiers from applicants for a wide variety of immigration benefits, ranging from asylum to naturalization. USCIS plans to collect social media identifiers to further its viewpoint-based immigration enforcement policy, which will punish and deport individuals on the basis of their constitutionally-protected expression and chill the lawful speech of citizens and noncitizens alike. CDT’s comments document the Trump Administration’s unconstitutional and punitive immigration enforcement actions against lawful residents who exercised their rights to speech and protest. Social media surveillance furthered through USCIS’ proposed social media identifier collection would fly in the face of our First Amendment values and chill valuable expression. The negative consequences of these policies will be made worse through the use of imprecise AI tools that are bound to fail, exacerbating the chilling and punitive effects of the administration’s unlawful policies.

Read the full comments.

Automated Tools for Social Media Monitoring Irrevocably Chill Millions of Noncitizens’ Expression
https://cdt.org/insights/automated-tools-for-social-media-monitoring-irrevocably-chill-millions-of-noncitizens-expression/
Tue, 15 Apr 2025

The post Automated Tools for Social Media Monitoring Irrevocably Chill Millions of Noncitizens’ Expression appeared first on Center for Democracy and Technology.

Last week, USCIS stated its plans to routinely screen applicants’ social media activity for alleged antisemitism when making immigration decisions in millions of cases, and announced that it is scouring the social media accounts of foreign students for speech that it deems potential grounds to revoke their legal status. Simultaneously, the Department of State has started using AI to enforce its “Catch and Revoke” policy and weed out “pro-Hamas” views among visa-holders, particularly including students who have protested against Israel’s war in Gaza. 

This isn’t USCIS’s first time conducting some form of social media monitoring; in fact, its first foray into social media data collection was in 2014. But it is the first time the government has used a previously obscure provision of immigration law to target a large group of noncitizens for removal based on their political opinions and activism that the Secretary of State has determined could have “potentially serious adverse foreign policy consequences.” The current Administration’s broad definitions of speech that could lead to visa revocation or application denial, and the questionable constitutionality of making immigration decisions based on viewpoint, raise concerns that will only be exacerbated by the use of flawed, error-prone social media monitoring technologies.

The American immigration system already subjects applicants to disproportionate invasions of privacy and surveillance, some applicants more than others. In the current Administration, immigration enforcement has been particularly aggressive and gone beyond the bounds of previous enforcement efforts, with agents bringing deportation proceedings against applicants on valid visas on the basis of their legally-protected speech, including authorship of op-eds, participation in protests, and, according to a real albeit now-deleted social media post by the Immigration and Customs Enforcement agency, their ideas. Noncitizens have long been aware of the government’s surveillance of their speech and their social media activity, which has deterred them from accessing essential services and speaking freely on a wide range of topics, including their experience with immigration authorities, labor conditions in their workplace, or even domestic violence.

What is happening now, however, is an unprecedented and calculated effort by the U.S. government to conduct surveillance of public speech and use the results to target for removal those who disagree with government policy. At the time of writing, over 1,000 student visas have been revoked according to the State Department, some of which have been for participation in First Amendment-protected activities. For example, one post-doctoral student at Georgetown reportedly had his visa revoked for posting in support of Palestine on social media, posts that were characterized as “spreading Hamas propaganda” by a DHS spokesperson. In a high-profile case from earlier this year, the former President of Costa Rica received an email from the U.S. government revoking his visa to the United States a few weeks after he criticized the government on social media, saying, “It has never been easy for a small country to disagree with the U.S. government, and even less so, when its president behaves like a Roman emperor, telling the rest of the world what to do.” All signs indicate that disagreement with this Administration’s viewpoints could lead to negative consequences for noncitizens seeking to enter or remain in this country in any capacity.

This expansion of ideological targeting is cast against the backdrop of an immigration system that faces, at times, a Sisyphean backlog of applications and insufficient oversight of enforcement decisions, which are only growing in this political climate. Mistakes are routinely made, and they have devastating consequences. To the extent oversight agencies did exist, including through entities such as the Department of Homeland Security’s Office for Civil Rights and Civil Liberties, they have been shuttered or undermined, which will make it all the more difficult to identify and fix errors and failures to provide due process.

Applicants have little recourse to seek remedy or appeal mistakes when they are made, instead having to choose among cautious over-compliance in the form of silence, potential retaliation, or self-deportation to avoid it all. Increased social media surveillance of noncitizens against this backdrop will compound existing inequities within the system, and will almost certainly further chill noncitizens’ ability to speak and participate freely in society for fear of running afoul of the Administration.

And that’s all before accounting for the problems with the tools that the government will use to conduct this monitoring. The automated tools used for this type of social media surveillance are likely to be based on keyword filters and machine learning models, including large language models such as those that underlie chatbots such as ChatGPT. These tools are subject to various flaws and limitations that will exacerbate the deprivation of individuals’ fundamental rights to free expression and due process. This litany of problems with automated social media analysis is so pronounced that DHS opted against using such a system during the first Trump administration. DHS’s concerns about erroneous enforcement and deportations may have disappeared, but the risks from this technology have not.

First, models may be trained with a particular bias. Social media monitoring systems are generally trained on selected keywords and data easily found on the web, such as data scraped from Reddit, Wikipedia, and other largely open-access sources, which over-index on the views and perspectives of a few. Keywords may be added to the training corpus to fit the domain of use, such as offering examples of what constitutes “anti-semitism” or threats to national security. Should the training data over-represent a particular set of views or designations of “foreign terrorists,” the model may over-flag speech by some individuals more than others. The Administration’s over-capacious definition of the term “antisemitic” may be weaponized during the training of these social media monitoring models, subjecting to greater scrutiny anyone who has engaged in speech with which the Administration disagrees on topics such as Israel-Palestine or campus protests related to military actions against Gaza, even where the speech is protected by the First Amendment.

Second, and relatedly, these prescriptive tools struggle to parse context. While keyword filters and machine learning models may be able to identify words or phrases they’ve been tasked to detect, they are unable to parse the context in which the term is used, including such essential human expressions as humor, sarcasm, irony, and reclaimed language. We’ve written previously about how the use of automated content analysis tools by Facebook to enforce its Dangerous Organization & Individuals’ policy erroneously flagged and took down all posts containing the word “shaheed” (which means martyr in Arabic), even when an individual was named Shaheed or in contexts where individuals were not using the term in a way that glorified or approved of violence. Noncitizen journalists who cover protests or federal policy and post their articles on social media may also be flagged and surveilled simply for doing their job. People named Isis have long been caught up in the fray and flagged by these automated technologies. Posts by individuals citing the “soup nazi” episode of Seinfeld may also be swept in this analysis. Models’ inability to parse context will also limit their ability to conduct predictive analysis. Vendors procured by USCIS to conduct social media monitoring assert that they use AI to scan for “risky keywords” and identify persons of interest, but promises of predictive analysis likely rest on untested and discriminatory assumptions and burden the fundamental rights of all individuals swept up by these social media monitoring tools. 
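The context problem described above can be pictured with a deliberately naive sketch. This is illustrative only: the keyword list and example posts are hypothetical, and real moderation systems are more complex, but the failure mode of matching strings without understanding context is the same one that ensnares people named Shaheed or Isis.

```python
# Illustrative only: a naive keyword filter that matches tokens without
# any understanding of context. The keyword list and posts are hypothetical.

FLAGGED_KEYWORDS = {"isis", "shaheed"}

def naive_flag(post: str) -> bool:
    """Flag a post if any token matches a keyword, ignoring context."""
    tokens = post.lower().replace(",", " ").replace(".", " ").split()
    return any(token in FLAGGED_KEYWORDS for token in tokens)

posts = [
    "My name is Isis and I study history.",       # a personal name
    "Shaheed scored the winning goal yesterday.",  # a personal name
    "Lovely weather in Toronto today.",
]

flags = [naive_flag(p) for p in posts]
# The first two innocuous posts are flagged purely on string match;
# a human reader would immediately recognize both as personal names.
```

A system built this way has no mechanism for recovering the meaning a human reader sees, which is why humor, sarcasm, names, and reclaimed language all produce false positives.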

Finally, the systems will be especially error-prone in multilingual settings. New multilingual language models purport to work better in more languages, yet they are still trained primarily on English-language data, machine-translated non-English data, and other available documents (often religious or government texts), all imperfect proxies for how individuals speak their languages online. Multilingual training data for models is likely to underinclude terms frequently used by native speakers, including spoken regional dialects, slang, code-mixed terms, and “algospeak.” As a result, most models are unable to parse the more informal ways people have of speaking online, leading to erroneous outcomes when models analyze non-English language speech.
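The mirror-image failure, missed content rather than over-flagging, can be sketched the same way. In this hypothetical example, a detector trained only on a standard written form misses "algospeak" and spaced-out variants of the very same word, the informal online forms that the training data described above tends to underinclude.

```python
# Illustrative only: matching trained on a standard written form misses
# informal online variants. All terms here are hypothetical stand-ins.

TRAINED_TERMS = {"attack"}  # the only form the hypothetical model "knows"

def detect(post: str) -> bool:
    """Return True if any trained term appears as a substring."""
    return any(term in post.lower() for term in TRAINED_TERMS)

variants = [
    "attack",       # standard form: detected
    "att4ck",       # "algospeak" spelling: missed
    "a t t a c k",  # spaced evasion: missed
]

results = [detect(v) for v in variants]
```

The same gap cuts both ways: informal variants the model never saw go undetected, while unfamiliar dialect or code-mixed speech it half-recognizes is liable to be misclassified.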

There have already been countless instances in which U.S. immigration enforcement agencies have used digital translation technologies in problematic ways, denying individuals a fair process and even safety. For example, an automated translation tool led to an individual erroneously being denied asylum because it failed to understand that she was seeking safety from parental abuse, literally translating her description of her perpetrator, "el jefe," as her boss rather than her father. An individual from Brazil was detained for six months over a supposedly incomplete asylum application after the translation tool ICE used rendered "Belo Horizonte" literally as "beautiful horizon" instead of recognizing it as a city in which the applicant had lived. Another automated system used to conduct content analysis mistranslated "good morning" in Arabic as "attack them." Widespread use of these error-prone systems to detect disfavored ideas will only exacerbate the discriminatory treatment of those who speak English as a second language.

Ultimately, the adoption of automated technologies to scan social media data will punish people for engaging in legal speech and result in more errors in an already flawed system. It will also chill the speech of millions of people in this country and abroad, impoverishing the global conversations that happen online. An applicant seeking to adjust their status or become a U.S. citizen, or even a U.S. citizen seeking to communicate with a noncitizen, will reasonably think twice before speaking freely or engaging in constitutionally-protected activities like protesting, simply because of the specter of social media surveillance. They already are.

The post Automated Tools for Social Media Monitoring Irrevocably Chill Millions of Noncitizens’ Expression appeared first on Center for Democracy and Technology.

]]>
Why We Need a Digital Rights Framework for Gender-Diverse Communities https://cdt.org/insights/why-we-need-a-digital-rights-framework-for-gender-diverse-communities/ Tue, 15 Apr 2025 17:45:00 +0000 https://cdt.org/?post_type=insight&p=108363 By: Jess Reia, Assistant Professor of Data Science and Faculty-lead at the Digital Technology for Democracy Lab at the University of Virginia Disclaimer: The views expressed by CDT’s Non-Resident Fellows are their own and do not necessarily reflect the policy, position, or views of CDT. Few topics are more polarizing than gender identity and expression […]

The post Why We Need a Digital Rights Framework for Gender-Diverse Communities appeared first on Center for Democracy and Technology.

]]>
By: Jess Reia, Assistant Professor of Data Science and Faculty-lead at the Digital Technology for Democracy Lab at the University of Virginia

Disclaimer: The views expressed by CDT’s Non-Resident Fellows are their own and do not necessarily reflect the policy, position, or views of CDT.

Few topics are more polarizing than gender identity and expression for the current U.S. administration and its supporters. In the most recent U.S. election campaigns, several candidates used anti-trans discourse to worsen polarization, rhetoric reflected in President Trump's actions on his first day in office. On January 20, President Trump signed various Executive Orders, including one on so-called "gender ideology" and "biological truth," in which he declared that sex is assigned at conception and that only two sexes, male and female, exist. Part of a global agenda, and interconnected with other systemic issues in the U.S., the rampant attacks on transgender rights both online and off expose the importance of having a digital rights framework that considers the unique needs of gender-diverse communities. 

We see now that it is not only authoritarian governments attacking trans rights, as many countries usually considered exemplary democracies are also failing to offer basic protection to those at the margins of society. For example, in the United States in December 2024, the American Civil Liberties Union tracked at least 574 anti-LGBTQIA+ bills in the 118th Congress, most of them targeting gender-diverse communities through restrictions on healthcare (many on gender-affirming care), free speech, and civil rights. If we count the bills carried over from the previous year, that number is as high as 669. Even when these bills are defeated, their existence contributes to fearmongering and the current trend of dehumanizing trans people, which causes harm, anxiety, and human rights violations. Germane to this conversation is a reckoning of the ways digital technologies will compound or counter these threats to trans people.

In response to these threats, our team at the University of Virginia’s Digital Technology for Democracy Lab conducted research that aims to reimagine digital rights frameworks for the trans community. The study serves as an exploratory research project on trans data and public health supported by the UVA Center for Global Health Equity, and is inspired by our 2022 response to a White House Office of Science and Technology Policy request for information. A trans-centred approach is frequently excluded from decision-making spaces and international forums focusing on digital rights (such as the UN Internet Governance Forum), but can take us steps further in safeguarding fundamental rights for everyone.

Digital technologies and gender identity and expression

These anti-trans attacks should matter to digital rights advocates. Digital technologies serve as platforms for knowledge-sharing and care, yet are also arenas where anti-trans attacks are platformed and rampant. This feeds into a sense of ambivalence about being online as a transgender person. On the one hand, platforms such as social media networks serve as venues for community-building and become vital support networks, enabling individuals to share experiences, access resources, and foster a sense of belonging. In some instances, digital technologies may be the only place individuals find information on gender-affirming care.

On the other hand, these platforms can also facilitate harassment, abuse, and violence, becoming arenas for online gender-based violence (OGBV), particularly affecting transgender communities and women. For transgender individuals, OGBV manifests in specific ways, online and offline, such as the dismissal of gender identity, sharing of images without permission, hateful comments, and threats of violence and death. Simultaneously, there is a concerning trend of bills limiting the freedom of expression of educators and advocates in the U.S. 

Other concerns include the widespread use of biometric data (i.e. faces, fingerprints and iris scans) and automatic gender recognition (AGR) systems (including but not limited to facial recognition), which assume an individual’s gender based on biometric markers often to verify identity. Beyond issues of accuracy, these AGR Systems pose risks related to misidentification and discrimination against people undergoing gender-affirming care.

Building a Trans Digital Rights Framework

This approach is of utmost importance because digital rights frameworks rarely reflect the unique needs of adult gender-diverse communities. In our report, we present a first attempt at conceptualizing principles, guidelines and responses that can be relevant to other communities dealing with the ambivalences, possibilities, and risks of being visible and online. This framework applies to adults, and is not intended to address digital rights issues related to minors. We refer to these principles as a Trans Digital Rights (TDR) Framework. To build a solid foundation for inclusive digital rights advocacy, we cover issues pertaining to data collection, citizen-generated data, and artificial intelligence, then present guidance to a range of actors that includes:

  • Reimagining the right to be forgotten in relation to gender transition: Reimagine mechanisms that facilitate the exercise of the right to be forgotten applied to trans identity information and gender transition, allowing people to exclude, deindex, or delete their outdated, useless, or decontextualized information from online and offline databases.
  • Incorporating a purpose limitation principle: Incorporate a requirement for companies collecting information on gender identity and expression to only collect data necessary for the service they are providing, while being transparent about how that data will be used and limiting its use for other purposes.
  • Enabling 2SLGBTQIA+ positive content moderation: Enable 2SLGBTQIA+ positive content moderation policies that are conscious of the ways both over-censorship and under-protection of online spaces limit the ability of gender-diverse people to use the internet – and social media specifically. Such filtering should be a user choice, not a platform requirement. Additionally, adding more user control over recommender systems can help avoid unwanted advertisements that could reinforce binary gender identities. Learn more about "2SLGBTQIA+" and gender-diverse communities.
  • Prohibiting deadnaming and misgendering in online platforms: Include clauses that prohibit human rights abuses based on gender, sexuality, and gender identity in terms of service (ToS) and policies of digital platforms, and recognize targeted deadnaming and misgendering as hate speech.
  • Addressing mis/disinformation and polarization: Address specific challenges that misinformation and polarization create for transgender individuals as an important first step for public awareness. It is equally important to understand the weaponization of historical hatred of gender-diverse communities and work together with specialists to tackle these issues. Examples are de-platforming extremist and transphobic content, and preventing misinformation on gender identities, gender-affirming care, and other aspects of trans lives.
  • Bringing digital rights into transgender-focused governmental data collection: The federal government should apply digital rights principles to data collection to increase data inclusion, visibility, and understanding of trans people and the issues they face while also protecting data privacy, even (and especially) in countries and jurisdictions lacking robust privacy regulatory frameworks.
  • Considering gender identity data as sensitive data: Under many data protection frameworks, sensitive data receives heightened protections, often requiring consent to collect it, mandatory Data Protection Impact Assessment (DPIA), and limits to its use.
  • Preventing deanonymization: Federal agencies, companies, and other non-state actors must adopt measures to prevent data de-anonymization relating to respondents’ gender identities in data collection processes, from research projects to censuses and surveys.
  • Rethinking data breaches from the perspective of attacks on trans rights: As data breaches can lead to significant privacy violations, outing individuals without their consent and increasing their vulnerability to discrimination, harassment, and targeted violence, we must rethink data breaches to incorporate the profound and disproportionate impacts on transgender individuals and communities.
  • Informing on intended uses of data: Make clear to survey respondents the intended uses of the collected data and the conditions for sharing it with other agencies. Individuals with gender non-conforming identities may feel comfortable providing this type of data under certain circumstances. Still, they certainly would hesitate if they knew their data would be available to other federal or state agencies that could put them at risk.
  • Facilitating the removal and changing of gender identity information in IDs: Policymakers should ensure access for trans individuals to correct name and gender information for both physical and digital IDs and any other government data. Additionally, we need to consider the removal of gender information altogether from IDs.
  • Involving advocates, civil society, and community organizations: Invite actors working on promoting digital rights to join the conversation about 2SLGBTQIA+ data equity—and vice versa—to generate a productive exchange about moving forward collaboratively. Additionally, invest in capacity-building of gender-diverse communities, learning from data stewardship efforts, and developing policy recommendations and guidelines for data collection alongside gender-diverse communities and advocates.

After detailing principles and guidance in the TDR framework, our report also introduces seven policy recommendations aimed at different actors (i.e., government, civil society organizations and advocates, industry, and academia). These recommendations range from broader actions, such as the need to engage with civil society organizations and advocates to improve data collection and evidence-based policymaking while strengthening open data efforts, to more specific actionable items. Examples of the latter include improving mechanisms that allow the participation of trans people in digital rights advocacy, building platforms for knowledge-sharing, designing trans-friendly AI impact assessments and trans-inclusive adoption tools, as well as prioritizing responsible, ethical, and trans-friendly health care, online and offline.

You can download the full report from LibraOpen, the online archive of University of Virginia Scholarship. 

The work we present here is an attempt to fill in gaps in current research and advocacy, as well as build bridges between the trans rights and digital rights movements. This research was led by Dr. Jess Reia and co-authored with researchers Rachel Leach and Sophie Li. The project received funding and support from the UVA Center for Global Health Equity, the UVA School of Data Science and the Digital Technology for Democracy Lab at the Karsh Institute.

The post Why We Need a Digital Rights Framework for Gender-Diverse Communities appeared first on Center for Democracy and Technology.

]]>
EU Tech Policy Brief: April 2025 https://cdt.org/insights/eu-tech-policy-brief-april-2025/ Tue, 01 Apr 2025 21:26:17 +0000 https://cdt.org/?post_type=insight&p=108123 Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact […]

The post EU Tech Policy Brief: April 2025 appeared first on Center for Democracy and Technology.

]]>
Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact on digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Citizen Lab Unveils Surveillance Abuses in Europe and Beyond                                       

The recent Citizen Lab report regarding the deployment of Paragon spyware in EU Member States, particularly Italy and allegedly Cyprus and Denmark, highlights a concerning trend of surveillance targeting journalists, government opponents, and human rights defenders. Invasive monitoring of journalist Francesco Cancellato, members of the NGO Mediterranea Saving Humans, and human rights activist Yambio raises serious concerns about press freedom, fundamental rights, and the broader implications for democracy and the rule of law in the EU. 

The Italian government’s denial that it authorised surveillance, while reports indicate otherwise, indicates a lack of transparency and accountability. Reportedly, the Undersecretary to the Presidency of the Council of Ministers admitted that Italian intelligence services used Paragon spyware against Mediterranean activists, citing national security justifications. This admission highlights the urgent need for transparent oversight mechanisms and robust legal frameworks to prevent misuse of surveillance technologies. 

Graphic for Citizen Lab report, which reads, "Virtue or Vice? A First Look at Paragon's Proliferating Spyware Options". Graphic has a yellow background, and a grayscale hand reaching through great message bubbles.

Lack of decisive action at the European level in response to these findings is alarming. Efforts to initiate a plenary debate within the European Parliament have stalled due to insufficient political support, reflecting a broader pattern of inaction that threatens civic space and fundamental rights across the EU. This inertia is particularly concerning given parallel developments in France, Germany, and Austria, where legislative measures are being considered to legalise use of surveillance technologies. In light of the European Parliament’s PEGA Committee findings on Pegasus and equivalent spyware, it is imperative that EU institutions and Member States establish clear, rights-respecting policies governing the use of surveillance tools. Normalisation of intrusive surveillance without adequate safeguards poses a direct challenge to democratic principles and the protection of human rights within the EU.

Recommended read: Amnesty International, Serbia: Technical Briefing: Journalists targeted with Pegasus spyware

 💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Publishes Analysis on DSA Risk Assessment Reports

Key elements of the Digital Services Act’s (DSA) due diligence obligations for Very Large Online Platforms and Search Engines (VLOPs/VLOSEs) are the provisions on risk assessment and mitigation. Last November, VLOPs and VLOSEs published their first risk assessment reports, which the DSA Civil Society Coordination Group, convened and coordinated by CDT Europe, took the opportunity to jointly assess. We identified both promising practices to adopt and critical gaps to address in order to improve future iterations of these reports and ensure meaningful DSA compliance.

Our analysis zooms in on key topics like online protection of minors, media pluralism, electoral integrity, and online gender-based violence. Importantly, we found that platforms have overwhelmingly focused on identifying and mitigating user-generated risks, as a result focusing less on risks stemming from the design of their services. In addition, platforms do not provide sufficient metrics and data to assess the effectiveness of the mitigation measures they employ. In our analysis, we describe what data and metrics future reports could reasonably include to achieve more meaningful transparency. 

Graphic with a blue background, with logo for the DSA Civil Society Coordination Group featuring members' logos. In black text, graphic reads, "Initial Analysis on the First Round of Risk Assessments Reports under the EU Digital Services Act".

CDT Europe’s David Klotsonis, lead author of the analysis, commented, “As the first attempt at DSA Risk Assessments, we didn’t expect perfection — but we did expect substance. Instead, these reports fall short as transparency tools, offering little new data on mitigation effectiveness or meaningful engagement with experts and affected communities. This is a chance for platforms to prove they take user safety seriously. To meet the DSA’s promise, they must provide real transparency and make civil society a key part of the risk assessment process. We are committed to providing constructive feedback and to fostering an ongoing dialogue.”

Recommended read: Tech Policy Press, A New Framework for Understanding Algorithmic Feeds and How to Fix Them 

⚖ Equity and Data

Code of Practice on General-Purpose AI Final Draft Falls Short

Following CDT Europe’s initial reaction to the release of the third Draft Code of Practice on General-Purpose AI (GPAI), we published a full analysis highlighting key concerns. One major issue is the Code’s narrow interpretation of the AI Act, which excludes fundamental rights risks from the list of selected risks that GPAI model providers must assess. Instead, assessing these risks is left as an option, and is only required if such risks are created by a model’s high-impact capabilities.

This approach stands in contrast to the growing international consensus, including the 2025 International AI Safety Report, which acknowledges the fundamental rights risks posed by GPAI. The Code also argues that existing legislation can better address these risks, but we push back on this claim. Laws like the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act lack the necessary tools to fully tackle these challenges.

Moreover, by making it optional to assess fundamental rights risks, the Code weakens some of its more promising provisions, such as requirements for external risk assessments and clear definitions of unacceptable risk tiers. 

In response to these concerns, we joined a coalition of civil society organisations in calling for a revised draft that explicitly includes fundamental rights risks in its risk taxonomy.

Global AI Standards Hub Summit 

At the inaugural global AI Standards Hub Summit, co-organised by the Alan Turing Institute, CDT Europe’s Laura Lazaro Cabrera spoke at a session exploring the role of fundamental rights in the development of international AI standards. Laura highlighted the importance of integrating sociotechnical expertise and meaningfully involving civil society actors to strengthen AI standards from a fundamental rights perspective. Laura emphasised the need to create dedicated spaces for civil society to participate in standards processes, tailored to the diversity of their contributions and resource limitations.  

Image featuring Programme Director for Equity and Data Laura Lazaro Cabrera speaking at a panel with three other panelists on the role of fundamental rights in standardisation, at the Global AI Standard Hub Summit

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 Job Opportunities in Brussels: Join Our EU Team

We’re looking for two motivated individuals to join our Brussels office and support our mission to promote human rights in the digital age. 

The Operations & Finance Officer will play a key role in keeping our EU office running smoothly—managing budgets, coordinating logistics, and ensuring strong operational foundations for our advocacy work. 

We’re also seeking an EU Advocacy Intern to support our policy and advocacy efforts, with hands-on experience in research, event planning, and stakeholder engagement. 

Apply before 23 April 2025 by sending your cover letter and CV to hr@cdt.org. For more information, visit our website.

🗞 In the Press

⏫ Upcoming Event

Pall Mall Process Conference: On 3 and 4 April, our Director for Security and Surveillance Silvia Lorenzo Perez will participate in the annual Pall Mall Process Conference in Paris. 

The post EU Tech Policy Brief: April 2025 appeared first on Center for Democracy and Technology.

]]>
CDT Joins More Than 100 Civil Society Organizations In Recommendations to Operationalize the WSIS +20 Review Process https://cdt.org/insights/cdt-joins-more-than-100-civil-society-organizations-in-recommendations-to-operationalize-the-wsis-20-review-process/ Tue, 01 Apr 2025 19:50:43 +0000 https://cdt.org/?post_type=insight&p=108113 On March 25, 2025, CDT joined more than 100 organizations in a letter presenting a five point plan for the implementation of the WSIS +20 review process. WSIS +20 is the 20 year review of the World Summit on the Information Society, an ongoing process to discuss global governance of digital technologies and their impact […]

The post CDT Joins More Than 100 Civil Society Organizations In Recommendations to Operationalize the WSIS +20 Review Process appeared first on Center for Democracy and Technology.

]]>
On March 25, 2025, CDT joined more than 100 organizations in a letter presenting a five-point plan for the implementation of the WSIS +20 review process. WSIS +20 is the 20-year review of the World Summit on the Information Society, an ongoing process to discuss global governance of digital technologies and their impact on human rights. The plan provides five critical recommendations: publish a clear and inclusive timeline, ensure transparency and accountability, facilitate inclusive and meaningful stakeholder consultations, broaden and diversify participation, and maximize inclusive participation in final negotiations. These recommendations are designed to ensure that the WSIS +20 review successfully incorporates the points of view of the wide variety of stakeholders around the globe that access information online and use digital technologies, and that human rights are protected in the digital age.

Read the full letter.

The post CDT Joins More Than 100 Civil Society Organizations In Recommendations to Operationalize the WSIS +20 Review Process appeared first on Center for Democracy and Technology.

]]>
A Call for US Leadership in the Digital Age https://cdt.org/insights/a-call-for-us-leadership-in-the-digital-age/ Mon, 31 Mar 2025 18:13:51 +0000 https://cdt.org/?post_type=insight&p=108104 The global digital economy stands at a crossroads. The decisions made today will determine whether the Internet remains an open engine for innovation, economic growth, and free expression, or becomes fragmented and controlled by forces hostile to these values. The United States has the opportunity—and responsibility—to lead the world towards a future where the Internet […]

The post A Call for US Leadership in the Digital Age appeared first on Center for Democracy and Technology.

]]>
The global digital economy stands at a crossroads. The decisions made today will determine whether the Internet remains an open engine for innovation, economic growth, and free expression, or becomes fragmented and controlled by forces hostile to these values. The United States has the opportunity—and responsibility—to lead the world towards a future where the Internet empowers individuals, businesses, and societies.

As a group of organizations and experts that believe an open, global, secure, and trusted Internet is crucial to digital trade and online discourse, we are eager to support the administration in advancing principles that protect the Internet’s ability to enable innovation, promote free expression and access to information, and foster a dynamic digital economy.

The Stakes: A Free and Open Internet Under Threat

The Internet has revolutionized the way we live, work, speak, and learn. It has fueled unprecedented economic growth, connected people across borders, and provided a platform for the free exchange of ideas. However, this progress is under threat. A growing number of countries are adopting policies that restrict cross-border data flows, mandate data localization, force the disclosure of source code, and discriminate against foreign digital products. These policies undermine the very foundations of the Internet, threatening its ability to support innovation, economic growth, and fundamental freedoms.

A Call to Leadership

The United States’ long tradition of leadership in promoting an open Internet has directly contributed to its strength as a hub for tech innovation and thriving digital economy.

Since the 1990s, and particularly beginning in 2013, leaders in Congress, including Senators John Thune and Ron Wyden, pushed for the United States to lead internationally to promote an open Internet and digital trade, by ensuring that data can flow freely among trading partners and to prevent discrimination against American digital content. From there, the United States promoted and secured international consensus that protected the Internet’s ability to support a thriving U.S. digital economy, including in trade agreements negotiated by the Trump administration with guidance and overwhelming bipartisan support from Congress.

Now is the time to maintain that leadership in the digital realm. The US should work with like-minded countries to establish a framework for open data flows crucial to an open Internet and digital trade, with the following core principles:

  1. Protect The Free Flow of Information: Data is the lifeblood of the digital economy. Restrictions on cross-border data flows, including tariffs on electronic transmissions and limits on access to information, would stifle innovation, limit consumer choice, and impede economic growth and the global exchange of ideas. The US must champion policies that ensure the free flow of information across borders, while respecting privacy and security.
  2. Data Security and Privacy, Not Data Localization: Data localization requirements do not enhance security. In fact, they often have the opposite effect, fragmenting the Internet and making it more difficult to protect data from cybersecurity threats. The US should advocate for policies that promote data security and privacy through international cooperation and the adoption of strong cybersecurity standards, and should push back against protectionist measures that isolate countries, harm businesses, and limit the free flow of information.
  3. Prevent Mandated Source Code Disclosure: Forced disclosure of source code as a condition for doing business in a country undermines intellectual property rights, discourages innovation, and makes businesses vulnerable to cyberattacks. While open-source development fosters transparency and collaboration, mandated access to proprietary code gives adversaries an unwarranted competitive advantage, amplifies the potential for surveillance and exploitation, and jeopardizes national security and the integrity of the Internet. The US must firmly oppose such policies, recognizing that protecting intellectual property is essential for a dynamic digital economy. Disclosures for legitimate judicial and regulatory purposes must be narrowly tailored and accompanied by proportionate privacy and security assurances.
  4. Don’t Discriminate Against Foreign Digital Services and Products: Governments should not discriminate against foreign digital products or services. Such discrimination distorts markets, limits consumer choice, and undermines the benefits of global competition. While governments should be able to enforce generally applicable regulations, the US must advocate for policies that ensure a level playing field for all digital businesses, regardless of their country of origin.

The United States has a unique opportunity to protect the Internet and shape the future of the digital economy. By championing these principles, the US can help build a global digital ecosystem that is open and secure, and works for all. We urge the Administration and Congress to seize this moment and lead the world towards a future where the Internet continues to empower individuals, businesses, and societies around the globe.

Sincerely,

Internet Society

American Civil Liberties Union

Center for Democracy and Technology

Freedom House

Internet Society Washington DC Chapter

Also find the letter here.

The post A Call for US Leadership in the Digital Age appeared first on Center for Democracy and Technology.

]]>
Using Internet Standards to Keep Kids Away from Adult Content Online https://cdt.org/insights/using-internet-standards-to-keep-kids-away-from-adult-content-online/ Tue, 25 Mar 2025 15:13:21 +0000 https://cdt.org/?post_type=insight&p=108037 In an effort to block kids from online content intended for adults, some have argued that age-verification or age-assurance tools offer the possibility of simple, effective guardrails.  In our brief to the Supreme Court last year, CDT laid out serious concerns these tools raise regarding privacy and First Amendment freedoms – in addition to questions […]

The post Using Internet Standards to Keep Kids Away from Adult Content Online appeared first on Center for Democracy and Technology.

]]>
In an effort to block kids from online content intended for adults, some have argued that age-verification or age-assurance tools offer the possibility of simple, effective guardrails. 

In our brief to the Supreme Court last year, CDT laid out serious concerns these tools raise regarding privacy and First Amendment freedoms – in addition to questions about their efficacy. 

But that doesn’t mean technical solutions can’t address some valid concerns about adult content. In particular, two policies related to internet standards are worth pursuing right now.

First, parents can already set most children’s devices to block adult websites, a capability that depends on sites labeling themselves as adults-only via metadata. Most adult content sites are happy to label themselves as adults-only: it’s cheap and easy, and allowing children to view their content raises legal, regulatory, ethical, and commercial concerns that sites would rather avoid. Making these tools more robust, with well-defined standards that are widely adopted by websites and consistently interpreted by web browsers and parental control tools, would make them considerably more effective.
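To make the self-labeling idea concrete, here is a minimal sketch of how a parental-control tool might check a page for an adults-only metadata label. It assumes the RTA-style convention of a `<meta name="rating">` tag; real filtering tools check several labeling schemes, and the exact set of recognized label values here is illustrative.

```python
# Sketch: detect an adults-only self-label in a page's metadata.
# Assumes an RTA-style <meta name="rating"> convention; the set of
# recognized label values below is illustrative, not exhaustive.
from html.parser import HTMLParser

ADULT_RATINGS = {"rta-5042-1996-1400-1577-rta", "adult", "mature"}

class RatingMetaParser(HTMLParser):
    """Scan start tags for a <meta name="rating"> adults-only label."""

    def __init__(self):
        super().__init__()
        self.adult_labeled = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        name = (attr_map.get("name") or "").lower()
        content = (attr_map.get("content") or "").lower()
        if name == "rating" and content in ADULT_RATINGS:
            self.adult_labeled = True

def is_self_labeled_adult(html: str) -> bool:
    """Return True if the page declares itself adults-only via metadata."""
    parser = RatingMetaParser()
    parser.feed(html)
    return parser.adult_labeled
```

A device-level filter running logic like this could block a labeled page before rendering it, which is why wide, consistent adoption of the label matters more than any one tool.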

Alternatively, just as users can request “safe mode” versions of Google Search or YouTube, devices could be configured to request “safe mode” from other sites on the internet. Proactively alerting sites that there’s a young person (or just someone avoiding NSFW content) on the other end of the connection has the advantage of working on platforms that host content appropriate for general audiences alongside content for adults only.
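The “safe mode” request could be expressed as an HTTP header the device attaches to outgoing requests. A minimal server-side sketch follows; the header name `Safe-Content-Mode` is hypothetical, since the standards the post refers to are still proposals and any eventual name will differ.

```python
# Sketch: a site honoring a client-declared "safe mode" signal.
# The header name "Safe-Content-Mode" is hypothetical; the relevant
# standards are still proposals, so real deployments will differ.

def wants_safe_mode(headers: dict) -> bool:
    """Return True if the request asks for general-audience content only."""
    value = headers.get("Safe-Content-Mode", "").strip().lower()
    return value in {"1", "on", "true"}

def select_content(headers: dict, general: list, adult: list) -> list:
    """Serve only general-audience items when the client signals safe mode."""
    if wants_safe_mode(headers):
        return list(general)
    return list(general) + list(adult)
```

Because the signal rides along with every request, a mixed-audience platform can filter per-request rather than per-account, which is what makes this approach workable on sites that serve both audiences.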

There’s plenty of work to do to implement these tools, but standards for sites to self-label and for users to indicate their content preferences are already being proposed.

It’s possible that future age-verification and age-assurance systems will avoid the worst problems of today’s, perhaps by associating a government-issued ID with unlinkable digital tokens that a user can present to a website without sending a photo of an actual ID card or revealing a government-issued identifier. But for the time being, standards-based solutions like these provide the most practical opportunities both to protect children from adult content and to protect the rights of adults to access the content they want, while also avoiding severe privacy and security issues.

]]>