Insights Archive - Center for Democracy and Technology
https://cdt.org/insights/

Op-Ed – Artificial Sweeteners: The Dangers of Sycophantic AI
https://cdt.org/insights/op-ed-artificial-sweeteners-the-dangers-of-sycophantic-ai/ | Wed, 14 May 2025

This op-ed – authored by CDT’s Amy Winecoff – first appeared in Tech Policy Press on May 14, 2025. A portion of the text has been pasted below.

At the end of April, OpenAI released a model update that made ChatGPT feel less like a helpful assistant and more like a yes-man. The update was quickly rolled back, with CEO Sam Altman admitting the model had become “too sycophant-y and annoying.” But framing the concern as just about the tool’s irritating cheerfulness downplays the potential seriousness of the issue. Users reported the model encouraging them to stop taking their medication or to lash out at strangers.

This problem isn’t limited to OpenAI’s recent update. A growing number of anecdotes and reports suggest that overly flattering, affirming AI systems may be reinforcing delusional thinking, deepening social isolation, and distorting users’ grip on reality. In this context, the OpenAI incident serves as a sharp warning: in the effort to make AI friendly and agreeable, tech firms may also be introducing new dangers.

At the center of AI sycophancy are techniques designed to make systems safer and more “aligned” with human values. AI systems are typically trained on massive datasets sourced from the public internet. As a result, these systems learn not only from useful information but also from toxic, illegal, and unethical content. To address these problems, AI developers have introduced techniques to help AI systems respond in ways that better match users’ intentions.
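
One prominent example of such a technique is reinforcement learning from human feedback (RLHF), in which a reward model is trained to score responses that human raters preferred more highly. The toy sketch below is our own illustration, not code from the op-ed: it shows the standard pairwise preference loss used in reward modeling, with invented scores, and hints at why raters who favor flattering answers can push models toward sycophancy.

```python
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    # Bradley-Terry style pairwise loss used in RLHF reward modeling:
    # -log(sigmoid(r_preferred - r_rejected)). Low when the reward model
    # already ranks the human-preferred reply higher.
    margin = score_preferred - score_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

# Invented reward scores for one comparison pair.
print(preference_loss(2.0, 0.5))  # ~0.20: preference already satisfied
print(preference_loss(0.5, 2.0))  # ~1.70: training pushes scores the other way

# If raters systematically prefer flattering replies, minimizing this loss
# teaches the reward model to rate flattery highly -- one mechanism by which
# "alignment" training can produce a yes-man.
```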

Read the full article.

AI Agents In Focus: Technical and Policy Considerations
https://cdt.org/insights/ai-agents-in-focus-technical-and-policy-considerations/ | Wed, 14 May 2025

Brief entitled, “AI Agents In Focus: Technical and Policy Considerations.” White and black document on a grey background.

AI agents are moving rapidly from prototypes to real-world products. These systems are increasingly embedded into consumer tools, enterprise workflows, and developer platforms. Yet despite their growing visibility, the term “AI agent” lacks a clear definition and is used to describe a wide spectrum of systems — from conversational assistants to action-oriented tools capable of executing complex tasks. This brief focuses on a narrower and increasingly relevant subset: action-taking AI agents, which pursue goals by making decisions and interacting with digital environments or tools, often with limited human oversight. 

As an emerging class of AI systems, action-taking agents mark a distinct shift from earlier generations of generative AI. Unlike passive assistants that respond to user prompts, these systems can initiate tasks, revise plans based on new information, and operate across applications and over longer time horizons. They typically combine large language models (LLMs) with structured workflows and tool access, enabling them to navigate interfaces, retrieve and input data, and coordinate tasks across systems, often alongside conversational interfaces. In more advanced settings, they operate in orchestration frameworks where multiple agents collaborate, each with distinct roles or domain expertise.
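
To make this concrete, here is a minimal sketch of the plan-act-observe control loop such an agent typically runs. It illustrates the general pattern under our own assumptions, not code from the brief or any specific product: the JSON action format, the stubbed call_llm function, and the search_web tool are all invented.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real model API call (hypothetical for this sketch);
    # it always returns a "finish" action so the example runs end-to-end.
    return json.dumps({"type": "finish", "answer": "stub answer"})

# A restricted tool table is one scaffolding choice that bounds what the
# agent can do; this single tool is invented for illustration.
TOOLS = {
    "search_web": lambda query: f"results for {query!r}",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                    # the control loop
        action = json.loads(call_llm("\n".join(history)))
        if action["type"] == "finish":            # agent decides it is done
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])   # tool access
        history.append(f"{action['tool']} -> {result}")   # observe, then re-plan
    return "step budget exhausted"                # cap guards against runaway loops

print(run_agent("summarize today's AI policy news"))
```

Even in this toy form, the levers discussed below are visible: the loop depth (max_steps), the breadth of the tool table, and where a human reviews the output together determine how much autonomy the agent actually has.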

This brief begins by outlining how action-taking agents function, the technical components that enable them, and the kinds of agentic products being built. It then explains how technical components of AI agents — such as control loop complexity, tool access, and scaffolding architecture — shape their behavior in practice. Finally, it surfaces emerging areas of policy concern where the risks posed by agents increasingly appear to outpace the safeguards currently in place, including security, privacy, control, human-likeness, governance infrastructure, and allocation of responsibility. Together, these sections aim to clarify both how AI agents currently work and what is needed to ensure they are responsibly developed and deployed.

Read the full brief.

Moderating Tamil Content on Social Media
https://cdt.org/insights/moderating-tamil-content-on-social-media/ | Wed, 14 May 2025

Graphic for CDT Research report, entitled “Moderating Tamil Content on Social Media.” Illustration of a hand, with a variety of golden rings and bracelets on their wrist and fingers, seen pinching / holding on to a blue speech bubble with three dots indicating that someone is contemplating expressing themselves. A deep green background with a kolam pattern.

Tamil is a language with a long history: spoken by over 80 million people worldwide, or over 1% of the world’s population, it has early inscriptions dating back to the 5th century B.C.E. (Murugan & Visalakshi, 2024). The language is spoken widely in India (predominantly in Tamil Nadu and Puducherry), in Sri Lanka, and across diaspora communities in Malaysia, Thailand, Canada, the United Kingdom, the United States, and beyond. Despite the widespread use of the language, there remains limited understanding of how major social media platforms moderate content in Tamil. This report examines the online experiences of Tamil users and explores the challenges of applying consistent content moderation processes for this language.

This report is part of a series that examines content moderation within low-resource and indigenous languages in the Global South. Low-resource languages are languages in which sufficient high-quality data is not available to train models, making it difficult to develop robust content moderation systems, particularly automated systems (Nicholas & Bhatia, 2023). In previous case studies conducted in the series, we found that this lack of high-quality and native datasets impeded effective and accurate moderation of Maghrebi Arabic and Kiswahili content (Elswah, 2024a; Elswah, 2024b). Inconsistent and inaccurate content moderation results in lower trust among users in the Global South, and limits their ability to express themselves freely and access information. 

This report dives into Tamil speakers’ experiences on the web, particularly on popular social media platforms and online forums run by Western and Indian companies. We highlight the impact of Tamil speakers’ perception of poor content moderation, particularly against a backdrop of democratic backsliding and growing repression of speech and civic participation in India and Sri Lanka (Vesteinsson, 2024; Nadaradjane, 2022). Ultimately, what emerges in this case study is a fragmented information environment where Tamil speakers perceive over-moderation while simultaneously encountering under-moderated feeds full of hate speech.   

We used a mixed-method approach, which included an online survey of 147 frequent social media users in India and Sri Lanka; 17 in-depth interviews with content moderators, content creators, platforms’ Trust & Safety representatives, and digital rights advocates; and a roundtable discussion with Tamil machine learning and data experts. The methods are detailed in the report’s appendix.

Based on these methods, we found that: 

1. Tamil speakers use a range of Western-based social media platforms and Indian platforms. Our survey indicates that Western social media platforms are more popular among Tamil speakers, while local TikTok alternatives are gaining popularity due to India’s TikTok ban. Online, Tamil speakers use tactics to circumvent content moderation, at times employing “algospeak” or other computer-mediated communication, and at other times code-mixing or transliterating Tamil into Latin script for ease and convenience. These tactics complicate moderation; a sketch after this list illustrates why.

2. Tech companies pursue various approaches to moderate Tamil content online, but mostly adhere to either a global or a localized approach. The global approach applies the same policies to all users worldwide and relies on moderators and policy staff who are not hired for linguistic or regional expertise; moderators are assigned content from across the world. In contrast, the local approach tailors some policies to meet Tamil language-specific guidance and relies on more Tamil speakers to moderate content. Some Indian companies employ a hybrid approach, making occasional localized adjustments for Tamil speakers.

3. Tamil speakers, like others, routinely face inconsistent moderation, which they attribute to the fact that their primary language is not English. On the one hand, they encounter what they believe are under-moderated information environments, full of targeted abuse in Tamil. On the other hand, they encounter what they suspect is unfair over-moderation targeting Tamil speech in particular.

4. A majority of survey respondents are concerned about politically motivated moderation and believe that content removals and restrictions are used to silence their voices online, particularly when they speak about politics. A few users also suspect that they experience “shadowbanning,” or a range of opaque, undisclosed moderation decisions by platforms, particularly when they use certain words or symbols commonly used by or associated with the Tamil community.

5. Despite a vibrant Tamil computing community, investment in automated moderation in Tamil still falls significantly short, owing to a lack of accessible resources and will, and, for smaller social media companies, financial constraints.
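
To illustrate the circumvention tactics in finding 1, here is a minimal sketch of why a keyword filter keyed to Tamil script misses transliterated or code-mixed text. The one-word blocklist and the sample posts are invented for the example; this is not code or data from the report.

```python
# Hypothetical blocklist with one Tamil-script insult: "முட்டாள்" ("muttaal", fool).
blocklist = {"முட்டாள்"}

posts = [
    "அவன் ஒரு முட்டாள்",      # Tamil script: caught by the filter
    "avan oru muttaal",        # same phrase transliterated into Latin script: missed
    "semma muttaal da avan",   # code-mixed Tamil-English slang: missed
]

def flag(post: str) -> bool:
    # Naive substring match against the Tamil-script blocklist only.
    return any(term in post for term in blocklist)

for post in posts:
    print(flag(post), "|", post)
# Only the first post is flagged. Robust moderation would need
# transliteration-aware matching or models trained on native and
# code-mixed Tamil data -- exactly the resources finding 5 says are scarce.
```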

Read the full report.

CDT Joins Call for SNAP Payment Processors to Refuse USDA Data Requests
https://cdt.org/insights/cdt-joins-call-for-snap-payment-processors-to-refuse-usda-data-requests/ | Tue, 13 May 2025

This week, the Center for Democracy & Technology (CDT) joined Protect Democracy and the Electronic Privacy Information Center (EPIC) in calling on the private companies that process Supplemental Nutrition Assistance Program (SNAP) payments to refuse the federal government’s unprecedented, and likely illegal, request to access sensitive information about tens of millions of Americans who receive this life-saving benefit.

For over 60 years, the U.S. Department of Agriculture (USDA) has funded states to administer SNAP. In that time, the federal government has never requested access to the personal data of all program recipients, who are primarily members of low-income families as well as disabled or older adults. Forcing states to turn over, for unknown purposes, data collected to administer a program that feeds millions of low-income, disabled, and older people is an alarming data privacy threat, one that will create a chilling effect that deters Americans from accessing life-saving benefits.

In this letter, we urge SNAP payment processors to stand up for privacy and refuse to facilitate this broad and dangerous attempt at government overreach.

Read the full letter.

OMB’s Revised AI Memos Exemplify Bipartisan Consensus on AI Governance Ideals, But Serious Questions Remain About Implementation
https://cdt.org/insights/ombs-revised-ai-memos-exemplify-bipartisan-consensus-on-ai-governance-ideals-but-serious-questions-remain-about-implementation/ | Tue, 13 May 2025

On April 3, the Office of Management and Budget (OMB) released updated versions of its guidance to federal agencies on the use (M-25-21) and procurement (M-25-22) of AI. These memos were issued in response to statutory requirements in the AI in Government Act of 2020 and the Advancing American AI Act. The updated memos build on and streamline similar guidance on the use (M-24-10) and procurement (M-24-18) of AI first issued under the Biden Administration.

As OMB has worked to fulfill this legislative requirement, CDT has long advocated that it adopt measures to advance responsible AI practices across the federal government’s use and procurement of AI. Doing so will both protect people’s rights and interests and help ensure that government AI systems are effective and fit for purpose. The most recent OMB guidance retains many of the core AI governance measures that CDT has called for, ranging from heightened protections for high-risk use cases to centralized agency leadership. The updated guidance is especially important as the Trump Administration signals its interest in rapidly expanding the use of AI across federal agencies, including efforts by the Department of Government Efficiency (DOGE) to deploy AI tools to make a host of high-stakes decisions.

Encouragingly, the publication of this revised guidance confirms that there is bipartisan consensus around core best practices for ensuring the responsible use and development of AI by public agencies. But, while this updated guidance is promising on paper, there are significant unanswered questions about how it will be implemented in practice. The overarching goals and obligations set out by these memos, aimed at advancing responsible AI innovation through public trust and safety, appear to be in direct tension with the reported actions of DOGE and various federal agencies.

The true test of the strength and durability of this guidance will be in the efforts to implement and enforce these crucial safeguards over the coming months. In line with CDT’s ongoing advocacy, these memos provide agencies with a clear roadmap for mitigating the risks of AI systems and advancing public trust, through three avenues:

  • Intra- and Inter-Agency AI Governance
  • Risk Management Practices
  • Responsible AI Procurement

Intra- and Inter-Agency AI Governance

AI governance bodies and oversight practices facilitate the robust oversight of AI tools and the promotion of responsible innovation across the federal government. Critical AI governance practices — such as standardizing decision-making processes and appointing leaders specifically responsible for AI — enable agencies to fully assess the benefits and risks of a given system and implement appropriate safeguards across agency operations.

Significantly, OMB’s updated memos retain critical agency and government-wide AI governance structures that establish dedicated AI leadership and coordination functions aimed at supporting agencies’ safe and effective adoption of AI:

  • Agency chief AI officers: Each agency is required to retain or designate a Chief AI Officer (CAIO) responsible for managing the development, acquisition, use, and oversight of AI throughout the agency. These officials serve a critical role in coordinating with leaders across each agency and ensuring that agencies meet their transparency and risk management obligations.
  • Agency AI governance boards: Each agency is required to establish an interdisciplinary governance body — consisting of senior privacy, civil rights, civil liberties, procurement, and customer experience leaders, among others — tasked with developing and overseeing each agency’s AI policies. These governance boards help agencies ensure that a diverse range of internal stakeholders are involved throughout the AI policy development and implementation process, creating a structured forum for agency civil rights and privacy leaders to play a direct role in agency decision-making about AI.
  • Interagency chief AI officer council: OMB is required to convene an interagency council of CAIOs to support government-wide coordination on AI use and oversight. This council supports collaboration and information sharing across the government, allowing agencies to learn from one another’s successes and failures.
  • Cross-functional procurement teams: Each agency is required to create a cross-functional team — including acquisition, cybersecurity, privacy, civil rights, and budgeting experts — to coordinate agency AI acquisitions. These teams help agencies to effectively identify and evaluate needed safeguards for each procurement and to successfully monitor the performance of acquired tools.  

Risk Management Practices

Not all AI use cases present the same risks to individuals and communities. For instance, an AI tool used to identify fraudulent benefits claims poses a significantly different set of risks than an AI tool used to categorize public comments submitted to an agency. It is therefore widely understood that certain high-risk uses should be subjected to increased scrutiny and care. 

Acknowledging the need to proactively identify and mitigate potential risks, OMB’s updated memos retain and streamline requirements for agencies to establish heightened risk management practices for systems used in high-risk settings. Building on a similar framework established under the previous OMB AI memos, the updated memos define a category of “high-impact AI” use cases for which agencies must implement minimum risk management practices. This categorization simplifies the scheme created under the previous versions of these memos, which set out two separate definitions, for “safety-impacting” and “rights-impacting” AI systems, that were subject to similar minimum risk management practices. The unified category significantly simplifies agencies’ process for identifying high-risk systems by requiring only one determination as opposed to two.

In line with the earlier versions of these memos, the updated guidance requires agencies to establish the following heightened risk management practices for all “high-impact” use cases:

  • Pre-deployment testing and impact assessments: Agencies are required to conduct impact assessments and testing in real-world scenarios prior to deploying a tool. These processes help agencies proactively assess a system’s performance, identify potential impacts or harms, and develop risk mitigation strategies. 
  • Ongoing monitoring: Agencies are required to conduct periodic performance testing and oversight, allowing agencies to identify changes in a system’s use or function that may lead to harmful or unexpected outcomes.
  • Human training and oversight: Agencies are required to provide ongoing training about the use and risks of AI for agency personnel and to implement human oversight measures. These practices ensure that agency personnel have sufficient information to understand the impacts of the AI tools that they use and are empowered to intervene if harms occur. 
  • Remedy and appeal: Agencies are required to provide avenues for individuals to seek human review and appeal any AI-related adverse actions, ensuring that impacted individuals are able to seek redress for any negative outcomes that may result due to the use of AI. 
  • Public feedback: Agencies are required to seek public feedback about the development, use, and acquisition of AI systems, helping agencies make informed decisions about how AI can best serve the interests of the public.

While many of these core risk management requirements extend those set out under the previous OMB AI guidance, there are several notable differences in the updated OMB memos. First, the updated guidance allows for pilot programs to be exempted from the minimum risk management practices, so long as a pilot is time-bound, limited in scope, and approved by the agency CAIO. Second, the updated guidance removes several previously required minimum risk management practices, including requirements for agencies to provide notice to individuals impacted by an AI tool and to maintain an option for individuals to opt out of AI-enabled decisions. Third, the updated guidance no longer includes previous requirements for rights-impacting tools to undergo separate assessments on equity and discrimination, although impact assessments still require agencies to evaluate how systems use information related to protected classes and to describe mitigation measures used to prevent unlawful discrimination. Finally, the updated guidance narrows the definition of systems that are presumed to be “high-impact,” removing certain categories previously included in the definitions of “safety-impacting” and “rights-impacting” AI systems, such as AI systems used to maintain the integrity of elections and voting infrastructure and systems used to detect or measure human emotions.

Responsible AI Procurement

Many of the AI tools used by federal agencies are procured from, or developed with the support of, third-party vendors. Because of this, it is critical for agencies to establish additional measures for ensuring the efficacy, safety, and transparency of AI procurement. 

To meet this need, OMB’s updated memos simplify and build on many of the responsible AI procurement practices put in place by the initial version of OMB’s guidance. First, and most importantly, this updated guidance requires agencies to extend their minimum risk management practices to procured AI systems. Similar to OMB’s previous requirements, agencies are directed to proactively identify if a system that they are seeking to acquire is likely high-impact and to disclose such information in a solicitation. And, once an agency is in the process of acquiring a high-impact AI tool, it is obligated to include contract language that ensures compliance with all minimum risk management practices. These measures ensure that the same protections are put in place whether a high-impact AI tool is developed in-house or acquired from a vendor.

Moreover, the updated guidance outlines additional obligations that agencies have to establish for all procured AI systems. To ensure that agency contracts contain sufficient protections, agencies are directed to include contract terms that address the intellectual property rights and use of government data, data privacy, ongoing testing and monitoring, performance standards, and notice requirements to alert agencies prior to the integration of new AI features into a procured system. The updated guidance also has a heightened focus on promoting competition in the AI marketplace, requiring agencies to implement protections against vendor lock-in throughout the solicitation development, selection and award, and contract closeout phases. 

In tandem with these contractual obligations, agencies are required to monitor the ongoing performance of an AI system throughout the administration of a contract and to establish criteria for sunsetting the use of an AI system. One significant difference in OMB’s updated memos, however, is that these procurement obligations only apply to future contracts and renewals, whereas the prior version of OMB’s guidance extended a subset of these requirements to existing contracts for high-impact systems. 

Conclusion

As CDT highlighted when the first version of OMB’s guidance was published a year ago, while this revised guidance is an important step forward, implementation will be the most critical part of this process. OMB and federal agencies have an opportunity to use this updated guidance to address inconsistencies and gaps in AI governance practices across agencies, increasing the standardization and effectiveness of agencies’ adherence to these requirements even as they expand their use of AI. 

Ensuring adequate implementation of OMB’s memos is not only critical to promoting the effective use of taxpayer money, but is especially urgent given alarming reports about the opaque and potentially risky uses of AI at the hands of DOGE. The government has an obligation to lead by example by modeling what responsible AI innovation should look like in practice. These revised memos are a good start, but now it is time for federal agencies to walk the walk and not just talk the talk.

CDT Submits Comments Outlining Dangers of SSA About-Face Blocking Vulnerable Beneficiaries from Accessing Critical Benefits
https://cdt.org/insights/cdt-outlines-dangers-of-ssa-about-face-blocking-vulnerable-beneficiaries-from-accessing-critical-benefits/ | Tue, 13 May 2025

Despite initially heeding an outpouring of concerns, many around accessibility for disabled beneficiaries, the Social Security Administration (SSA) appears to be forging ahead with plans to require in-person visits or access to an online account to complete certain phone-based transactions.

This about-face will block some of SSA’s most vulnerable beneficiaries from accessing critical benefits, including disabled and/or older people who disproportionately rely on telephone services. Though we appreciate SSA’s attention to the integrity of its programs, attempts to address fraud cannot make programs inaccessible to beneficiaries.

CDT has filed comments outlining the dangers of this approach to people with disabilities and older adults who depend on the SSA-administered benefits that they are entitled to receive.

Read the full comments.

Op-Ed – DOGE & Disability Rights: Three Key Tech Policy Concerns
https://cdt.org/insights/op-ed-doge-disability-rights-three-key-tech-policy-concerns/ | Mon, 12 May 2025

This op-ed – authored by CDT’s Ariana Aboulafia – first appeared in Tech Policy Press on May 12, 2025. A portion of the text has been pasted below.

Three months into the Trump administration, the Department of Government Efficiency (DOGE) has wreaked havoc on the United States federal government and on many individuals who rely on government services. This includes people with disabilities, who have been impacted by cuts to education programs, chaos at the Social Security Administration (SSA), the shuttering of digital services programs that focused on accessibility, and even mandatory return-to-office policies for federal workers, among other things. Despite the announcement that Elon Musk will soon be “stepping back” from his role at DOGE, there’s no indication that the agency will stop its crusade, regardless of the costs to everyday people. And, it will continue to use technology to do it.

I currently lead one of the only projects in the US that focuses on how technology (such as AI tools and algorithmic systems) impacts people with disabilities. From my vantage point, it is clear that DOGE’s underlying ableist rhetoric both informs and forecasts its work, while its violations of data privacy and expansive use of AI without proper oversight have already harmed disabled people, and will continue to do so.

Read the full text.

CDT and the Leadership Conference Release New Analysis of Immigration, DOGE, and Data Privacy
https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-immigration-doge-and-data-privacy/ | Mon, 12 May 2025

In March, CDT and the Leadership Conference’s Center for Civil Rights and Technology released a fact sheet examining some of the core issues related to the Department of Government Efficiency’s (DOGE) access to and use of sensitive information held by federal agencies. Since we released this analysis, not only has DOGE increased its efforts to access sensitive information across the federal government, but DOGE and federal law enforcement authorities have specifically sought to repurpose administrative data for immigration-related uses. 

As the federal government seeks to rapidly expand the use of sensitive data to target immigrants, CDT and the Leadership Conference developed a follow-up explainer that analyzes the issues surrounding federal immigration authorities and DOGE’s access and use of administrative data for immigration-related activities. This new explainer details:

  • The types of administrative data held by federal agencies, 
  • Examples of how federal administrative data is being repurposed for immigration-related efforts, 
  • The legal protections of federal administrative data and law enforcement exceptions, 
  • The impacts of government data access and use on immigrants and society, and
  • The unanswered questions about and potential future changes to the federal government’s access, use, and sharing of administrative data for immigration-related purposes. 

Repurposing federal administrative data for immigration-related activities may have widespread and significant impacts on the lives of U.S. citizens and non-citizen immigrants alike. Ensuring transparency into the actions of DOGE and federal immigration authorities is a critical step towards protecting and safeguarding data privacy for everyone.

Read the full analysis.

CDT Files Amicus Brief in Patterson v. Meta
https://cdt.org/insights/cdt-files-amicus-brief-in-patterson-v-meta/ | Thu, 08 May 2025

On May 1, 2025, the Center for Democracy & Technology filed an amicus brief in the case of Patterson v. Meta. CDT filed this brief to bring to the court’s attention the broader impacts that weakening Section 230 will have on speech that is constitutionally protected, but controversial. The brief explains that Section 230’s liability protections are essential to enabling free expression online, and that they extend to the use of automated systems to rank and order content as part of traditional publishing activities. It further argues that product liability claims do not automatically fall outside of Section 230’s ambit; courts must consider whether a particular product liability claim seeks to hold a service provider liable as a publisher of third-party content. Finally, the brief notes that livestreaming is a method of publishing third-party content that also receives Section 230’s protection.

Read the full brief.

EU Tech Policy Brief: May 2025
https://cdt.org/insights/eu-tech-policy-brief-may-2025/ | Wed, 07 May 2025

Welcome back to the Centre for Democracy & Technology Europe’s Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact on digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁 Security, Surveillance & Human Rights

Building Global Spyware Standards with the Pall Mall Process

As international attention focuses on misuses of commercial spyware, the Pall Mall Process continues to gather momentum. This joint initiative, led by France and the United Kingdom, seeks to establish international guiding principles for the development, sale, and use of commercial cyber intrusion capabilities (CCICs). 

At the Process’s second conference in Paris earlier this month, Programme Director Silvia Lorenzo Perez joined global stakeholders as the process concluded with the adoption of a Pall Mall Code of Practice for States. The Code has been endorsed by 25 countries to date, including 18 EU Member States. It sets out commitments for state action regarding the development, facilitation, acquisition, and deployment of CCICs. It also outlines good practices and regulatory recommendations to promote responsible state conduct in the use of CCICs. 

Pall Mall Process annual event in Paris.

CDT Europe will soon publish a comprehensive assessment of the official document to provide deeper insights into its implications. In parallel, and as part of our ongoing work to advance spyware regulation within the EU, CDT Europe is leading preparation of the sixth edition of the civil society roundtable series, “Lifting the Veil – Advancing Spyware Regulation in the EU,” on 13 May. Stakeholders will discuss what meaningful action should look like in the EU, following the political commitments made by the Member States that endorsed the Pall Mall Code of Practice.

CSOs Urge Swedish Parliament to Reject Legislation Undermining Encryption

CDT Europe joined a coalition of civil society organisations, including members of the Global Encryption Coalition, in an open letter urging the Swedish Parliament to reject proposed legislation that would weaken encryption. This legislation, if enacted, would greatly undermine the security and privacy of Swedish citizens, companies, and institutions. Despite its intention to combat serious crime, the legislation’s dangerous approach would instead create vulnerabilities that criminals and other malicious actors could readily exploit, leaving Sweden’s citizens and institutions less safe than before. The proposed legislation would particularly harm those who rely on encryption the most, including journalists, activists, survivors of domestic violence, and marginalised communities. Human rights organisations have consistently highlighted encryption’s critical role in safeguarding privacy and free expression. Weakening encryption would also pose a national security threat, as even the Swedish Armed Forces rely on encrypted tools like Signal for secure communication.

Recommended read: Ofcom, Global Titles and Mobile Network Security, Measures to Address Misuse of Global Titles

💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Meets with the ODS Bodies Network

Earlier this month, the DSA Civil Society Coordination Group met with the Out-of-Court Dispute Settlement (ODS) Bodies Network for the first time to explore ways to collaborate. Under Article 21 of the Digital Services Act (DSA), ODS Bodies are to provide independent resolution of disputes between users and online platforms. As these bodies start forming and seeking certification, their role in helping users access redress and offering insights into platform compliance is becoming more important.

The meeting introduced the ODS Network’s mission: to encourage cooperation among certified bodies, promote best practices for data-sharing, and engage with platforms and regulators. Civil society organisations, which often support users who have faced harms on platforms, discussed how they could help identify cases that could be referred to ODS Bodies. In return, records from ODS Bodies could become a valuable resource for tracking systemic risks and holding platforms accountable under the DSA.

The discussion further focused on how to raise user awareness of redress options, make ODS procedures more accessible, and strengthen data reporting practices. Participants also outlined next steps for working more closely together, particularly around identifying the types of data that could best support civil society’s efforts to monitor risks and support enforcement actions by the European Commission.

Asha Allen Joins Euphoria Podcast to Discuss Civil Society in the EU

Civil society is under pressure, and now more than ever, solidarity and resilience are vital. These are the resounding conclusions of the latest episode of the podcast Euphoria, featuring CDT Europe’s Secretary General Asha Allen. Asha joined Arianna and Federico from EU&U to unpack the current state of human rights and the growing threats faced by civil society in Europe and beyond. With key EU legislation like the AI Act and Digital Services Act becoming increasingly politicised, they explored how to defend democracy, safeguard fundamental rights, and shape a digital future that truly serves its citizens. Listen now to discover how cross-movement collaboration and rights-based tech policy can help counter rising authoritarianism.

CDT Europe Secretary General Asha Allen speaking with podcasters Federico Terreni and Arianna Labasin from EU&U at the Euphoria Podcast recording.

Recommended read: FEPs, Silenced, censored, resisting: feminist struggles in the digital age

⚖ Equity and Data

EU AI Act Explainer — AI at Work

In the fourth part of our series on the AI Act and its implications for human rights, we examine the deployment of AI systems in the workplace and the AI Act’s specific obligations aimed at ensuring the protection of workers. In particular, we assess which of the prohibited AI practices could become relevant for the workplace and where potential loopholes and gaps lie. We also focus on the obligations of providers and deployers of high-risk AI systems, which could increase protection of workers from harms caused by automated monitoring and decision-making systems. Finally, we examine to what extent the remedies and enforcement mechanisms foreseen by the AI Act can be a useful tool for workers and their representatives to claim their rights. Overall, we find that the AI Act’s approach of allowing more favourable legislation in the employment sector to apply is a positive step. Nevertheless, the regulation itself has only limited potential to protect workers’ rights.

CSOs Express Concern with Withdrawal of AI Liability Directive

CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework. 

Programme Director Laura Lazaro Cabrera also participated in a working lunch hosted by The Nine to discuss the latest trends and developments in AI policy following the Paris AI Summit. Among other aspects, Laura tackled the deregulatory approach taken by the European Commission, the importance of countering industry narratives, and the fundamental rights concerns underlying some of the key features of the AI Act.

Equity and Data Programme Director Laura Lazaro Cabrera speaking on a panel at the “Post-Paris AI Summit: Key Trends and Policies” event hosted by The Nine.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 New Team Member!

Marcel Mir Teijeiro, AI Policy Fellow in CDT Europe’s Equity and Data programme.

CDT Europe’s team keeps growing! At the beginning of April, we welcomed Marcel Mir Teijeiro as the Equity and Data programme’s new AI Policy Fellow. He’ll work on the implementation of the AI Act and on CDT Europe’s advocacy to protect the right to an effective remedy for AI-induced harms. Previously, Marcel participated in the Code of Practice multistakeholder process for General-Purpose AI Models, advising rights-holder groups across the cultural and creative industries on transparency and intellectual property aspects. A Spanish-qualified lawyer, he also helped develop a hash-based technical solution for training dataset disclosure that was shared with the AI Office, the U.S. National Institute of Standards and Technology, and the UK AI Safety Institute. We are excited to have him on board, and look forward to working with him!

⏫ Upcoming Events

Tech Policy in 2025: Where Does Europe Stand?: On May 15, CDT Europe and Tech Policy Press are co-hosting an evening of drinks and informal discussion, “Tech Policy in 2025: Where Does Europe Stand?”. It will be an opportunity to connect with fellow tech policy enthusiasts, share ideas, and figure out what the future holds for tech regulation in Europe. The event is currently sold out, but you can still join the waitlist in case some spots open up! 

Lifting the Veil – Advancing Spyware Regulation in the EU: CDT Europe, together with the Open Government Partnership, is hosting the sixth edition of the Civil Society Roundtable Series: “Lifting the Veil – Advancing Spyware Regulation in the EU.” The roundtable will gather representatives from EU Member States, EU institutions, and international bodies, alongside civil society organisations, technologists, legal scholars, and human rights defenders, for an in-depth exchange on the future of spyware regulation. Participation is invitation-only, so if you think you can contribute to the conversation, feel free to reach out at eu@cdt.org.

CPDP.ai 2025: From 21 to 23 May, CDT Europe will participate in the 18th edition of the CPDP.ai International Conference. Each year, CPDP gathers academics, lawyers, practitioners, policymakers, industry, and civil society from all over the world in Brussels, offering them an arena to exchange ideas and discuss the latest emerging issues and trends. This year, CDT Europe will be hosting two workshops on AI and spyware, in addition to our Secretary General Asha Allen speaking on a panel on the intersection of the DSA and online gender-based violence. You can still register to attend the conference.
