Civic Tech Fed. State Engagement Archives - Center for Democracy and Technology https://cdt.org/area-of-focus/equity-in-civic-tech/civic-tech-fed-state-engagement/

CDT Joins Call for SNAP Payment Processors to Refuse USDA Data Requests https://cdt.org/insights/cdt-joins-call-for-snap-payment-processors-to-refuse-usda-data-requests/ Tue, 13 May 2025 21:09:56 +0000

This week, the Center for Democracy & Technology (CDT) joined Protect Democracy and the Electronic Privacy Information Center (EPIC) in calling on the private companies that process Supplemental Nutrition Assistance Program (SNAP) payments to refuse the federal government’s unprecedented, and likely illegal, request to access sensitive information about tens of millions of Americans who receive this life-saving benefit.

For over 60 years, the U.S. Department of Agriculture (USDA) has funded states to administer SNAP. In that time, the federal government has never requested access to the personal data of all program recipients, who are primarily low-income families and disabled or older adults. Forcing states to turn over, for unknown purposes, data collected to administer a program that feeds millions of low-income, disabled, and older people is an alarming privacy threat, one that will create a chilling effect and deter Americans from accessing life-saving benefits.

In this letter, we urge SNAP payment processors to stand up for privacy and refuse to facilitate this broad and dangerous attempt at government overreach.

Read the full letter.

CDT Submits Comments to Representative Lori Trahan on Updating the Privacy Act of 1974 https://cdt.org/insights/cdt-submits-comments-to-representative-lori-trahan-on-updating-the-privacy-act-of-1974/ Wed, 30 Apr 2025 04:01:00 +0000

On April 30, the Center for Democracy & Technology (CDT) submitted comments to Representative Lori Trahan about reforming the Privacy Act of 1974 to address advances in technology and emerging threats to federal government data privacy. Our comments highlight potential privacy harms related to federal government data practices and provide an overview of CDT’s nearly two decades of advocacy on the Privacy Act.

We urge Congress to address gaps in the Privacy Act, including by:

  • Updating the definition of “system of records,” 
  • Limiting the “routine use” exemption, 
  • Expanding the Privacy Act to cover non-U.S. persons, and 
  • Strengthening privacy notices.

Read the full comments.

CDT Stands Up for Taxpayer Privacy https://cdt.org/insights/cdt-stands-up-for-taxpayer-privacy/ Wed, 16 Apr 2025 15:49:08 +0000

The Center for Democracy & Technology has joined over 270 other organizations in a letter calling on Congress to stand up for taxpayer privacy just as millions of Americans are filing their tax returns. The letter decries a new Memorandum of Understanding (MOU) pursuant to which the Internal Revenue Service will share with the Department of Homeland Security taxpayer information regarding as many as seven million taxpayers that DHS suspects are undocumented. Taxpayers will have no prior notice that their information is being shared, and no opportunity to challenge the sharing of their information on a case-by-case basis before it is shared.

As stated in the letter, which was quarterbacked by the civil rights and advocacy NGO UnidosUS, the IRS-DHS MOU "… poses an unprecedented threat to taxpayer privacy protections that have been respected on a bipartisan basis for nearly 50 years." Taxpayer information is protected by law against disclosure, and immigration enforcement is not a recognized exception to those protections. We are calling on Congress to conduct oversight hearings, demand release of the MOU without redactions, and demand that the Treasury Department explain its novel interpretation of the law.

Taxpayer privacy encourages taxpayer compliance. As CDT has pointed out, using taxpayer information for immigration enforcement will create a huge disincentive for undocumented people to pay taxes, and will drive them further into the informal labor sector, where they are vulnerable to abuse. This will cost the Treasury billions in lost tax revenue. The IRS had urged undocumented people to file tax returns and, to encourage them to do so, gave assurances that information submitted for tax purposes would not be used for immigration enforcement. The IRS has reneged on those assurances, calling into question other taxpayer privacy commitments — including those imposed by law.

Read the full letter.

Exploring the 2024 Federal AI Inventories: Key Improvements, Trends, and Continued Inconsistencies https://cdt.org/insights/exploring-the-2024-federal-ai-inventories-key-improvements-trends-and-continued-inconsistencies/ Tue, 15 Apr 2025 13:39:09 +0000

Introduction

At the end of last year, U.S. federal agencies published the 2024 updates to their public-facing AI use case inventories. These most recent agency AI inventories mark a significant improvement from past years, providing greater transparency and unprecedented information about how one of the world’s largest governments is using AI. Most notably, the 2024 agency AI inventories include 1,400 more use cases than 2023’s, representing a 200% increase in reported use cases. 

The publication of these inventories reflects federal agencies’ continued commitment to meet their legal obligations to publicly disclose details about how they are using AI. Those requirements were first established under President Trump’s Executive Order 13960 in December 2020, and later enacted into law in 2022 with the passage of the bipartisan Advancing American AI Act. These requirements were recently reaffirmed by the Office of Management and Budget’s updated guidance on federal agencies’ use of AI, which states that agencies are required to submit and publish their AI use case inventories “at least annually.” 

Federal agencies’ AI use case inventories are more crucial now than ever, as many agencies seek to expand their uses of AI for everything from benefits administration to law enforcement. This is underscored by OMB’s directive to agencies to “accelerate the Federal use of AI,” and by reports that DOGE is using AI tools to make high-risk decisions about government operations and programs with little to no public transparency. The Trump Administration now has the opportunity to build on and improve federal agency AI use case inventories as a critical transparency measure for building public trust and confidence in the government’s growing use of this technology. 

CDT examined the 2023 federal AI inventories, and noted some of the challenges in navigating agency inventories as well as some of the common themes. The following analysis provides an update on what we shared previously, examining how federal agencies have taken steps toward improved reporting as well as detailing remaining gaps and inconsistencies that risk diminishing the public utility of agency AI inventories.

A Step in the Right Direction: Improved Reporting and Documentation

Since 2023, federal agencies have made important progress in the breadth and depth of information included in their AI inventories in several key ways. 

First, the Office of Management and Budget (OMB) created and published a more easily accessible centralized repository of all agency inventories. As CDT noted in our past analysis of agency inventories, it was previously difficult to find agency inventories in an accessible and easily navigable format, and this development is a clear improvement on this issue.

Second, the 2024 agency inventories include far greater reporting about the total number of AI use cases. Agencies reported roughly three times as many use cases as last year, growing from 710 to 2,133 total use cases across the federal government. This large increase in reporting is likely due to the additional clarification provided by the updated reporting guidance published by OMB under President Biden, as well as potential increased use of AI by federal agencies. While greater agency reporting is important, this increase also creates an overwhelming amount of information that does not necessarily give the public a clear picture of which systems have the greatest impacts on rights and safety. Going forward, it will be critical for agencies to maintain this reporting standard in order to track changes in agencies' use of AI over time.

Finally, the updated agency inventories include significantly more detail about the risks and governance of specific use cases. As a result of OMB’s reporting guidance, agency inventories generally contain more information about each use case’s stage of development, deployment, data use, and other risk management practices. However, as detailed below, this information is reported inconsistently, undermining the usefulness of this greater degree of reporting.

These improvements enable better understanding in two important ways: 

  1. Changes in agency AI use over time
  2. Additional detail about high-risk AI uses

Changes in agency AI use over time

CDT first published its analysis of agency AI inventories in the summer of 2023. In agencies' 2023 inventories, we found that three common use cases included chatbots, national security-related uses, and uses related to veterans' mental health. The updated federal agency inventories from 2024 reflect many of the same trends. National security and veterans' health care were common uses among a broader set of high-risk systems, as discussed in greater detail in the next section. Additionally, chatbots remain commonly used by a number of agencies, ranging from internally-facing employee resource tools to externally-facing tools used to educate the public about agencies' resources. For instance, the Department of Agriculture reported use of a chatbot to assist employees from the Farm Service Agency in searching loan handbooks, and the U.S. Patent and Trademark Office within the Department of Commerce reported use of a public-facing chatbot to help answer questions about trademarks and patents.

As noted in the federal CIO’s analysis of the 2024 inventories, roughly 46% of all AI use cases are “mission-enabling” uses related to “administrative and IT functions.” Several common use cases emerged in this year’s inventories that reflect this trend. 

First, a number of agencies reported uses of Generative AI tools and large language models (LLMs) to analyze data, summarize information, and generate text, images, and code. For instance, the Department of Commerce's Bureau of Economic Analysis reported use of an LLM-based chatbot to support text and data analysis, and the Department of Health and Human Services' Centers for Disease Control and Prevention reported use of an enterprise-wide Generative AI tool to edit written materials.

Second, a significant number of agencies reported the use of AI tools to manage public input and requests for information. The following seven agencies all reported the use of AI tools to categorize and process public comments and claims:

  • Department of the Interior
  • Department of Health and Human Services
  • Department of Agriculture
  • Federal Fair Housing Agency
  • Federal Reserve
  • Securities and Exchange Commission
  • Department of Justice 

And, the following nine agencies reported the use of AI systems to automate portions of the FOIA process, such as redacting personally identifiable information:

  • Department of Homeland Security
  • Department of the Interior
  • Department of Health and Human Services
  • National Science Foundation
  • Department of State
  • Equal Employment Opportunity Commission
  • National Archives and Records Administration
  • Department of Justice
  • Department of Transportation 

Additional details about high-risk AI uses

In addition to reporting about their overall AI use cases, OMB’s updated reporting guidance required agencies to indicate which uses are high-risk, which OMB defines as rights- and safety-impacting AI systems. This is an important addition to agency inventories because such high-risk uses have the greatest potential impact on individuals’ rights and liberties, including highly invasive surveillance tools and tools that determine access to a variety of government benefits and services. Across all publicly available agency AI inventories, the three most common categories of high-risk systems currently in use include:

  • Law enforcement and national security
  • Public benefits administration
  • Health and human services delivery and administration

Law enforcement and national security

The Department of Justice and Department of Homeland Security both reported a large number of high-risk law enforcement and national security-related use cases. AI use cases reported by the Department of Justice, for instance, include tools used to analyze data and video surveillance for criminal investigations, monitor vehicles and automatically read license plates, detect gunshots, predict prison populations and misconduct among incarcerated individuals, and track recidivism, among a number of other uses related to investigations, surveillance, and prison management. Such uses demand the utmost scrutiny because many of these technologies have proven to be frequently inaccurate, subject to excess reliance, and prone to leading investigators astray; in the context of law enforcement actions, these mistakes can cause severe harm to individuals' lives and liberty.

Given how serious these risks are, it is alarming that, while the Department of Justice reported a high number of high-risk use cases (124 of the Department's 240 total), its inventory entries contain no information about risk mitigation or general AI governance procedures, such as whether systems were developed in-house or procured, whether systems disseminate information to the public, and which demographic variables systems use. Moreover, a number of use cases included in the Department of Justice inventory have no risk classification because they are designated as "too new to fully assess." Many other agencies similarly neglected to share such information, but these omissions are especially concerning in the context of use cases that pose such a significant threat to individuals' rights and liberties.

The Department of Homeland Security similarly reported a number of high-risk use cases: 34 of the Department's 183 reported use cases. These tools span uses such as social media monitoring, border surveillance, facial recognition and other forms of biometric identification, automated device analytics, and predicting the risk that non-citizens under ICE's management will abscond.

Although the Department of Homeland Security's inventory is helpful in assessing its law enforcement, immigration enforcement, and national security uses of AI, one ambiguity and one omission concerning facial recognition highlight the need for additional transparency. First, one use case listed in the Department's inventory details Border Patrol use of facial recognition in the field, stating the technology is used to "facilitate biometric identification of individuals as they are encountered." This leaves ambiguity as to whether facial recognition is used as the basis to detain individuals, or whether it is merely a check to inform procedures for bringing an individual in for processing after a detention decision has already been made. The former scenario would raise serious concerns, especially given how variable facial recognition's accuracy is across field conditions. Second, the Department's inventory does not include any mention of ICE using facial recognition in conjunction with DMV databases to identify individuals and determine their current addresses, a practice that has been publicly documented since 2019. Both of these issues highlight the need for the Department to clarify the extent to which specific AI technologies are used and to include all known use cases, even those that may have been discontinued.

Public benefits administration

The Social Security Administration and the Department of Veterans Affairs both reported a significant number of high-risk use cases related to the administration of public benefits programs. These systems are used for a variety of purposes ranging from processing benefits claims to identifying fraudulent applications and predicting high-risk claims. The Social Security Administration, for example, reported using AI tools to analyze claims with a high likelihood of error, identify instances of overpayment within social security insurance cases, and to triage review of disability benefits determinations, to name only a few. Similarly, the Veterans Benefits Administration within the Department of Veterans Affairs reported using AI to identify fraudulent changes to veterans’ benefit payments and to process and summarize claims materials.   

Health and human services

The delivery and administration of health and human services was another core area of high-risk AI use cases, with a majority housed within the Department of Veterans Affairs, the largest healthcare system in the nation, and the Department of Health and Human Services. For instance, the Office of Refugee Resettlement within the Department of Health and Human Services’ Administration for Children and Families reported use of AI tools to aid in placing and monitoring the safety of refugee children. And, the Department of Veterans Affairs reported a vast number of healthcare and human services-related uses, ranging from clinical diagnostic tools to systems used to predict suicide and overdose risks among veterans. 

Remaining Gaps and Inconsistencies

Although the 2024 agency AI inventories offer greater insight into these core high-risk use cases across the government, there is still significant room for improvement. Most notably, numerous AI inventories contained inconsistent documentation and insufficient detail about compliance with required risk management practices. 

Insufficient detail

Under OMB’s guidance on federal agencies’ use of AI, agencies were permitted to issue waivers or extensions for certain risk management practices if an agency needed additional time to fulfill a requirement, or if a specific practice would increase risk or impede agency operations. Disappointingly, public reporting about these measures was overwhelmingly scarce across all agencies. The Department of Homeland Security, for example, was the only agency in the entire federal government to include specific information about the length of time for which extensions were issued. And, the Department of Housing and Urban Development was the only agency to report information about any waivers issued, while all other agencies merely left entire sections of their inventories blank without further explanation.

Lack of consistency

Beyond these gaps, inventory reporting is highly variable within and between federal agencies, including different levels of detail and different approaches to reporting and categorizing the risk level of use cases. Some agencies and subcomponents within agencies completed a majority of the fields required in their inventories while others, including other subcomponents within the same agency, left many of the same fields blank. In addition, many agencies classified very similar tools as having different levels of risk. For example, the Department of Housing and Urban Development classified an AI tool used for translation as rights-impacting while the Department of Homeland Security did not classify a similar translation tool as rights- or safety-impacting.

Across these inconsistencies, one of the greatest barriers to public understanding is that agencies are not required to report information about how they determined whether or not a particular use case is high-risk. Without this information, it remains difficult for the public to understand why similar systems used by different agencies have different risk classifications or why seemingly high-risk tools (such as AI tools used to redact personally identifiable information) are not designated as such. The Department of Homeland Security, however, stands apart from other agencies on this issue. Alongside their updated AI inventory, the Department of Homeland Security published a companion blog post that provides greater explanation about how the agency approached the completion of their updated inventory, including additional information about how the Department’s leadership made determinations about high-risk use cases and about the nature of extensions issued. This should serve as a model for other agencies to publicly communicate additional information about why and how AI governance decisions are made.

Conclusion

Agency AI use case inventories should not be an end unto themselves. Instead, they should serve as the foundation for agencies to build public accountability and trust about how they are using and governing AI tools. 

The value of these inventories as a transparency tool is further reinforced as state and local governments establish similar legal requirements for government agencies to publish AI use case inventories. At least 12 states have formally issued such requirements, through either legislation or executive order, and the updated federal inventories can serve as an important model for these and other states across the country.

OMB now has the opportunity to make significant improvements to federal agencies’ AI use case inventories heading into their 2025 updates. OMB’s recently updated guidance on federal agencies’ use of AI states that OMB will issue additional “detailed instructions to agencies regarding the inventory and its scope.” OMB should use these instructions as a tool to provide agencies with additional clarity about their obligations and to address the gaps and inconsistencies seen in the 2024 inventories. 

AI use case inventories are a critical transparency mechanism for public agencies at all levels of government. They push governments to document and disclose their myriad uses of AI, and the steps they’ve taken to mitigate risks to individuals’ rights and safety in a manner that is clear and accessible to the public. As federal agencies continue to meet their existing legal obligations, ensuring that agencies update their inventories in a timely manner and that their inventories are robust, detailed, and usable should be a key component of meeting this transparency goal.

AI Action Plan Should Promote AI Transparency, Accuracy, Effectiveness and Reliability, CDT Says https://cdt.org/insights/cdt-submits-comments-on-the-federal-governments-ai-action-plan/ Mon, 17 Mar 2025 13:36:02 +0000

The Center for Democracy & Technology (CDT) submitted comments to the Networking and Information Technology Research and Development National Coordination Office on the highest priority actions that should be in the new AI Action Plan required under Executive Order 14179. As this executive order explains, the AI Action Plan would “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” 

Our comments identify several well-established principles for trustworthy and effective AI that have bipartisan support and acceptance and should form the basis of the AI Action Plan. During his first term, President Trump issued Executive Orders 13859 and 13960 and Office of Management and Budget’s Memorandum M-21-06, which articulate principles that include: 

  • Evaluating and addressing risks to people’s privacy, civil rights, civil liberties, and safety; 
  • Improving transparency and accountability to the public; 
  • Ensuring accuracy, reliability, and effectiveness; and 
  • Incorporating public input. 

These principles were incorporated into the National Institute of Standards and Technology’s (NIST) consensus-driven AI Risk Management Framework (AI RMF), and Congress has endorsed similar principles on a bipartisan basis, as illustrated by the Bipartisan House Task Force on AI’s report and Bipartisan Senate AI Working Group’s roadmap. Companies have also voluntarily adopted similar principles in their own AI governance commitments.

Our comments recommend several elements that the AI Action Plan should include to advance these goals:

Continuing NIST’s work to develop guidance for AI governance

  • NIST’s vital role in AI governance is to develop voluntary standards and evaluation and measurement methods grounded in technical expertise regarding how AI systems work, how AI systems can cause or contribute to risks to people’s rights, and how those risks can be mitigated.
  • The AI Action Plan should direct NIST to continue developing standards through a process that meaningfully integrates civil society expertise to ensure that risks to communities are spotted and addressed and to support greater understanding of how a system’s design and capabilities affect its behavior and performance.
  • The standards-development process should center not only prospective security risks, but also current, ongoing risks such as privacy harms, ineffectiveness of the system, and discrimination.
  • NIST also provides necessary expertise on the valid and reliable measurement of different qualities of an AI system, such as safety, efficacy, or fairness, so standards development should include a multifaceted approach involving multiple methods to measure any given quality.

Ensuring the use of trustworthy AI in the federal government

  • The AI Action Plan should advance safe, trustworthy, effective, and efficient AI in government service delivery and operations – realizing the potential of AI systems in modernizing government requires enabling responsible uses through robust guardrails to protect individuals' safety, privacy, and civil liberties.
  • The AI Action Plan should center six best practices to guide federal agencies’ use of AI: risk assessment and mitigation, testing and evaluation, centralized governance and oversight, privacy and security, public engagement, and transparency.
  • In the specific context of law enforcement, the AI Action Plan should protect due process rights by requiring disclosure of an AI system’s use to people accused of a crime based in part on evidence or leads generated using that system. 
  • Given alarming reports about DOGE’s use of AI systems to make a host of high-risk decisions across the federal government, these measures are more important now than ever.

Aligning the use of AI for national security purposes with civil liberties and the Constitution

  • Many AI use cases for national security-related decisions are high risk because life or liberty are at stake, but are not made public, which can shield the abuse and misuse of AI systems.
  • Where classification needs prevent public reporting of AI use cases in a national security setting, the AI Action Plan should ensure effective reporting to relevant congressional committees and support the establishment of an independent oversight body for such use cases.
  • The AI Action Plan should also ensure effective governance and oversight through coordination between Chief AI Officers and through an independent oversight mechanism within the Executive Branch.

Advancing competitiveness by supporting openness in the AI ecosystem and investing in the National AI Research Resource

  • The AI Action Plan should set a course that ensures America remains a home for the development of open models, which can accelerate AI innovation and facilitate the rapid and responsible adoption of AI by businesses.
  • Open models also mitigate the concentration of power that arises in a closed AI model ecosystem, where many different entities rely on the same few companies' closed models to control the sharing of knowledge and expression.
  • The AI Action Plan should require robust standards for agencies to monitor developments in open models’ capabilities and identify potential public safety and national security risks, rather than prematurely imposing export restrictions that would undercut American competitiveness and cede AI leadership.
  • The AI Action Plan should also prioritize implementation of the National AI Research Resource, which can strengthen our nation’s AI research infrastructure and democratize access to the computational resources, data, and tools needed for cutting-edge AI development.

Shaping responsible private sector use of AI

  • Agencies have the sector-specific expertise needed to help companies adopt practical governance measures that ensure their AI systems are effective, fit for purpose, and safe; do not undermine people’s rights; and comply with long-standing legal obligations.
  • The AI Action Plan should direct agencies to take regulatory and non-regulatory approaches by pursuing new enforcement actions, adapting their regulations, and providing guidance to hold companies accountable when they adopt AI into their business practices.
  • The AI Action Plan should include formal interagency coordination mechanisms to help agencies exercise their individual authorities while collectively ensuring that companies routinely apply principles of trustworthy AI.

To advance American AI leadership, the AI Action Plan should ensure that public and private sector development and use of AI advances fundamental American values.

Read the full comments here.

The post AI Action Plan Should Promote AI Transparency, Accuracy, Effectiveness and Reliability, CDT Says appeared first on Center for Democracy and Technology.

CDT Signs Onto Letter Urging Congress to Protect Postsecondary Student Data https://cdt.org/insights/cdt-signs-onto-letter-urging-congress-to-protect-postsecondary-student-data/ Fri, 14 Mar 2025 19:45:53 +0000 https://cdt.org/?post_type=insight&p=107913

The post CDT Signs Onto Letter Urging Congress to Protect Postsecondary Student Data appeared first on Center for Democracy and Technology.

On March 10, 2025, the Center for Democracy & Technology joined the Institute for Higher Education Policy (IHEP), the Postsecondary Data Collaborative (PostsecData), and 87 other organizations and individual researchers in sending a letter to members of Congress urging them to use their oversight power to demand information about the Department of Government Efficiency’s (DOGE) recent cancellation of Institute of Education Sciences (IES) contracts and grants, and about how sensitive student information may have been accessed and by whom.

The letter explains that the U.S. Department of Education collects and maintains sensitive personally identifiable information about student loan borrowers, such as their Social Security numbers and tax records. Serious privacy and security concerns persist because little is known about who has accessed this information, how it is being kept safe, and what it is being used for. These questions would benefit from additional Congressional inquiry and oversight.

Read the full letter.

CDT Submits Comments to OMB on Federal Agencies’ Use of Commercially Available Information https://cdt.org/insights/cdt-submits-comments-to-omb-on-federal-agencies-use-of-commercially-available-information/ Mon, 16 Dec 2024 16:13:44 +0000 https://cdt.org/?post_type=insight&p=106773

The post CDT Submits Comments to OMB on Federal Agencies’ Use of Commercially Available Information appeared first on Center for Democracy and Technology.

On December 16, the Center for Democracy & Technology submitted comments to the Office of Management and Budget (OMB) on federal agencies’ use of commercially available information (CAI). Our comments highlight the harms that information obtained from data brokers poses to individuals’ privacy, rights, and safety.

We urge OMB to take additional steps to address the risks associated with such information, including by updating the requirements for Privacy Impact Assessments, maintaining a comprehensive inventory of agencies’ use of CAI, requiring agencies to proactively disclose how CAI is used to make determinations about individuals, and creating a centralized reporting mechanism for agencies to report problems with CAI.

The comments are available to read.

CDT Comments on NIST Digital Identity Guidelines With a Focus on Equity, Access, Privacy in Public Benefits Administration https://cdt.org/insights/cdt-comments-on-nist-digital-identity-guidelines-with-a-focus-on-equity-access-privacy-in-public-benefits-administration/ Tue, 08 Oct 2024 16:10:05 +0000 https://cdt.org/?post_type=insight&p=105951

The post CDT Comments on NIST Digital Identity Guidelines With a Focus on Equity, Access, Privacy in Public Benefits Administration appeared first on Center for Democracy and Technology.

The Center for Democracy & Technology (CDT) has submitted a comment in response to the National Institute of Standards and Technology’s (NIST) call for comments on the second public draft of revision four of the Digital Identity Guidelines (Special Publication 800-63).

CDT is pleased to see that NIST has taken steps in this draft to account for equity, access, and privacy in the identity management process. Our comments aim to advance these goals, particularly in the context of public benefits administration, where customer-centered, privacy-forward, multi-modal identity verification is paramount for protecting vulnerable individuals and people experiencing hardship while streamlining their access to life-saving benefits.

Read the full comments.

CDT’s Kristin Woelfel Submits Written Testimony to Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights https://cdt.org/insights/cdts-kristin-woelfel-submits-written-testimony-to-pennsylvania-advisory-committee-to-the-u-s-commission-on-civil-rights/ Tue, 07 May 2024 13:30:05 +0000 https://cdt.org/?post_type=insight&p=103891

The post CDT’s Kristin Woelfel Submits Written Testimony to Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights appeared first on Center for Democracy and Technology.

On May 1, CDT submitted written testimony to the Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights on the intersection of civil rights and AI in education. Our testimony provides a summary of CDT’s legal analysis of student civil rights laws as applied to the disproportionate impacts of AI-powered educational technologies on protected classes of students, along with a copy of the full legal research report and slide deck with visual depictions of CDT’s polling data. Specifically, our testimony: 

  • Lays out key legal authorities for civil rights enforcement in schools;
  • Lists key discrimination principles under which claims for algorithmic discrimination in schools might arise;
  • Describes specific types of AI-powered education technologies that are known or likely to have a discriminatory impact on protected classes; and
  • Briefly summarizes CDT’s recommendations to local education leaders to address potential AI-driven inequities in school.

Read the full testimony here.

CDT Submits Comments to OMB on the Responsible Procurement of AI by the Government https://cdt.org/insights/cdt-submits-comments-to-omb-on-the-responsible-procurement-of-ai-by-the-government/ Wed, 01 May 2024 20:20:08 +0000 https://cdt.org/?post_type=insight&p=103803

The post CDT Submits Comments to OMB on the Responsible Procurement of AI by the Government appeared first on Center for Democracy and Technology.

On April 29, CDT submitted comments to the Office of Management and Budget (OMB) on the responsible procurement of artificial intelligence (AI) by the government. Our comments offer recommendations – including from CDT’s new report, The Federal Government’s Power of the Purse: Enacting Procurement Policies and Practices to Support Responsible AI Use – to help OMB ensure that agency contracts for AI acquisition align with OMB’s recent guidance on governance and risk management in agency use of AI. Specifically, our comments describe how OMB can:

  • Build on existing standard procurement practices to incorporate measures for responsible AI;
  • Promote competition among AI vendors;
  • Hold both vendors and agencies responsible for pre- and post-award evaluations and impact assessments of their AI;
  • Guide the contractual terms agencies use to protect data privacy and security; and
  • Ensure that procured AI advances equitable outcomes and mitigates risks to civil rights.

Read the full comments here.
