Equity in Civic Technology Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/equity-in-civic-tech/

CDT Joins Call for SNAP Payment Processors to Refuse USDA Data Requests
https://cdt.org/insights/cdt-joins-call-for-snap-payment-processors-to-refuse-usda-data-requests/ (May 13, 2025)

This week, the Center for Democracy & Technology (CDT) joined Protect Democracy and the Electronic Privacy Information Center (EPIC) in calling on the private companies that process Supplemental Nutrition Assistance Program (SNAP) payments to refuse the federal government’s unprecedented, and likely illegal, request to access sensitive information about tens of millions of Americans who receive this life-saving benefit.

For over 60 years, the U.S. Department of Agriculture (USDA) has funded states to administer SNAP. In that time, the federal government has never requested access to the personal data of all program recipients, who are primarily low-income families as well as disabled or older adults. Forcing states to turn over, for unknown purposes, data collected to administer a program that feeds millions of low-income, disabled, and older people is an alarming data privacy threat that will create a chilling effect, deterring Americans from accessing life-saving benefits.

In this letter, we urge SNAP payment processors to stand up for privacy and refuse to facilitate this broad and dangerous attempt at government overreach.

Read the full letter.

OMB’s Revised AI Memos Exemplify Bipartisan Consensus on AI Governance Ideals, But Serious Questions Remain About Implementation
https://cdt.org/insights/ombs-revised-ai-memos-exemplify-bipartisan-consensus-on-ai-governance-ideals-but-serious-questions-remain-about-implementation/ (May 13, 2025)

On April 3, the Office of Management and Budget (OMB) released updated versions of its guidance to federal agencies on the use (M-25-21) and procurement (M-25-22) of AI. These memos were issued in response to statutory requirements in the AI in Government Act of 2020 and the Advancing American AI Act. The updated memos build on and streamline similar guidance on the use (M-24-10) and procurement (M-24-18) of AI first issued under the Biden Administration.

As OMB has worked to fulfill this legislative requirement, CDT has long advocated that the agency adopt measures to advance responsible AI practices across the federal government’s use and procurement of AI. Doing so will both protect people’s rights and interests, and help ensure that government AI systems are effective and fit for purpose. The most recent OMB guidance retains many of the core AI governance measures that CDT has called for, ranging from heightened protections for high-risk use cases to centralized agency leadership. The updated guidance is especially important as the Trump Administration signals its interest in rapidly expanding the use of AI across federal agencies, including efforts by the Department of Government Efficiency (DOGE) to deploy AI tools to make a host of high-stakes decisions.

Encouragingly, the publication of this revised guidance confirms that there is bipartisan consensus around core best practices for ensuring the responsible use and development of AI by public agencies. But, while this updated guidance is promising on paper, there are significant unanswered questions about how it will be implemented in practice. The overarching goals and obligations set out by these memos, which aim to advance responsible AI innovation grounded in public trust and safety, appear to be in direct tension with the reported actions of DOGE and various federal agencies.

The true test of the strength and durability of this guidance will be in the efforts to implement and enforce these crucial safeguards over the coming months. In line with CDT’s ongoing advocacy, these memos provide agencies with a clear roadmap for mitigating the risks of AI systems and advancing public trust, through three avenues:

  • Intra- and Inter-Agency AI Governance
  • Risk Management Practices
  • Responsible AI Procurement

Intra- and Inter-Agency AI Governance

AI governance bodies and oversight practices facilitate the robust oversight of AI tools and the promotion of responsible innovation across the federal government. Critical AI governance practices — such as standardizing decision-making processes and appointing leaders specifically responsible for AI — enable agencies to fully assess the benefits and risks of a given system and implement appropriate safeguards across agency operations.

Significantly, OMB’s updated memos retain critical agency and government-wide AI governance structures that establish dedicated AI leadership and coordination functions aimed at supporting agencies’ safe and effective adoption of AI:

  • Agency chief AI officers: Each agency is required to retain or designate a Chief AI Officer (CAIO) responsible for managing the development, acquisition, use, and oversight of AI throughout the agency. These officials serve a critical role in coordinating with leaders across each agency and ensuring that agencies meet their transparency and risk management obligations.
  • Agency AI governance boards: Each agency is required to establish an interdisciplinary governance body — consisting of senior privacy, civil rights, civil liberties, procurement, and customer experience leaders, among others — tasked with developing and overseeing each agency’s AI policies. These governance boards help agencies ensure that a diverse range of internal stakeholders are involved throughout the AI policy development and implementation process, creating a structured forum for agency civil rights and privacy leaders to play a direct role in agency decision-making about AI.
  • Interagency chief AI officer council: OMB is required to convene an interagency council of CAIOs to support government-wide coordination on AI use and oversight. This council supports collaboration and information sharing across the government, allowing for agencies to learn from one another’s successes and failures.
  • Cross-functional procurement teams: Each agency is required to create a cross-functional team — including acquisition, cybersecurity, privacy, civil rights, and budgeting experts — to coordinate agency AI acquisitions. These teams help agencies to effectively identify and evaluate needed safeguards for each procurement and to successfully monitor the performance of acquired tools.  

Risk Management Practices

Not all AI use cases present the same risks to individuals and communities. For instance, an AI tool used to identify fraudulent benefits claims poses a significantly different set of risks than an AI tool used to categorize public comments submitted to an agency. It is therefore widely understood that certain high-risk uses should be subjected to increased scrutiny and care. 

Acknowledging the need to proactively identify and mitigate potential risks, OMB’s updated memos retain and streamline requirements for agencies to establish heightened risk management practices for systems used in high-risk settings. Building on a similar framework established under the previous OMB AI memos, the updated OMB memos define a category of “high-impact AI” use cases for which agencies must implement minimum risk management practices. This “high-impact AI” categorization simplifies the framework created under the previous versions of these memos, which set out two separate categories, “safety-impacting” and “rights-impacting” AI systems, each subject to similar minimum risk management practices. The unified category streamlines agencies’ process for identifying high-risk systems by requiring only one determination instead of two.
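
To illustrate the practical effect of this consolidation, here is a minimal, purely hypothetical Python sketch of how an agency inventory process might record that single determination; the class, field, and function names are our own illustration and are not drawn from the OMB memos.

    # Hypothetical sketch only; names are illustrative, not taken from the OMB memos.
    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str
        affects_rights: bool   # e.g., informs benefits eligibility decisions
        affects_safety: bool   # e.g., controls safety-critical infrastructure

    def is_high_impact(use_case: AIUseCase) -> bool:
        # The prior memos required two parallel determinations ("safety-impacting"
        # and "rights-impacting"); the updated memos collapse both into a single
        # "high-impact" determination that triggers the minimum risk management practices.
        return use_case.affects_rights or use_case.affects_safety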

In line with the earlier versions of these memos, the updated guidance requires agencies to establish the following heightened risk management practices for all “high-impact” use cases:

  • Pre-deployment testing and impact assessments: Agencies are required to conduct impact assessments and testing in real-world scenarios prior to deploying a tool. These processes help agencies proactively assess a system’s performance, identify potential impacts or harms, and develop risk mitigation strategies. 
  • Ongoing monitoring: Agencies are required to conduct periodic performance testing and oversight, allowing agencies to identify changes in a system’s use or function that may lead to harmful or unexpected outcomes.
  • Human training and oversight: Agencies are required to provide ongoing training about the use and risks of AI for agency personnel and to implement human oversight measures. These practices ensure that agency personnel have sufficient information to understand the impacts of the AI tools that they use and are empowered to intervene if harms occur. 
  • Remedy and appeal: Agencies are required to provide avenues for individuals to seek human review and appeal any AI-related adverse actions, ensuring that impacted individuals are able to seek redress for any negative outcomes that may result due to the use of AI. 
  • Public feedback: Agencies are required to seek public feedback about the development, use, and acquisition of AI systems, helping agencies make informed decisions about how AI can best serve the interests of the public.

While many of these core risk management requirements extend those set out under the previous OMB AI guidance, there are several notable differences in the updated OMB memos. First, the updated guidance allows for pilot programs to be exempted from the minimum risk management practices, so long as a pilot is time-bound, limited in scope, and approved by the agency CAIO. Second, the updated guidance removes several previously required minimum risk management practices, including requirements for agencies to provide notice to individuals impacted by an AI tool and to maintain an option for individuals to opt out of AI-enabled decisions. Third, the updated guidance no longer includes previous requirements for rights-impacting tools to undergo separate assessments on equity and discrimination, although impact assessments still require agencies to evaluate how systems use information related to protected classes and to describe mitigation measures used to prevent unlawful discrimination. Finally, the updated guidance narrows the definition of systems that are presumed to be “high-impact,” removing certain categories previously included in the definitions of “safety-impacting” and “rights-impacting” AI systems, such as AI systems used to maintain the integrity of elections and voting infrastructure and systems used to detect or measure human emotions.

Responsible AI Procurement

Many of the AI tools used by federal agencies are procured from, or developed with the support of, third-party vendors. Because of this, it is critical for agencies to establish additional measures for ensuring the efficacy, safety, and transparency of AI procurement. 

To meet this need, OMB’s updated memos simplify and build on many of the responsible AI procurement practices put in place by the initial version of OMB’s guidance. First, and most importantly, this updated guidance requires agencies to extend their minimum risk management practices to procured AI systems. Similar to OMB’s previous requirements, agencies are directed to proactively identify whether a system that they are seeking to acquire is likely high-impact and to disclose such information in a solicitation. And, once an agency is in the process of acquiring a high-impact AI tool, it is obligated to include contract language that ensures compliance with all minimum risk management practices. These measures ensure that the same protections apply whether a high-impact AI tool is developed in-house or acquired from a vendor.

Moreover, the updated guidance outlines additional obligations that agencies have to establish for all procured AI systems. To ensure that agency contracts contain sufficient protections, agencies are directed to include contract terms that address the intellectual property rights and use of government data, data privacy, ongoing testing and monitoring, performance standards, and notice requirements to alert agencies prior to the integration of new AI features into a procured system. The updated guidance also has a heightened focus on promoting competition in the AI marketplace, requiring agencies to implement protections against vendor lock-in throughout the solicitation development, selection and award, and contract closeout phases. 

In tandem with these contractual obligations, agencies are required to monitor the ongoing performance of an AI system throughout the administration of a contract and to establish criteria for sunsetting the use of an AI system. One significant difference in OMB’s updated memos, however, is that these procurement obligations only apply to future contracts and renewals, whereas the prior version of OMB’s guidance extended a subset of these requirements to existing contracts for high-impact systems. 

Conclusion

As CDT highlighted when the first version of OMB’s guidance was published a year ago, while this revised guidance is an important step forward, implementation will be the most critical part of this process. OMB and federal agencies have an opportunity to use this updated guidance to address inconsistencies and gaps in AI governance practices across agencies, increasing the standardization and effectiveness of agencies’ adherence to these requirements even as they expand their use of AI. 

Ensuring adequate implementation of OMB’s memos is not only critical to promoting the effective use of taxpayer money, but is especially urgent given alarming reports about the opaque and potentially risky uses of AI at the hands of DOGE. The government has an obligation to lead by example by modeling what responsible AI innovation should look like in practice. These revised memos are a good start, but now it is time for federal agencies to walk the walk and not just talk the talk.

CDT Submits Comments Outlining Dangers of SSA About-Face Blocking Vulnerable Beneficiaries from Accessing Critical Benefits
https://cdt.org/insights/cdt-outlines-dangers-of-ssa-about-face-blocking-vulnerable-beneficiaries-from-accessing-critical-benefits/ (May 13, 2025)

Despite initially heeding an outpouring of concerns, many around accessibility for disabled beneficiaries, the Social Security Administration (SSA) appears to be forging ahead with plans to require in-person visits or access to an online account to complete certain phone-based transactions.

This about-face will block some of SSA’s most vulnerable beneficiaries from accessing critical benefits, including disabled and/or older people who disproportionately rely on telephone services. Though we appreciate SSA’s attention to the integrity of its programs, attempts to address fraud cannot make programs inaccessible to beneficiaries.

CDT has filed comments outlining the dangers of this approach to people with disabilities and older adults who depend on the SSA-administered benefits that they are entitled to receive.

Read the full comments.

CDT and the Leadership Conference Release New Analysis of Immigration, DOGE, and Data Privacy
https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-immigration-doge-and-data-privacy/ (May 12, 2025)

In March, CDT and the Leadership Conference’s Center for Civil Rights and Technology released a fact sheet examining some of the core issues related to the Department of Government Efficiency’s (DOGE) access to and use of sensitive information held by federal agencies. Since we released this analysis, not only has DOGE increased its efforts to access sensitive information across the federal government, but DOGE and federal law enforcement authorities have specifically sought to repurpose administrative data for immigration-related uses. 

As the federal government seeks to rapidly expand the use of sensitive data to target immigrants, CDT and the Leadership Conference developed a follow-up explainer that analyzes the issues surrounding federal immigration authorities and DOGE’s access and use of administrative data for immigration-related activities. This new explainer details:

  • The types of administrative data held by federal agencies, 
  • Examples of how federal administrative data is being repurposed for immigration-related efforts, 
  • The legal protections of federal administrative data and law enforcement exceptions, 
  • The impacts of government data access and use on immigrants and society, and
  • The unanswered questions about and potential future changes to the federal government’s access, use, and sharing of administrative data for immigration-related purposes. 

Repurposing federal administrative data for immigration-related activities may have widespread and significant impacts on the lives of U.S. citizens and non-citizen immigrants alike. Ensuring transparency into the actions of DOGE and federal immigration authorities is a critical step toward safeguarding data privacy for everyone.

Read the full analysis.

CDT Submits Comments to Representative Lori Trahan on Updating the Privacy Act of 1974
https://cdt.org/insights/cdt-submits-comments-to-representative-lori-trahan-on-updating-the-privacy-act-of-1974/ (April 30, 2025)

On April 30, the Center for Democracy & Technology (CDT) submitted comments to Representative Lori Trahan about reforming the Privacy Act of 1974 to address advances in technology and emerging threats to federal government data privacy. Our comments highlight potential privacy harms related to federal government data practices and provide an overview of CDT’s nearly two decades of advocacy on the Privacy Act.

We urge Congress to address gaps in the Privacy Act, including by:

  • Updating the definition of “system of records,” 
  • Limiting the “routine use” exemption, 
  • Expanding the Privacy Act to cover non-U.S. persons, and 
  • Strengthening privacy notices.

Read the full comments.

CDT Stands Up for Taxpayer Privacy
https://cdt.org/insights/cdt-stands-up-for-taxpayer-privacy/ (April 16, 2025)

The Center for Democracy & Technology has joined over 270 other organizations in a letter calling on Congress to stand up for taxpayer privacy just as millions of Americans are filing their tax returns. The letter decries a new Memorandum of Understanding (MOU) pursuant to which the Internal Revenue Service will share with the Department of Homeland Security taxpayer information regarding as many as seven million taxpayers that DHS suspects are undocumented. Taxpayers will have no prior notice that their information is being shared, and no opportunity to challenge the sharing of their information on a case-by-case basis before it is shared.

As stated in the letter, which was quarterbacked by the civil rights and advocacy NGO UnidosUS, the IRS-DHS MOU “… poses an unprecedented threat to taxpayer privacy protections that have been respected on a bipartisan basis for nearly 50 years.” Taxpayer information is protected by law against disclosure, and immigration enforcement is not a recognized exception to those protections.  We are calling for Congress to conduct oversight hearings, demand release of the MOU without redactions, and demand that the Treasury Department explain its novel interpretation of the law. 

Taxpayer privacy encourages taxpayer compliance. As CDT has pointed out, use of taxpayer information for immigration enforcement will create a huge disincentive for undocumented people to pay taxes, and will drive them further into the informal labor sector, where they are vulnerable to abuse. This will cost the Treasury billions in lost tax revenue. The IRS had urged undocumented people to file tax returns, and to encourage them to do so, gave assurances that information submitted for tax purposes would not be used for immigration enforcement. The IRS has reneged on those assurances, calling into question other taxpayer privacy commitments — including those imposed by law. 

Read the full letter.

AI in Local Government: How Counties & Cities Are Advancing AI Governance
https://cdt.org/insights/ai-in-local-government-how-counties-cities-are-advancing-ai-governance/ (April 15, 2025)

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and state education agencies’ AI guidance.

Introduction

While much attention has been paid to the use of AI by state and federal agencies, city and county governments are also increasingly using AI and should implement safeguards around public sector uses of these tools. City and county governments administer a wide range of public services – including transportation, healthcare, law enforcement, veterans services, and nutrition assistance, to name only a few – that have significant impacts on individuals’ health and safety. AI systems can assist in increasing the efficiency and effectiveness of local governments’ provision of such services, but without proper guardrails these same tools can also harm constituents and impede the safe, dignified, and fair delivery of public services.

In response to both the benefits and risks of using AI in local government, an increasing number of cities and counties have released AI policies and guidance. Organizations like the GovAI Coalition and the National Association of Counties are helping local governments craft and implement their own policies. In particular, the GovAI Coalition, a group of state and local public agencies working to advance responsible AI, created several template AI policies that a number of local agencies have since adopted as part of their own AI governance strategies.

To understand local trends, we analyzed public-facing policy documents from 21 cities and counties. Because most cities and counties do not make their internal IT policies publicly available, the following analysis may be skewed toward cities and counties that take proactive steps to disclose their AI policies. Analysis of publicly available AI policies and guidance at the local level reveals five common trends in AI governance. These policies:

  • Draw from federal, state, and other local AI governance guidance;
  • Emphasize that use of AI should align with existing legal obligations;
  • Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security;
  • Prioritize public transparency of AI uses; and
  • Advance accountability and human oversight in decision-making that incorporates AI.

AI Policy and Guidance at the County and City Level

Within the past several years, county and city governments across the country have published AI use policies and guidance to advance responsible AI uses and place guardrails on the ways they use the technology. Counties and cities are using various methods to regulate government AI use, including policies, guidelines, and executive orders. In addition, at least two cities – New York, NY, and San Francisco, Calif. – have enacted city ordinances requiring agencies to create public inventories of their AI use cases.
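
Neither ordinance prescribes a specific format for these inventories, but as a rough, hypothetical sketch, a single public use case inventory entry might capture fields along the following lines; every field name here is an assumption for illustration, not language from the New York or San Francisco ordinances.

    # Hypothetical sketch of one AI use case inventory entry; the fields are
    # illustrative assumptions, not requirements from either city's ordinance.
    inventory_entry = {
        "agency": "Department of Transportation",
        "use_case": "Automated triage of 311 pothole reports",
        "purpose": "Prioritize repair crews based on reported severity",
        "developed_by": "vendor",  # or "in-house"
        "uses_personal_data": False,
        "high_impact": False,
        "human_oversight": "Dispatcher confirms priority before scheduling",
        "date_deployed": "2024-08-01",
    }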

While many of these documents are not publicly accessible, several counties – Haines Borough, Alaska; Alameda County, Calif.; Los Angeles County, Calif.; Santa Cruz County, Calif.; Sonoma County, Calif.; Miami-Dade County, Fla.; Prince George’s County, Md.; Montgomery County, Md.; Washington County, Ore.; and Nashville and Davidson County, Tenn. – and city governments – Baltimore, Md.; Birmingham, Ala.; Boise, Idaho; Boston, Mass.; Lebanon, NH; Long Beach, Calif.; New York City, NY; San Francisco, Calif.; San Jose, Calif.; Seattle, Wash.; and Tempe, Ariz. – have publicly released their policies, providing important insight into key trends across jurisdictions. These policies span states that already have existing state-wide policies and those that do not. Regardless of state-level policy, however, additional county and city-level guidance can help clarify the roles and obligations of local agencies.

Trends in County and City AI Policies and Guidance

  1. Draw from federal, state, and other local AI governance guidance

At both the county and city level, governments are building on other local, state, and federal guidance as a starting point, mostly by borrowing language. Some of the most commonly cited or used resources are Boston’s AI guidelines, San Jose’s AI guidelines, the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework, and the Biden Administration’s since-rescinded AI Executive Order and AI Bill of Rights.

For example, the City of Birmingham, Ala.’s generative AI guidelines acknowledge that the authors drew inspiration from the City of Boston’s guidelines. Likewise, Miami-Dade County’s report on AI policies and guidelines draws from several other government resources, including the cities of Boston, San Jose, and Seattle, the state of Kansas, the White House, and NIST.

  2. Emphasize that use of AI should align with existing legal obligations

At least 15 of the guidance documents that we analyzed explicitly call out the necessity for public agencies to ensure their use of AI tools adheres to existing laws relating to topics such as cybersecurity, public records, and privacy. On the city front, San Jose, Calif.’s AI guidelines state that “users will need to comply with the California Public Records Act and other applicable public records laws” for all city uses of generative AI, and Tempe, Ariz. mentions that all city employees must “comply with applicable laws, standards and regulations related to AI and data protection.” Several counties similarly affirm public agencies’ obligations to use AI systems in compliance with existing laws. Nashville and Davidson County’s guidance states that “all AI and GenAI use shall comply with relevant data privacy laws and shall not violate any intellectual property use,” and Los Angeles County’s technology directive affirms that AI systems must be used in “adherence to relevant laws and regulations.”

Some cities and counties take an additional step by creating access controls to prevent unauthorized use and disclosure of personal information. Santa Cruz County, for example, prohibits the use of AI systems without authorization, and New York City specifies that employees can only use tools that have been “approved by responsible agency personnel” and are “authorized by agency-specific and citywide requirements.” Likewise, Haines Borough requires employees to have specific authorization to use any AI systems that handle sensitive information.

  3. Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security

Cities and counties commonly recognize the following three main risks of using AI:

  • Perpetuating bias: About 12 of the guidelines mention the potential for AI tools to produce biased outputs. One example of this at the city level is Lebanon, NH’s AI policy, which specifies the different types of bias issues that can show up with AI – biased training data, sampling bias, and stereotyping/societal biases – and expresses that “any biases that are identified must be addressed and corrective actions should be taken.” Alameda County, Calif., similarly highlights these issues, stating that “GenAI models can inadvertently amplify biases in the data the models are trained with or that users provide AI.”
  • Accuracy and unreliable outputs: At least 15 cities and counties discuss the unreliability of AI tools (due to issues such as hallucination), often addressing this by requiring employees to double-check or verify outputs before using AI-generated information in their work. For instance, Baltimore, Md.’s generative AI executive order prohibits city employees from using generative AI outputs without fact-checking and refining the content, especially if used for decision-making or in public communications. Guidance published by Washington County, Ore., directs county employees to “fact check and review all content generated by AI,” noting that “while Generative AI can rapidly produce clear prose, the information and content might be inaccurate, outdated, or entirely fictional.”
  • Privacy and security concerns: Roughly 18 city and county AI guidelines and policies state the importance of protecting privacy and security. These policies emphasize the potential privacy- and security-related harms if employees, for example, input personally identifiable or other sensitive information into an AI tool. The City of San Francisco, Calif., explains that a risk of using generative AI is “exposing non-public data as part of a training data set” and recommends that employees do not enter information that should not be public into non-enterprise generative AI tools. Long Beach, Calif., also recommends that city employees opt out of generative AI tools’ data collection and sharing whenever possible, and even provides a step-by-step guide on how to do so on ChatGPT. Sonoma County, Calif., notes that “there can be risks in using this technology, including… security and privacy concerns with inputting proprietary or confidential information about an employee, client, operations, etc. when interacting with the AI technology.”

  4. Prioritize public transparency of AI uses

Roughly 17 city and county guidelines and policies encourage, or even require, employees to publicly disclose use of AI tools. The City of Boise, Idaho, states that “disclosure builds trust through transparency,” encouraging employees to cite their AI usage in all cases, but especially for significant public communications or other important purposes. Seattle, Wash.’s generative AI policy goes even further on the principle of transparency, committing to make the city’s documentation related to its use of AI systems publicly available. Santa Cruz County, Calif., likewise requires employees, “when Generative AI contributed substantially to the development of a work product,” to include a notice that “indicate(s) the product and version used.”

  5. Advance accountability and human oversight in decision-making that incorporates AI

About 14 of the guidance documents stress that responsibility ultimately falls on city and county employees, either when using AI outputs or making decisions using AI tools. Some city governments even take this a step further by including enforcement mechanisms for non-compliance with their AI policies, including employee termination. One example is seen in guidance issued by Alameda County, Calif., which directs all employees to “thoroughly review and fact check all AI-generated content,” emphasizing that “you are responsible for what you create with GenAI assistance.” Another example is the City of Lebanon, NH, stating that employee non-compliance with the guidelines “may result in disciplinary action or restriction of access, and possibly even termination of employment.”

Conclusion

Regardless of the level of government, responsible AI adoption should follow the principles of transparency, accountability, and equity to ensure that AI tools are used to serve constituents in ways that improve their lives. Taking steps to responsibly implement and oversee AI will not only help local governments use these tools effectively but will also build public trust.

Similar to what state governors and lawmakers can do to advance public sector AI regulation, cities and counties should consider these components of AI governance:

  • Promote transparency and disclosure by documenting AI uses through public-facing use case inventories, such as those maintained by New York, NY and San Jose, Calif., and direct notices to individuals impacted by AI systems.
  • Implement substantive risk management practices for high-risk uses by requiring pre- and post-deployment testing and ongoing monitoring of systems with a significant impact on individuals’ rights, safety, and liberties. While specific risk management practices are not included in many local guidance documents, a growing number of state governments have issued requirements for measures like AI impact assessments, and these can serve as valuable resources for city and county governments to draw from.
  • Ensure proper human oversight by training government employees about the risks, limitations, and appropriate uses of AI, and empowering employees to intervene when potential harms are identified.
  • Incorporate community engagement by seeking direct public feedback about the design and implementation of AI. Some cities, like Long Beach, Calif., have already developed innovative approaches to engaging community members around the use of technology by public agencies.

Looking Back at AI Guidance Across State Education Agencies and Looking Forward
https://cdt.org/insights/looking-back-at-ai-guidance-across-state-education-agencies-and-looking-forward/ (April 15, 2025)

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and local AI governance efforts.

Artificial intelligence (AI) has shaken up the education sector, particularly since the public release of ChatGPT and other generative AI tools. School administrators, teachers, students, and parents have grappled with whether and how to use AI, amid fears of diminished student academic integrity and even more sinister concerns like the rising prevalence of deepfake non-consensual intimate imagery (NCII).

In response to AI taking classrooms by storm, the education agencies of over half of the states (Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Indiana, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Utah, Virginia, Washington, West Virginia, Wisconsin, Wyoming) and Puerto Rico have released guidance for districts and schools on the responsible use of AI in public education. These pieces of guidance vary in the types of AI systems they cover, with some solely focusing on generative AI and others encompassing AI more broadly. Analysis of current state education agencies’ (SEAs’) guidance reveals four primary trends:

  1. There is alignment on the potential benefits of AI in education.
  2. Education agencies acknowledge the base risks of AI use in schools.​
  3. Across the board, states emphasize the need for human oversight and investment in AI literacy/education.
  4. As a whole, SEA guidance is missing critical topics related to AI, such as how to meaningfully engage communities on the issue and how to approach deepfakes.

Below, we detail these trends; highlight what SEAs can do to advance responsible, rights-respecting use of AI in education in light of these trends; and explore a few particularly promising examples of SEA AI guidance.

Trends in SEAs’ AI Guidance

  1. Alignment on the potential benefits of AI in education

Guidance out of SEAs consistently recognizes the following four benefits of using and teaching AI in the classroom: 

  • Personalized learning: At least 17 SEAs cite personalized learning for students as a benefit of AI in education. Colorado’s AI roadmap, for instance, states that AI can support students by “tailor[ing] educational content to match each student’s learning pace and style and helping students learn more efficiently by offering individualized resources and strategies that align with their learning goals, styles, and needs.” Another example is Arizona’s generative AI guidance document, which highlights three different methods of personalized learning opportunities for students: interactive learning, AI coaching, and writing enhancement.
  • Expediting workflow and streamlining administrative processes: Roughly 13 SEAs mention AI’s potential benefit of speeding up or even automating tasks, such as writing emails or creating presentations. Washington mentions “streamlin[ing] operational and administrative functions” as an opportunity for AI use in education, and similarly, Oklahoma states that educators can use AI to “increase efficiency and productivity” through means like automating administrative tasks, thus freeing up time to focus on teaching.
  • Preparing students for the future workforce: Around 11 states discuss teaching AI and AI literacy to students now as essential in equipping them for future career opportunities, often predicting that AI tools will revolutionize the workforce. Indiana’s AI in education guidance states that “the ability to use and understand AI effectively is critical to a future where students will enroll in higher education, enlist in the military, or seek employment in the workforce.” Similarly, Delaware’s generative AI in education guidance explains that “students who learn how AI works are better prepared for future careers in a wide range of industries,” due to developing the skills of computational thinking, analyzing data critically, and evaluating the effectiveness of solutions.
  • Making education more accessible to underrepresented groups: At least 11 of the AI in education guidance documents tout AI as making education more accessible, especially for student populations like those with disabilities and English learners. For example, California’s Department of Education and Minnesota’s Department of Education both note that AI can improve access for marginalized populations through functions such as language translation assistance and generating audio descriptions for students with disabilities. In addition to these communities of students, North Dakota’s Department of Public Instruction also mentions that AI tools can make education more accessible for students in rural areas and students from economically disadvantaged backgrounds.

  2. Acknowledgement of the base risks of AI use in schools

The majority of SEA guidance documents enumerate commonly recognized risks of AI in education, namely:

  • Privacy harms: Roughly 20 states explicitly mention privacy harms as a risk or concern related to implementation of AI in education, especially as it pertains to personally identifiable information. For example, Hawaii’s AI in education guidance geared towards students urges them to be vigilant about protecting their privacy by avoiding sharing sensitive personal information with AI tools, such as their address and phone number. Another example is Mississippi’s Department of Education, which highlights that AI can “increase data privacy and security risks depending on the [vendor’s] privacy and data sharing policies.”
  • Inaccuracy of AI-generated outputs: At least 16 SEAs express concerns about AI tools’ ability to produce accurate information, often citing the common generative AI risk of hallucination. North Dakota’s Department of Public Instruction encourages high schoolers to learn about the limitations of AI and to have a “healthy skepticism” of tools due, in part, to the risk of inaccuracies in information. Along the same lines, Wyoming’s AI in education guidance affirms that students are always responsible for checking the accuracy of AI-generated content, and that school staff and students should critically evaluate all AI outputs. 
  • Reduction of students’ critical thinking skills: Around 10 SEAs discuss the risk of students becoming overreliant on AI tools, thus diminishing their necessary critical thinking skills. Puerto Rico’s Department of Education cites the risk of students and staff becoming dependent on AI tools, which can reduce skills such as critical thinking, creativity, independent decision-making, and quality of teaching. Another example is Arizona’s generative AI guidance, stating that overreliance on AI is a risk for both students and teachers – technology cannot replace the deep knowledge teachers have of their students, nor can it “improve student learning if it is used as a crutch.”
  • Perpetuation of bias: At least 22 states cite perpetuating bias as a risk of AI tools in the classroom. One of the ethical considerations that Louisiana puts forth is “avoiding potential biases in algorithms and data” when possible and placing safeguards during AI implementation to address bias. Virginia’s AI guidelines also affirm that the use of AI in education should do no harm, including “ensuring that algorithms are not based on inherent biases that lead to discriminatory outcomes.”
  • Unreliability of AI content detection tools: Many states also express skepticism about the use of AI content detection tools by educators to combat plagiarism, in part due to their unproven efficacy and risk of erroneously flagging non-native English speakers. For example, West Virginia’s Department of Education recommends that teachers do not use AI content detectors “due to concerns about their reliability,” and North Carolina’s generative AI guidance notes that AI detection tools “often create false positives, penalizing non-native speakers and creative writing styles.”

  3. Emphasis on the need for human oversight and investment in AI literacy and education

Across the board, SEAs also stress the importance of taking a human-centric approach to AI use in the classroom – emphasizing that AI is just a tool and users are still responsible for the decisions they make or work they submit. For example, the Georgia Department of Education’s AI guidance asserts that human oversight is critical and that “final decision-making should always involve human judgment.” Similarly, the Kentucky Department of Education emphasizes how vital having a human in the loop is, especially when AI makes decisions that could have significant consequences for individuals or society.

To equip school stakeholders with the skills necessary to be responsible users of AI, many SEA guidance documents also highlight the need for AI literacy and professional development and training for teachers. Colorado’s AI roadmap frequently mentions the need for both teachers and students to be given AI literacy education so that students are prepared to enter the future “AI-driven world.” The Oregon Department of Education’s AI guidance continually mentions the need for educators to be trained to address the equity impacts of generative AI, including training on topics like combating plagiarism and spotting inaccuracies in AI outputs.

  4. Exclusion of critical topics, such as meaningful community engagement and deepfakes

Creating mechanisms for robust community engagement allows districts and schools to make more informed decisions about AI procurement to ensure systems and their implementations directly respond to the needs and concerns of those the tools impact most. Some pieces of guidance mention including parents in conversations about AI adoption and implementation, but only in a one-way exchange (e.g., the school provides parents resources/information on how AI will be used safely in the classroom). North Carolina, West Virginia, Utah, Georgia, Connecticut, and Louisiana are the only states that talk about more meaningful engagement, like obtaining parental consent for students using AI tools at school, or including parents and other external stakeholders in the policymaking and decision-making processes. For example, Connecticut’s AI guidance states that parents and community members may have questions about AI use in their children’s school, so, “Leaders may consider forming an advisory around the use of technology generally and AI tools specifically to encourage a culture of learning and transparency, as well as to tap the expertise that community experts may offer.”

One of the most pernicious uses of AI that has become a large issue in schools across the country is the creation of deepfakes and deepfake NCII. CDT research has shown that in the 2023-2024 school year, around 40 percent of students said that they knew about a deepfake depicting someone associated with their school, and 15 percent of students reported that they knew about AI-generated deepfake NCII that depicted individuals associated with their school. The harms from using AI for bullying or harassment, including the creation of deepfakes and deepfake NCII, are mentioned in only roughly four of the guidance documents – those from Utah, Washington, West Virginia, and Connecticut. Utah’s AI in education guidance expresses that schools should prohibit students from “using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation,” and in the same vein, Washington’s Office of Superintendent of Public Instruction explicitly mentions that users should never utilize AI to “create misleading or inappropriate content, take someone’s likeness without permission, or harm humans or the community at large.”

What SEAs Can Do to Advance Responsible AI Use in Education

After analyzing the strengths and weaknesses of current SEAs’ AI guidance documents, the following emerge as priorities for effective guidance:

  1. Improve the form of the guidance itself
  • Tailor guidance for specific audiences: School administrators, teachers, students, and parents each have unique roles in ensuring AI is implemented and used responsibly, thus making it necessary for guidance to clearly define the benefits, risks, risk mitigation strategies, and available resources specific to each audience. Mississippi’s guidance serves as a helpful example of segmenting recommendations for specific groups of school stakeholders (e.g., students, teachers, and school administrators).
  • Ensure guidance is accessible: SEAs should ensure that guidance documents are written in plain language so that they are more accessible generally, but also specifically for individuals with disabilities. In addition, guidance released online should be in compliance with the Web Content Accessibility Guidelines as required by Title II of the Americans with Disabilities Act.
  • Publish guidance publicly: Making guidance publicly available for all school stakeholders is key in building accountability mechanisms, strengthening community education on AI, and building trust. It can also allow other states, districts, and schools to learn from other approaches to AI policymaking, thus strengthening efforts to ensure responsible AI use in classrooms across the country.
  2. Provide additional clarity on commonly mentioned topics
  • Promote transparency and disclosure of AI use and risk management practices: Students, parents, and other community members are often unaware of the ways that AI is being used in their districts and schools. To strengthen trust and build accountability mechanisms, SEAs should encourage public sharing about the AI tools being used, including the purposes for their use and whether they process student data. On the same front, guidance should also include audience-specific best practices to ensure students’ privacy, security, and civil rights are protected.
  • Include best practices for human oversight: The majority of current SEA guidance recognizes the importance of having a “human in the loop” when it comes to AI, but few get specific on what that means in practice. Guidance should include clear, audience-specific examples to showcase how individuals can employ the most effective human oversight strategies.
  • Be specific about what should be included in AI literacy/training programs: SEAs recognize the importance of AI literacy and training for school administrators, teachers, and students, but few pieces of guidance include what topics should be covered to best equip school stakeholders with the skills needed to be responsible AI users. Guidance can identify priority areas for these AI literacy/training programs, such as training teachers on how to respond when a student is accused of plagiarism or how students can verify the output of generative AI tools.
  3. Address important topics that are missing entirely
  • Incorporate community engagement throughout the AI lifecycle: Beyond school staff, students, parents, and other community members hold vital expertise, including their concerns and past experiences, that should be considered during AI policymaking and decision-making processes.
  • Articulate the risks of deepfake NCII: As previously mentioned, this topic was missing from most SEA guidance. This should be included, with a particular focus on encouraging implementation of policies that address the largest gaps: investing in prevention and supporting victims. 

Promising Examples of SEA AI Guidance

Current AI guidance from SEAs contains strengths and weaknesses, but three states stand out in particular for their detail and unique approaches:

North Carolina Department of Public Instruction

North Carolina’s generative AI guidance stands out for five key reasons:

  • Prioritizes community engagement: The guidance discusses the importance of community engagement when districts and schools are creating generative AI guidelines. It points out that having community expertise from groups like parents establishes a firm foundation for responsible generative AI implementation.
  • Encourages comprehensive AI literacy: The state encourages local education agencies (LEAs) to develop a comprehensive AI literacy program for staff to build a “common understanding and common language,” laying the groundwork for responsible use of generative AI in the classroom.
  • Provides actionable examples for school stakeholders: The guidance gives clear examples for concepts, such as how teachers can redesign assignments to combat cheating and a step-by-step academic integrity guide for students.
  • Highlights the benefit of built-for-purpose AI models: It explains that built-for-education tools, or built-for-purpose generative AI models, may be better options for districts or schools concerned with privacy.
  • Encourages transparency and accountability from generative AI vendors: The guidance provides questions for districts or schools to ask vendors when exploring various generative AI tools. One example of a question included to assess “evidence of impact” is, “Are there any examples, metrics, and/or case studies of positive impact in similar settings?”

Kentucky Department of Education

Three details of Kentucky’s AI guidance make it a strong example to highlight: 

  • Positions the SEA as a centralized resource for AI: It is one of the only pieces of guidance that positions the SEA as a resource and thought partner to districts who are creating their own AI policies. As part of the Kentucky Department of Education’s mission, the guidance states that the Department is committed to encouraging districts and schools by providing guidance and support and engaging districts and schools by fostering environments of knowledge-sharing.
  • Provides actionable steps for teachers to ensure responsible AI use: Similar to North Carolina, it provides guiding questions for teachers when considering implementing AI in the classroom. One sample question that teachers can ask is, “Am I feeding any sensitive or personal information/data to an AI that it can use or share with unauthorized people in the future?”
  • Prioritizes transparency: The guidance prioritizes transparency by encouraging districts and schools to provide understandable information to parents, teachers, and students on how an AI tool being used is making decisions or storing their data, and what avenues are available to hold systems accountable if errors arise.

Alabama State Department of Education

Alabama’s AI policy template stands out for four primary aspects:

  • Promotes consistent AI policies: Alabama takes a unique approach by creating a customizable AI policy template for LEAs to use and adapt. This allows for conceptual consistency in AI policymaking, while also leaving room for LEAs to include additional details necessary to govern AI use in their unique contexts.
  • Recognizes the importance of the procurement process: The policy template prioritizes the AI procurement process by including strong language about what details should be included in vendor contracts. It identifies two key statements that LEAs should require contractors to certify in writing: that “the AI model has been pre-trained and no data is being used to train a model to be used in the development of a new product,” and that “they have used a human-in-the-loop strategy during development, have taken steps to minimize bias as much as possible in the data selection process and algorithm development, and the results have met the expected outcomes.”
  • Provides detailed risk management practices: The template is unusually specific about the risk management practices that LEAs should adhere to. First, it specifies that the LEA will conduct compliance audits of the data used in AI systems and that, if changes need to be made to a system, the contractor will be required to submit a corrective action plan. Second, it requires the LEA to establish performance metrics to evaluate any procured AI system and ensure that it works as intended. Finally, as part of its risk management framework, the LEA should comply with the National Institute of Standards and Technology’s AI Risk Management Framework (RMF), conduct annual audits to ensure it remains in compliance with the RMF, identify risks and share them with vendors to create a remediation plan, and maintain a risk register for all AI systems (a hypothetical sketch of such a register entry follows this list).
  • Calls out the unique risks of facial recognition technology in schools: Alabama recognizes the specific risks of cameras with AI systems (or facial recognition technologies) on campuses and in classrooms, explicitly stating that LEAs need to be in compliance with federal and state laws.
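
To make the risk register idea concrete, here is a minimal sketch of what a single register entry along the lines Alabama’s template describes might contain. This is not drawn from the template itself: the template does not prescribe a data format, and the field names, example system, and values below are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RiskRegisterEntry:
    ai_system: str                    # the AI system being tracked
    identified_risks: List[str]       # risks identified and shared with the vendor
    remediation_plan: str             # plan developed with the vendor to address those risks
    performance_metrics: List[str]    # metrics used to confirm the system works as intended
    last_rmf_audit: str               # date of the most recent NIST AI RMF compliance audit
    corrective_action_required: bool  # whether the contractor must submit a corrective action plan

# Hypothetical example entry; the system, risks, and values are invented for illustration.
example = RiskRegisterEntry(
    ai_system="Vendor-provided AI writing-feedback tool",
    identified_risks=["Student work may be retained and used to train vendor models"],
    remediation_plan="Contract language barring retention; vendor attestation reviewed annually",
    performance_metrics=["Accuracy of feedback sampled by teachers", "Rate of flagged errors"],
    last_rmf_audit="2025-01-15",
    corrective_action_required=False,
)
```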

Conclusion

In the past few years, seemingly endless resources and information have become available to education leaders, aiming to help guide AI implementation and use. Although more information can be useful for navigating this emerging technology, the sheer volume has created an overwhelming environment, making it difficult to determine what best practice is and implying that AI integration is inevitable.

As SEAs continue to develop and implement AI guidance in 2025, it is critical that they first make clear that AI may not be the best solution to the problem an education agency or school is attempting to solve, and second, affirm what “responsible” use of AI in education means: creating a governance framework that allows AI tools to enhance children’s educational experiences while protecting their privacy and civil rights.

The post Looking Back at AI Guidance Across State Education Agencies and Looking Forward appeared first on Center for Democracy and Technology.

Exploring the 2024 Federal AI Inventories: Key Improvements, Trends, and Continued Inconsistencies https://cdt.org/insights/exploring-the-2024-federal-ai-inventories-key-improvements-trends-and-continued-inconsistencies/ Tue, 15 Apr 2025 13:39:09 +0000 https://cdt.org/?post_type=insight&p=108350 Introduction At the end of last year, U.S. federal agencies published the 2024 updates to their public-facing AI use case inventories. These most recent agency AI inventories mark a significant improvement from past years, providing greater transparency and unprecedented information about how one of the world’s largest governments is using AI. Most notably, the 2024 […]

Introduction

At the end of last year, U.S. federal agencies published the 2024 updates to their public-facing AI use case inventories. These most recent agency AI inventories mark a significant improvement from past years, providing greater transparency and unprecedented information about how one of the world’s largest governments is using AI. Most notably, the 2024 agency AI inventories include 1,400 more use cases than 2023’s, representing a 200% increase in reported use cases. 

The publication of these inventories reflects federal agencies’ continued commitment to meet their legal obligations to publicly disclose details about how they are using AI. Those requirements were first established under President Trump’s Executive Order 13960 in December 2020, and later enacted into law in 2022 with the passage of the bipartisan Advancing American AI Act. These requirements were recently reaffirmed by the Office of Management and Budget’s updated guidance on federal agencies’ use of AI, which states that agencies are required to submit and publish their AI use case inventories “at least annually.” 

Federal agencies’ AI use case inventories are more crucial now than ever, as many agencies seek to expand their uses of AI for everything from benefits administration to law enforcement. This is underscored by OMB’s directive to agencies to “accelerate the Federal use of AI,” and by reports that DOGE is using AI tools to make high-risk decisions about government operations and programs with little to no public transparency. The Trump Administration now has the opportunity to build on and improve federal agency AI use case inventories as a critical transparency measure for building public trust and confidence in the government’s growing use of this technology. 

CDT examined the 2023 federal AI inventories, and noted some of the challenges in navigating agency inventories as well as some of the common themes. The following analysis provides an update on what we shared previously, examining how federal agencies have taken steps toward improved reporting as well as detailing remaining gaps and inconsistencies that risk diminishing the public utility of agency AI inventories.

A Step in the Right Direction: Improved Reporting and Documentation

Since 2023, federal agencies have made important progress in the breadth and depth of information included in their AI inventories in several key ways. 

First, the Office of Management and Budget (OMB) created and published a more easily accessible centralized repository of all agency inventories. As CDT noted in our past analysis of agency inventories, it was previously difficult to find agency inventories in an accessible and easily navigable format, and this development is a clear improvement on this issue.

Second, the 2024 agency inventories include far greater reporting about the total number of AI use cases. Agencies reported just over three times as many use cases as last year, growing from 710 to 2,133 total use cases across the federal government. This large increase in reporting is likely due to the additional clarification provided by the updated reporting guidance published by OMB under President Biden, as well as a potential increase in federal agencies’ use of AI. While greater agency reporting is important, this increase also creates an overwhelming amount of information that does not necessarily give the public a clear picture of which systems have the greatest impacts on rights and safety. Going forward, it will be critical for agencies to maintain this reporting standard in order to track changes in agencies’ use of AI over time.
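
For readers reconciling the growth figures cited here and above (roughly 1,400 additional use cases, a 200% increase, and just over a threefold total), a quick back-of-the-envelope check using only the reported totals:

```python
# Simple sanity check of the reported inventory totals (not taken from the inventories themselves).
use_cases_2023 = 710
use_cases_2024 = 2133

increase = use_cases_2024 - use_cases_2023       # 1,423 additional reported use cases
pct_increase = increase / use_cases_2023 * 100   # ~200% increase over 2023
ratio = use_cases_2024 / use_cases_2023          # just over 3x the 2023 total

print(f"+{increase} use cases ({pct_increase:.0f}% increase, {ratio:.2f}x the 2023 total)")
```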

Finally, the updated agency inventories include significantly more detail about the risks and governance of specific use cases. As a result of OMB’s reporting guidance, agency inventories generally contain more information about each use case’s stage of development, deployment, data use, and other risk management practices. However, as detailed below, this information is reported inconsistently, undermining the usefulness of this greater degree of reporting.
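
As a rough illustration of the kind of per-use-case detail discussed in this piece, the sketch below models a single inventory entry using only fields mentioned in this analysis (stage of development, in-house versus procured, public dissemination, data use, risk classification, risk management practices, and waiver or extension status). It is a simplified, hypothetical approximation, not the actual OMB reporting schema, which is more extensive.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCaseEntry:
    """Hypothetical, simplified sketch of one federal AI use case inventory entry."""
    name: str                               # e.g., a public-facing chatbot
    agency: str
    component: Optional[str]                # sub-agency or bureau, if reported
    stage_of_development: Optional[str]     # e.g., "in development", "deployed"
    developed_in_house: Optional[bool]      # in-house build vs. procured system
    disseminates_to_public: Optional[bool]
    data_use: Optional[str]                 # description of the data powering the system
    risk_classification: Optional[str]      # "rights-impacting", "safety-impacting",
                                            # "neither", or "too new to fully assess"
    risk_management_practices: Optional[str]
    waiver_or_extension: Optional[str]      # left blank in most 2024 inventories

# A None value here mirrors the blank fields discussed later in this analysis.
```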

These improvements enable better understanding in two important ways: 

  1. Changes in agency AI use over time
  2. Additional detail about high-risk AI uses

Changes in agency AI use over time

CDT first published its analysis of agency AI inventories in the summer of 2023. In agencies’ 2023 inventories, we found that three of the most common use cases were chatbots, national security-related uses, and uses related to veterans’ mental health. The updated federal agency inventories from 2024 reflect many of the same trends. National security and veterans’ health care were common uses among a broader set of high-risk systems, as discussed in greater detail in the next section. Additionally, chatbots remain commonly used by a number of agencies, ranging from internally facing employee resource tools to externally facing tools used to educate the public about agencies’ resources. For instance, the Department of Agriculture reported use of a chatbot to assist employees from the Farm Service Agency in searching loan handbooks, and the U.S. Patent and Trademark Office within the Department of Commerce reported use of a public-facing chatbot to help answer questions about trademarks and patents. 

As noted in the federal CIO’s analysis of the 2024 inventories, roughly 46% of all AI use cases are “mission-enabling” uses related to “administrative and IT functions.” Several common use cases emerged in this year’s inventories that reflect this trend. 

First, a number of agencies reported uses of generative AI tools and large language models (LLMs) to analyze data, summarize information, and generate text, images, and code. For instance, the Department of Commerce’s Bureau of Economic Analysis reported use of an LLM-based chatbot to support text and data analysis, and the Department of Health and Human Services’ Centers for Disease Control and Prevention reported use of an enterprise-wide generative AI tool to edit written materials. 

Second, a significant number of agencies reported the use of AI tools to manage public input and requests for information. The following seven agencies all reported the use of AI tools to categorize and process public comments and claims:

  • Department of the Interior
  • Department of Health and Human Services
  • Department of Agriculture
  • Federal Fair Housing Agency
  • Federal Reserve
  • Securities and Exchange Commission
  • Department of Justice 

And, the following nine agencies reported the use of AI systems to automate portions of the FOIA process, such as redacting personally identifiable information (a toy illustration of this kind of redaction appears after the list below):

  • Department of Homeland Security
  • Department of the Interior
  • Department of Health and Human Services
  • National Science Foundation
  • Department of State
  • Equal Employment Opportunity Commission
  • National Archives and Records Administration
  • Department of Justice
  • Department of Transportation 
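
To give a flavor of what “automating portions of the FOIA process” can mean in the simplest case, here is a toy sketch of rule-based PII redaction. The inventories do not describe how agencies actually implement these tools, which are typically far more sophisticated and often machine-learning-based; the patterns below are illustrative assumptions only and would miss many real-world PII formats.

```python
import re

# Toy illustration only: real agency redaction systems are not described in the
# inventories, and these hand-written patterns would miss many PII formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a bracketed label, e.g., [REDACTED:EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Contact jane.doe@example.gov or 555-123-4567, SSN 123-45-6789."))
# Contact [REDACTED:EMAIL] or [REDACTED:PHONE], SSN [REDACTED:SSN].
```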

Additional details about high-risk AI uses

In addition to reporting about their overall AI use cases, OMB’s updated reporting guidance required agencies to indicate which uses are high-risk, which OMB defines as rights- and safety-impacting AI systems. This is an important addition to agency inventories because such high-risk uses have the greatest potential impact on individuals’ rights and liberties, including highly invasive surveillance tools and tools that determine access to a variety of government benefits and services. Across all publicly available agency AI inventories, the three most common categories of high-risk systems currently in use include:

  • Law enforcement and national security
  • Public benefits administration
  • Health and human services delivery and administration

Law enforcement and national security

The Department of Justice and Department of Homeland Security both reported a large number of high-risk law enforcement and national security-related use cases. AI use cases reported by the Department of Justice, for instance, include tools used to analyze data and video surveillance for criminal investigations, monitor vehicles and automatically read license plates, detect gunshots, predict prison populations and misconduct among incarcerated individuals, and track recidivism, among a number of other uses related to investigations, surveillance, and prison management. Such uses are concerning and in need of the utmost scrutiny because many of these technologies have proven to be frequently inaccurate, subject to inadequate scrutiny and excessive reliance, and prone to lead investigators astray; in the context of law enforcement actions, these mistakes can cause severe harm to individuals’ lives and liberty. 

Given how serious these risks are, it is alarming that, while the Department of Justice reported a high number of high-risk use cases (124 of the Department’s 240 total), none of its inventory entries contain any information about risk mitigation or general AI governance procedures, such as whether systems were developed in-house or procured, whether systems disseminate information to the public, and which demographic variables systems use. Moreover, a number of use cases included in the Department of Justice inventory do not have a risk classification because they are designated as “too new to fully assess.” Many other agencies similarly neglected to share such information, but these omissions are especially concerning in the context of use cases that pose such a significant threat to individuals’ rights, freedoms, and liberties. 

The Department of Homeland Security similarly reported a number of high-risk use cases: 34 of the Department’s 183 reported use cases. These tools span uses such as social media monitoring, border surveillance, facial recognition and other forms of biometric identification, automated device analytics, and predicting the risk that non-citizens under ICE’s management will abscond. 

Although the Department of Homeland Security’s inventory is helpful in assessing its law enforcement, immigration enforcement, and national security uses of AI, two omissions and ambiguities on facial recognition highlight the need for additional transparency. First, one use case listed in the Department’s inventory details Border Patrol use of facial recognition in the field, stating the technology is used to “facilitate biometric identification of individuals as they are encountered.” This leaves ambiguity as to whether facial recognition is used as the basis to detain individuals, or if it is merely a check to inform procedures for bringing an individual in for processing after a detainment decision has already been made. The former scenario would raise serious concerns, especially given how variable facial recognition’s accuracy is across field conditions. Second, the Department’s inventory does not include any mention of ICE using facial recognition in conjunction with DMV databases to find individuals’ identity and current address, a practice that has been publicly documented since 2019. Both of these issues highlight the need for the Department to clarify the extent to which specific AI technologies are used and to include all known use cases, even those that may have been discontinued. 

Public benefits administration

The Social Security Administration and the Department of Veterans Affairs both reported a significant number of high-risk use cases related to the administration of public benefits programs. These systems are used for a variety of purposes ranging from processing benefits claims to identifying fraudulent applications and predicting high-risk claims. The Social Security Administration, for example, reported using AI tools to analyze claims with a high likelihood of error, identify instances of overpayment within social security insurance cases, and to triage review of disability benefits determinations, to name only a few. Similarly, the Veterans Benefits Administration within the Department of Veterans Affairs reported using AI to identify fraudulent changes to veterans’ benefit payments and to process and summarize claims materials.   

Health and human services

The delivery and administration of health and human services was another core area of high-risk AI use cases, with a majority housed within the Department of Veterans Affairs, the largest healthcare system in the nation, and the Department of Health and Human Services. For instance, the Office of Refugee Resettlement within the Department of Health and Human Services’ Administration for Children and Families reported use of AI tools to aid in placing and monitoring the safety of refugee children. And, the Department of Veterans Affairs reported a vast number of healthcare and human services-related uses, ranging from clinical diagnostic tools to systems used to predict suicide and overdose risks among veterans. 

Remaining Gaps and Inconsistencies

Although the 2024 agency AI inventories offer greater insight into these core high-risk use cases across the government, there is still significant room for improvement. Most notably, numerous AI inventories contained inconsistent documentation and insufficient detail about compliance with required risk management practices. 

Insufficient detail

Under OMB’s guidance on federal agencies’ use of AI, agencies were permitted to issue waivers or extensions for certain risk management practices if an agency needed additional time to fulfill a requirement, or if a specific practice would increase risk or impede agency operations. Disappointingly, public reporting about these measures was overwhelmingly scarce across all agencies. The Department of Homeland Security, for example, was the only agency in the entire federal government to include specific information about the length of time for which extensions were issued. And, the Department of Housing and Urban Development was the only agency to report information about any waivers issued, while all other agencies merely left entire sections of their inventories blank without further explanation.

Lack of consistency

Beyond these gaps, inventory reporting is highly variable within and between federal agencies, including different levels of detail and different approaches to reporting and categorizing the risk level of use cases. Some agencies and subcomponents within agencies completed a majority of the fields required in their inventories while others, including other subcomponents within the same agency, left many of the same fields blank. In addition, many agencies classified very similar tools as having different levels of risk. For example, the Department of Housing and Urban Development classified an AI tool used for translation as rights-impacting while the Department of Homeland Security did not classify a similar translation tool as rights- or safety-impacting.

Across these inconsistencies, one of the greatest barriers to public understanding is that agencies are not required to report information about how they determined whether or not a particular use case is high-risk. Without this information, it remains difficult for the public to understand why similar systems used by different agencies have different risk classifications or why seemingly high-risk tools (such as AI tools used to redact personally identifiable information) are not designated as such. The Department of Homeland Security, however, stands apart from other agencies on this issue. Alongside their updated AI inventory, the Department of Homeland Security published a companion blog post that provides greater explanation about how the agency approached the completion of their updated inventory, including additional information about how the Department’s leadership made determinations about high-risk use cases and about the nature of extensions issued. This should serve as a model for other agencies to publicly communicate additional information about why and how AI governance decisions are made.

Conclusion

Agency AI use case inventories should not be an end unto themselves. Instead, they should serve as the foundation for agencies to build public accountability and trust about how they are using and governing AI tools. 

The value of these inventories as a transparency tool is further reinforced as state and local governments establish similar legal requirements for government agencies to publish AI use case inventories. At least 12 states have formally issued such requirements, through either legislation or executive order, and the updated federal inventories can serve as an important model for these and other states across the country.

OMB now has the opportunity to make significant improvements to federal agencies’ AI use case inventories heading into their 2025 updates. OMB’s recently updated guidance on federal agencies’ use of AI states that OMB will issue additional “detailed instructions to agencies regarding the inventory and its scope.” OMB should use these instructions as a tool to provide agencies with additional clarity about their obligations and to address the gaps and inconsistencies seen in the 2024 inventories. 

AI use case inventories are a critical transparency mechanism for public agencies at all levels of government. They push governments to document and disclose their myriad uses of AI, and the steps they’ve taken to mitigate risks to individuals’ rights and safety in a manner that is clear and accessible to the public. As federal agencies continue to meet their existing legal obligations, ensuring that agencies update their inventories in a timely manner and that their inventories are robust, detailed, and usable should be a key component of meeting this transparency goal.

The post Exploring the 2024 Federal AI Inventories: Key Improvements, Trends, and Continued Inconsistencies appeared first on Center for Democracy and Technology.

To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence https://cdt.org/insights/to-ai-or-not-to-ai-a-practice-guide-for-public-agencies-to-decide-whether-to-proceed-with-artificial-intelligence/ Tue, 25 Mar 2025 04:01:00 +0000 https://cdt.org/?post_type=insight&p=108021 This report was authored by Sahana Srinivasan Executive Summary Public agencies have significant incentives to adopt artificial intelligence (AI) in their delivery of services and benefits, particularly amid recent advancements in generative AI. In fact, public agencies have already been using AI for years in use cases ranging from chatbots that help constituents navigate agency […]

This report was authored by Sahana Srinivasan

Graphic for a CDT report, entitled “To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence.” Falling dark blue gradient of 1s and 0s.

Executive Summary

Public agencies have significant incentives to adopt artificial intelligence (AI) in their delivery of services and benefits, particularly amid recent advancements in generative AI. In fact, public agencies have already been using AI for years in use cases ranging from chatbots that help constituents navigate agency websites to fraud detection in benefit applications. Agencies’ resource constraints, as well as their desire to innovate, increase efficiency, and improve the quality of their services, all make AI and the potential benefits it often offers — automation of repetitive tasks, analysis of large swaths of data, and more — an attractive area to invest in. 

However, using AI to solve any problem or for any other agency use case should not be a foregone conclusion. There are limitations both to AI’s capabilities generally and to it being a logical fit for a given situation. Thus, agencies should engage in an explicit decision-making process before developing or procuring AI systems to determine whether AI is a viable option to solve a given problem and a stronger solution than non-AI alternatives. The agency should then repeatedly reevaluate its decision-making throughout the AI development lifecycle if it decides initially to proceed with an AI system. Vetting the use of AI is critical because inappropriate use of AI in government service and benefit delivery can undermine individuals’ rights and safety and waste resources. 

Despite the emergence of new frameworks, guidance, and recommendations to support the overall responsible use of AI by public agencies, there is a dearth of guidance on how to decide whether AI should be used in the first place, including how to compare it to other solutions and how to document and communicate that decision-making process to the public. This brief seeks to address this gap by proposing a four-step framework that public administrators can use to help them determine whether to proceed with an AI system for a particular use case: 

  • Identify priority problems for the public agency and its constituents: Agencies should identify and analyze specific problems they or their constituents face in service or benefit delivery to ensure that any new innovations are targeted to the most pressing needs. Agencies can identify problems and pain points in their service and benefit delivery through mechanisms such as existing agency data, news reports, and constituent engagement and feedback. Agencies should then vet the severity of their problem and set specific and measurable goals and baselines for what they hope their eventual solution accomplishes. 
  • Brainstorm potential solutions to priority problems: Agencies should identify a slate of solution options for their problem. These options may include AI systems but should also consider non-AI and nontechnological alternatives. Desk research, landscape analyses, consultation with other government agencies, and preliminary conversations with vendors can help agencies ensure that they have identified all options at their disposal before potentially focusing on AI. This report will detail preliminary options for solutions to common agency problems, including AI-based and non-AI options. 
  • Evaluate whether AI could be a viable solution before comparing alternatives: Agencies need to evaluate each potential solution on a set of criteria tailored to that solution before deciding on one with which to proceed. This guidance presents an AI Fit Assessment: four criteria that agencies can use to evaluate any solution that involves an AI-based system, and whose results indicate whether proceeding with an AI-based solution is viable. Agencies should adopt rubrics, no-go criteria, green flags, or other signals to determine how their evaluations on these four criteria translate into proceeding with or forgoing a solution (a hypothetical sketch of one such rubric follows this list). They should also reevaluate the AI Fit Assessment, their analysis of alternatives, and their decision to use AI throughout the development process, even if they initially decide to proceed with an AI-based solution. The criteria of the AI Fit Assessment are the following:
    • Evidence base: the level of evidence demonstrating a particular AI system’s capabilities, effectiveness, and appropriateness, specific to the use case and including evidence of its strengths over alternative solutions. 
    • Data quality: the availability and quality of data, from either the vendor or the agency, used to power the solution as well as the ethics of using that data. 
    • Organizational readiness: the agency’s level of preparedness to adopt and monitor AI, including its infrastructure, resources, buy-in, and technical talent. 
    • Risk assessments: the results of risk and/or impact assessments and any risk mitigation plans. 
    The results of the AI Fit Assessment will provide agencies with an analysis of an AI solution, which they can then weigh against separate analyses of non-AI alternatives to determine which solution to initially proceed with. While non-AI solutions can be evaluated using the AI Fit Assessment, not all of the questions will apply, and additional analysis may be needed.
  • Document and communicate agency decision-making on AI uses to the public: For at least all use cases in which they decide to proceed with an AI-based solution, agencies should document the analysis from the preceding three action steps — including their analysis of AI-based solutions, analysis of non-AI alternative solution options, and comparison of the options — and communicate these insights to the public. Communicating the rationale behind their AI use cases to the public helps agencies build constituents’ trust in both the agency itself and in any AI systems constituents interact with. For the sake of transparency and to help others navigate similar use cases, agencies can also consider documenting situations in which they decided against AI. 
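
As one hypothetical illustration of the rubric-and-no-go approach mentioned above, the sketch below scores the four AI Fit Assessment criteria and blocks any solution that trips a no-go condition. The criteria names come from the report; the 0–3 scale, the threshold, and the no-go logic are illustrative assumptions, not a scoring scheme the report prescribes.

```python
from dataclasses import dataclass

@dataclass
class AIFitAssessment:
    evidence_base: int             # 0-3: use-case-specific evidence of effectiveness
    data_quality: int              # 0-3: availability, quality, and ethics of the data
    organizational_readiness: int  # 0-3: infrastructure, resources, buy-in, technical talent
    risk_assessment: int           # 0-3: results of risk/impact assessments and mitigation plans
    no_go_triggered: bool          # e.g., an unmitigable risk to rights or safety

    def proceed_with_ai(self, threshold: int = 8) -> bool:
        """Proceed only if no no-go condition fires and the total score clears the bar."""
        if self.no_go_triggered:
            return False
        total = (self.evidence_base + self.data_quality
                 + self.organizational_readiness + self.risk_assessment)
        return total >= threshold

# Example: solid data and readiness, but thin evidence and unresolved risk concerns.
assessment = AIFitAssessment(evidence_base=1, data_quality=3,
                             organizational_readiness=2, risk_assessment=1,
                             no_go_triggered=False)
print(assessment.proceed_with_ai())  # False (7 < 8): weigh non-AI alternatives instead
```

Even an agency using a rubric like this would, per the report, still document its comparison against non-AI alternatives and revisit the assessment throughout development.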

Because this brief refers to any form of AI system when discussing AI, including algorithms that predict outcomes or classify data, the guidance can be used when considering whether to proceed with any type of AI use case. 

Most importantly, these action steps should assist public administrators in making informed decisions about whether the promises of AI can be realized in improving agencies’ delivery of services and benefits while still protecting individuals, particularly individuals’ privacy, safety, and civil rights. This decision-making process is especially critical to navigate responsibly when public agencies are considering moderate- or high-risk AI uses that affect constituents’ lives and could potentially affect safety or human rights.

Read the full report.

The post To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence appeared first on Center for Democracy and Technology.
