AI in Public Benefits Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/equity-in-civic-tech/ai-in-public-benefits/

OMB’s Revised AI Memos Exemplify Bipartisan Consensus on AI Governance Ideals, But Serious Questions Remain About Implementation
https://cdt.org/insights/ombs-revised-ai-memos-exemplify-bipartisan-consensus-on-ai-governance-ideals-but-serious-questions-remain-about-implementation/
Tue, 13 May 2025 20:12:25 +0000

On April 3, the Office of Management and Budget (OMB) released updated versions of its guidance to federal agencies on the use (M-25-21) and procurement (M-25-22) of AI. These memos were issued in response to statutory requirements in the AI in Government Act of 2020 and the Advancing American AI Act. The updated memos build on and streamline similar guidance on the use (M-24-10) and procurement (M-24-18) of AI first issued under the Biden Administration.

As OMB has worked to fulfill these legislative requirements, CDT has long advocated that it adopt measures to advance responsible AI practices across the federal government’s use and procurement of AI. Doing so will both protect people’s rights and interests and help ensure that government AI systems are effective and fit for purpose. The most recent OMB guidance retains many of the core AI governance measures that CDT has called for, ranging from heightened protections for high-risk use cases to centralized agency leadership. The updated guidance is especially important as the Trump Administration signals its interest in rapidly expanding the use of AI across federal agencies, including efforts by the Department of Government Efficiency (DOGE) to deploy AI tools to make a host of high-stakes decisions.

Encouragingly, the publication of this revised guidance confirms that there is bipartisan consensus around core best practices for ensuring the responsible use and development of AI by public agencies. But, while this updated guidance is promising on paper, there are significant unanswered questions about how it will be implemented in practice. The memos’ overarching goals and obligations, which aim to advance responsible AI innovation grounded in public trust and safety, appear to be in direct tension with the reported actions of DOGE and various federal agencies.

The true test of the strength and durability of this guidance will be in the efforts to implement and enforce these crucial safeguards over the coming months. In line with CDT’s ongoing advocacy, these memos provide agencies with a clear roadmap for mitigating the risks of AI systems and advancing public trust, through three avenues:

  • Intra- and Inter-Agency AI Governance
  • Risk Management Practices
  • Responsible AI Procurement

Intra- and Inter-Agency AI Governance

AI governance bodies and oversight practices facilitate the robust oversight of AI tools and the promotion of responsible innovation across the federal government. Critical AI governance practices — such as standardizing decision-making processes and appointing leaders specifically responsible for AI — enable agencies to fully assess the benefits and risks of a given system and implement appropriate safeguards across agency operations.

Significantly, OMB’s updated memos retain critical agency and government-wide AI governance structures that establish dedicated AI leadership and coordination functions aimed at supporting agencies’ safe and effective adoption of AI:

  • Agency chief AI officers: Each agency is required to retain or designate a Chief AI Officer (CAIO) responsible for managing the development, acquisition, use, and oversight of AI throughout the agency. These officials serve a critical role in coordinating with leaders across each agency and ensuring that agencies meet their transparency and risk management obligations.
  • Agency AI governance boards: Each agency is required to establish an interdisciplinary governance body — consisting of senior privacy, civil rights, civil liberties, procurement, and customer experience leaders, among others — tasked with developing and overseeing each agency’s AI policies. These governance boards help agencies ensure that a diverse range of internal stakeholders are involved throughout the AI policy development and implementation process, creating a structured forum for agency civil rights and privacy leaders to play a direct role in agency decision-making about AI.
  • Interagency chief AI officer council: OMB is required to convene an interagency council of CAIOs to support government-wide coordination on AI use and oversight. This council supports collaboration and information sharing across the government, allowing for agencies to learn from one another’s successes and failures.
  • Cross-functional procurement teams: Each agency is required to create a cross-functional team — including acquisition, cybersecurity, privacy, civil rights, and budgeting experts — to coordinate agency AI acquisitions. These teams help agencies to effectively identify and evaluate needed safeguards for each procurement and to successfully monitor the performance of acquired tools.  

Risk Management Practices

Not all AI use cases present the same risks to individuals and communities. For instance, an AI tool used to identify fraudulent benefits claims poses a significantly different set of risks than an AI tool used to categorize public comments submitted to an agency. It is therefore widely understood that certain high-risk uses should be subjected to increased scrutiny and care. 

Acknowledging the need to proactively identify and mitigate potential risks, OMB’s updated memos retain and streamline requirements for agencies to establish heightened risk management practices for systems used in high-risk settings. Building on a similar framework established under the previous OMB AI memos, the updated OMB memos define a category of “high-impact AI” use cases for which agencies must implement minimum risk management practices. This “high-impact AI” categorization consolidates the categories created under the previous versions of these memos, which set out two separate definitions, for “safety-impacting” and “rights-impacting” AI systems, each subject to similar minimum risk management practices. This unified category significantly simplifies agencies’ process for identifying high-risk systems by requiring only one determination as opposed to two.

In line with the earlier versions of these memos, the updated guidance requires agencies to establish the following heightened risk management practices for all “high-impact” use cases:

  • Pre-deployment testing and impact assessments: Agencies are required to conduct impact assessments and testing in real-world scenarios prior to deploying a tool. These processes help agencies proactively assess a system’s performance, identify potential impacts or harms, and develop risk mitigation strategies. 
  • Ongoing monitoring: Agencies are required to conduct periodic performance testing and oversight, allowing agencies to identify changes in a system’s use or function that may lead to harmful or unexpected outcomes.
  • Human training and oversight: Agencies are required to provide ongoing training about the use and risks of AI for agency personnel and to implement human oversight measures. These practices ensure that agency personnel have sufficient information to understand the impacts of the AI tools that they use and are empowered to intervene if harms occur. 
  • Remedy and appeal: Agencies are required to provide avenues for individuals to seek human review and appeal any AI-related adverse actions, ensuring that impacted individuals are able to seek redress for any negative outcomes that may result due to the use of AI. 
  • Public feedback: Agencies are required to seek public feedback about the development, use, and acquisition of AI systems, helping agencies make informed decisions about how AI can best serve the interests of the public.

While many of these core risk management requirements extend those set out under the previous OMB AI guidance, there are several notable differences in the updated OMB memos. First, the updated guidance allows for pilot programs to be exempted from the minimum risk management practices, so long as a pilot is time-bound, limited in scope, and approved by the agency CAIO. Second, the updated guidance removes several previously required minimum risk management practices, including requirements for agencies to provide notice to individuals impacted by an AI tool and to maintain an option for individuals to opt out of AI-enabled decisions. Third, the updated guidance no longer includes previous requirements for rights-impacting tools to undergo separate assessments on equity and discrimination, although impact assessments still require agencies to evaluate how systems use information related to protected classes and to describe mitigation measures used to prevent unlawful discrimination. Finally, the updated guidance narrows the definition of systems that are presumed to be “high-impact,” removing certain categories previously included in the definitions of “safety-impacting” and “rights-impacting” AI systems, such as AI systems used to maintain the integrity of elections and voting infrastructure and systems used to detect or measure human emotions.

Responsible AI Procurement

Many of the AI tools used by federal agencies are procured from, or developed with the support of, third-party vendors. Because of this, it is critical for agencies to establish additional measures for ensuring the efficacy, safety, and transparency of AI procurement. 

To meet this need, OMB’s updated memos simplify and build on many of the responsible AI procurement practices put in place by the initial version of OMB’s guidance. First, and most importantly, this updated guidance requires agencies to extend their minimum risk management practices to procured AI systems. Similar to OMB’s previous requirements, agencies are directed to proactively identify whether a system that they are seeking to acquire is likely high-impact and to disclose such information in a solicitation. And, once an agency is in the process of acquiring a high-impact AI tool, it is obligated to include contract language that ensures compliance with all minimum risk management practices. These measures ensure that the same protections are put in place whether a high-impact AI tool is developed in-house or acquired from a vendor.

Moreover, the updated guidance outlines additional obligations that agencies have to establish for all procured AI systems. To ensure that agency contracts contain sufficient protections, agencies are directed to include contract terms that address the intellectual property rights and use of government data, data privacy, ongoing testing and monitoring, performance standards, and notice requirements to alert agencies prior to the integration of new AI features into a procured system. The updated guidance also has a heightened focus on promoting competition in the AI marketplace, requiring agencies to implement protections against vendor lock-in throughout the solicitation development, selection and award, and contract closeout phases. 

In tandem with these contractual obligations, agencies are required to monitor the ongoing performance of an AI system throughout the administration of a contract and to establish criteria for sunsetting the use of an AI system. One significant difference in OMB’s updated memos, however, is that these procurement obligations only apply to future contracts and renewals, whereas the prior version of OMB’s guidance extended a subset of these requirements to existing contracts for high-impact systems. 

Conclusion

As CDT highlighted when the first version of OMB’s guidance was published a year ago, while this revised guidance is an important step forward, implementation will be the most critical part of this process. OMB and federal agencies have an opportunity to use this updated guidance to address inconsistencies and gaps in AI governance practices across agencies, increasing the standardization and effectiveness of agencies’ adherence to these requirements even as they expand their use of AI. 

Ensuring adequate implementation of OMB’s memos is not only critical to promoting the effective use of taxpayer money, but is especially urgent given alarming reports about the opaque and potentially risky uses of AI at the hands of DOGE. The government has an obligation to lead by example by modeling what responsible AI innovation should look like in practice. These revised memos are a good start, but now it is time for federal agencies to walk the walk and not just talk the talk.

CDT and the Leadership Conference Release New Analysis of Immigration, DOGE, and Data Privacy
https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-immigration-doge-and-data-privacy/
Mon, 12 May 2025 13:59:00 +0000

In March, CDT and the Leadership Conference’s Center for Civil Rights and Technology released a fact sheet examining some of the core issues related to the Department of Government Efficiency’s (DOGE) access to and use of sensitive information held by federal agencies. Since we released this analysis, not only has DOGE increased its efforts to access sensitive information across the federal government, but DOGE and federal law enforcement authorities have specifically sought to repurpose administrative data for immigration-related uses. 

As the federal government seeks to rapidly expand the use of sensitive data to target immigrants, CDT and the Leadership Conference developed a follow-up explainer that analyzes the issues surrounding federal immigration authorities and DOGE’s access and use of administrative data for immigration-related activities. This new explainer details:

  • The types of administrative data held by federal agencies, 
  • Examples of how federal administrative data is being repurposed for immigration-related efforts, 
  • The legal protections of federal administrative data and law enforcement exceptions, 
  • The impacts of government data access and use on immigrants and society, and
  • The unanswered questions about and potential future changes to the federal government’s access, use, and sharing of administrative data for immigration-related purposes. 

Repurposing federal administrative data for immigration-related activities may have widespread and significant impacts on the lives of U.S. citizens and non-citizen immigrants alike. Ensuring transparency into the actions of DOGE and federal immigration authorities is a critical step towards protecting and safeguarding data privacy for everyone.

Read the full analysis.

AI in Local Government: How Counties & Cities Are Advancing AI Governance
https://cdt.org/insights/ai-in-local-government-how-counties-cities-are-advancing-ai-governance/
Tue, 15 Apr 2025 14:23:40 +0000

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and state education agencies’ AI guidance.

Introduction

While much attention has been paid to the use of AI by state and federal agencies, city and county governments are also increasingly using AI and should implement safeguards around public sector uses of these tools. City and county governments administer a wide range of public services – including transportation, healthcare, law enforcement, veterans services, and nutrition assistance, to name only a few – that have significant impacts on individuals’ health and safety. AI systems can assist in increasing the efficiency and effectiveness of local governments’ provision of such services, but without proper guardrails these same tools can also harm constituents and impede the safe, dignified, and fair delivery of public services.

In response to both the benefits and risks of using AI in local government, an increasing number of cities and counties have released AI policies and guidance. Organizations like the GovAI Coalition and the National Association of Counties are helping local governments craft and implement their own policies. In particular, the GovAI Coalition, a group of state and local public agencies working to advance responsible AI, created several template AI policies that a number of local agencies have since adopted as part of their own AI governance strategies.

To understand local trends, we analyzed public-facing policy documents from 21 cities and counties. Because most cities and counties do not make their internal IT policies publicly available, the following analysis may be skewed toward cities and counties that take proactive steps to disclose their AI policies. Analysis of publicly available AI policies and guidance at the local level reveals five common trends in AI governance; these policies:

  • Draw from federal, state, and other local AI governance guidance;
  • Emphasize that use of AI should align with existing legal obligations;
  • Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security;
  • Prioritize public transparency of AI uses; and
  • Advance accountability and human oversight in decision-making that incorporates AI.

AI Policy and Guidance at the County and City Level

Within the past several years, county and city governments across the country have published AI use policies and guidance to advance responsible AI uses and place guardrails on the ways they use the technology. Counties and cities are using various methods to regulate government AI use, including policies, guidelines, and executive orders. In addition, at least two cities – New York, NY, and San Francisco, Calif. – have enacted city ordinances requiring agencies to create public inventories of their AI use cases.

While many of these documents are not publicly accessible, several counties – Haines Borough, Alaska; Alameda County, Calif.; Los Angeles County, Calif.; Santa Cruz County, Calif.; Sonoma County, Calif.; Miami-Dade County, Fla.; Prince George’s County, Md.; Montgomery County, Md.; Washington County, Ore.; and Nashville and Davidson County, Tenn. – and city governments – Baltimore, Md.; Birmingham, Ala.; Boise, Idaho; Boston, Mass.; Lebanon, NH; Long Beach, Calif.; New York City, NY; San Francisco, Calif.; San Jose, Calif.; Seattle, Wash.; and Tempe, Ariz. – have publicly released their policies, providing important insight into key trends across jurisdictions. These policies span states that already have existing state-wide policies and those that do not. Regardless of state-level policy, however, additional county and city-level guidance can help clarify the roles and obligations of local agencies.

Trends in County and City AI Policies and Guidance

  1. Draw from federal, state, and other local AI governance guidance

At both the county and city level, governments are building on other local, state, and federal guidance as a starting point, mostly by borrowing language. Some of the most commonly cited or used resources are Boston’s AI guidelines, San Jose’s AI guidelines, the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework, and the Biden Administration’s since-rescinded AI Executive Order and AI Bill of Rights.

For example, the City of Birmingham, Ala.’s generative AI guidelines acknowledge that the authors drew inspiration from the City of Boston’s guidelines. Likewise, Miami-Dade County’s report on AI policies and guidelines draws from several other government resources, including the cities of Boston, San Jose, and Seattle, the state of Kansas, the White House, and NIST.

  2. Emphasize that use of AI should align with existing legal obligations

At least 15 of the guidance documents that we analyzed explicitly call out the necessity for public agencies to ensure their use of AI tools adheres to existing laws relating to topics such as cybersecurity, public records, and privacy. On the city front, San Jose, Calif.’s AI guidelines state that “users will need to comply with the California Public Records Act and other applicable public records laws” for all city uses of generative AI, and Tempe, Ariz. mentions that all city employees must “comply with applicable laws, standards and regulations related to AI and data protection.” Several counties similarly affirm public agencies’ obligations to use AI systems in compliance with existing laws. Nashville and Davidson County’s guidance states that “all AI and GenAI use shall comply with relevant data privacy laws and shall not violate any intellectual property use,” and Los Angeles County’s technology directive affirms that AI systems must be used in “adherence to relevant laws and regulations.”

Some cities and counties take an additional step by creating access controls to prevent unauthorized use and disclosure of personal information. Santa Cruz County, for example, prohibits the use of AI systems without authorization, and New York City specifies that employees can only use tools that have been “approved by responsible agency personnel” and are “authorized by agency-specific and citywide requirements.” Likewise, Haines Borough requires employees to have specific authorization to use any AI systems that handle sensitive information.

  3. Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security

Cities and counties commonly recognize the following three main risks of using AI:

  • Perpetuating bias: About 12 of the guidelines mention the potential for AI tools to produce biased outputs. One example of this at the city level is Lebanon, NH’s AI policy, which specifies the different types of bias issues that can show up with AI – biased training data, sampling bias, and stereotyping/societal biases – and expresses that “any biases that are identified must be addressed and corrective actions should be taken.” Alameda County, Calif., similarly highlights these issues, stating that “GenAI models can inadvertently amplify biases in the data the models are trained with or that users provide AI.”
  • Accuracy and unreliable outputs: At least 15 cities and counties discuss the unreliability of AI tools (due to issues such as hallucination), often acknowledging this through requiring employees to double-check or verify outputs before using AI-generated information in their work. For instance, Baltimore, Md.’s generative AI executive order prohibits city employees from using generative AI outputs without fact-checking and refining the content, especially if used for decision-making or in public communications. Guidance published by Washington County, Oreg. directs county employees to “fact check and review all content generated by AI,” noting that “while Generative AI can rapidly produce clear prose, the information and content might be inaccurate, outdated, or entirely fictional.” 
  • Privacy and security concerns: Roughly 18 city and county AI guidelines and policies state the importance of protecting privacy and security. These policies emphasize the potential privacy- and security-related harms if employees, for example, input personally identifiable or other sensitive information into an AI tool. The City of San Francisco, Calif., explains that a risk of using generative AI is “exposing non-public data as part of a training data set” and recommends that employees do not enter information that should not be public into non-enterprise generative AI tools. Long Beach, Calif., also recommends that city employees opt out of generative AI tools’ data collection and sharing whenever possible, and even provides a step-by-step guide on how to do so on ChatGPT. Sonoma County, Calif., notes that “there can be risks in using this technology, including… security and privacy concerns with inputting proprietary or confidential information about an employee, client, operations, etc. when interacting with the AI technology.”

  4. Prioritize public transparency of AI uses

Roughly 17 city and county guidelines and policies encourage, or even require, employees to publicly disclose use of AI tools. The City of Boise, Idaho, states that “disclosure builds trust through transparency,” encouraging employees to cite their AI usage in all cases, but especially for significant public communications or other important purposes. Seattle, Wash.’s generative AI policy goes even further on the principle of transparency, committing to make the city’s documentation related to its use of AI systems publicly available. Santa Cruz County, Calif., similarly requires employees to include a notice “when Generative AI contributed substantially to the development of a work product” that “indicate(s) the product and version used.”

  5. Advance accountability and human oversight in decision-making that incorporates AI

About 14 of the guidance documents stress that responsibility ultimately falls on city and county employees, either when using AI outputs or making decisions using AI tools. Some city governments even take this a step further by including enforcement mechanisms for non-compliance with their AI policies, up to and including employee termination. One example is seen in guidance issued by Alameda County, Calif., which directs all employees to “thoroughly review and fact check all AI-generated content,” emphasizing that “you are responsible for what you create with GenAI assistance.” Another example is the City of Lebanon, NH, which states that employee non-compliance with the guidelines “may result in disciplinary action or restriction of access, and possibly even termination of employment.”

Conclusion

Regardless of the level of government, responsible AI adoption should follow the principles of transparency, accountability, and equity to ensure that AI tools are used to serve constituents in ways that improve their lives. Taking steps to responsibly implement and oversee AI will not only help local governments use these tools effectively but will also build public trust.

Similar to what state governors and lawmakers can do to advance public sector AI regulation, cities and counties should consider these components of AI governance:

  • Promote transparency and disclosure by documenting AI uses through public-facing use case inventories, such as those maintained by New York, NY and San Jose, Calif., and direct notices to individuals impacted by AI systems.
  • Implement substantive risk management practices for high-risk uses by requiring pre- and post-deployment testing and ongoing monitoring of systems with a significant impact on individuals’ rights, safety, and liberties. While specific risk management practices are not included in many local guidance documents, a growing number of state governments have issued requirements for measures like AI impact assessments, and these can serve as valuable resources for city and county governments to draw from.
  • Ensure proper human oversight by training government employees about the risks, limitations, and appropriate uses of AI, and empowering employees to intervene when potential harms are identified.
  • Incorporate community engagement by seeking direct public feedback about the design and implementation of AI. Some cities, like Long Beach, Calif., have already developed innovative approaches to engaging community members around the use of technology by public agencies.

Exploring the 2024 Federal AI Inventories: Key Improvements, Trends, and Continued Inconsistencies
https://cdt.org/insights/exploring-the-2024-federal-ai-inventories-key-improvements-trends-and-continued-inconsistencies/
Tue, 15 Apr 2025 13:39:09 +0000

Introduction

At the end of last year, U.S. federal agencies published the 2024 updates to their public-facing AI use case inventories. These most recent agency AI inventories mark a significant improvement from past years, providing greater transparency and unprecedented information about how one of the world’s largest governments is using AI. Most notably, the 2024 agency AI inventories include 1,400 more use cases than 2023’s, representing a 200% increase in reported use cases. 

The publication of these inventories reflects federal agencies’ continued commitment to meet their legal obligations to publicly disclose details about how they are using AI. Those requirements were first established under President Trump’s Executive Order 13960 in December 2020, and later enacted into law in 2022 with the passage of the bipartisan Advancing American AI Act. These requirements were recently reaffirmed by the Office of Management and Budget’s updated guidance on federal agencies’ use of AI, which states that agencies are required to submit and publish their AI use case inventories “at least annually.” 

Federal agencies’ AI use case inventories are more crucial now than ever, as many agencies seek to expand their uses of AI for everything from benefits administration to law enforcement. This is underscored by OMB’s directive to agencies to “accelerate the Federal use of AI,” and by reports that DOGE is using AI tools to make high-risk decisions about government operations and programs with little to no public transparency. The Trump Administration now has the opportunity to build on and improve federal agency AI use case inventories as a critical transparency measure for building public trust and confidence in the government’s growing use of this technology. 

CDT examined the 2023 federal AI inventories, and noted some of the challenges in navigating agency inventories as well as some of the common themes. The following analysis provides an update on what we shared previously, examining how federal agencies have taken steps toward improved reporting as well as detailing remaining gaps and inconsistencies that risk diminishing the public utility of agency AI inventories.

A Step in the Right Direction: Improved Reporting and Documentation

Since 2023, federal agencies have made important progress in the breadth and depth of information included in their AI inventories in several key ways. 

First, the Office of Management and Budget (OMB) created and published a more easily accessible centralized repository of all agency inventories. As CDT noted in our past analysis of agency inventories, it was previously difficult to find agency inventories in an accessible and easily navigable format, and this development is a clear improvement on this issue.

Second, the 2024 agency inventories include far greater reporting about the total number of AI use cases. Agencies reported roughly three times as many use cases as last year, growing from 710 to 2,133 total use cases across the federal government. This large increase in reporting is likely due to the additional clarification provided by the updated reporting guidance published by OMB under President Biden, as well as potential increased use of AI by federal agencies. While greater agency reporting is important, this increase also creates an overwhelming amount of information that does not necessarily give the public a clear picture of which systems have the greatest impacts on rights and safety. Going forward, it will be critical for agencies to maintain this reporting standard in order to track changes in agencies’ use of AI over time.
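
For readers who want to reconcile these figures with the “1,400 more use cases” and “200% increase” cited in the introduction, a quick arithmetic check using only the reported totals looks like this:

    # Reconciling the reported inventory totals (710 in 2023, 2,133 in 2024)
    # with the "roughly 1,400 more" and "200% increase" figures cited earlier.
    cases_2023 = 710
    cases_2024 = 2133

    absolute_increase = cases_2024 - cases_2023        # 1,423 -> "roughly 1,400 more"
    percent_increase = absolute_increase / cases_2023  # ~2.00 -> "a 200% increase"
    growth_factor = cases_2024 / cases_2023            # ~3.0  -> "roughly three times as many"

    print(f"Absolute increase: {absolute_increase}")     # 1423
    print(f"Percent increase:  {percent_increase:.0%}")  # 200%
    print(f"Growth factor:     {growth_factor:.1f}x")    # 3.0x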

Finally, the updated agency inventories include significantly more detail about the risks and governance of specific use cases. As a result of OMB’s reporting guidance, agency inventories generally contain more information about each use case’s stage of development, deployment, data use, and other risk management practices. However, as detailed below, this information is reported inconsistently, undermining the usefulness of this greater degree of reporting.
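
To make concrete the kind of information a single inventory entry now carries, the sketch below models one entry as a simple record. The field names and example values are hypothetical simplifications for illustration, not OMB’s actual reporting schema:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class AIUseCaseEntry:
        """Hypothetical, simplified representation of one AI use case inventory entry."""
        agency: str
        use_case_name: str
        purpose: str
        stage_of_development: str           # e.g., "in production", "pilot"
        is_high_impact: Optional[bool]      # rights- or safety-impacting designation
        developed_in_house: Optional[bool]  # in-house development vs. procurement
        data_sources: list[str] = field(default_factory=list)
        risk_management_practices: list[str] = field(default_factory=list)

    # Example entry, loosely based on a use case described later in this post
    example = AIUseCaseEntry(
        agency="Department of Commerce / USPTO",
        use_case_name="Public-facing trademark and patent chatbot",
        purpose="Answer public questions about trademarks and patents",
        stage_of_development="in production",
        is_high_impact=False,
        developed_in_house=None,  # left blank, as many real entries are
    )

Many of the inconsistencies discussed later in this post amount to fields like these being left blank or reported differently across agencies and subcomponents.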

These improvements enable better understanding in two important ways: 

  1. Changes in agency AI use over time
  2. Additional detail about high-risk AI uses

Changes in agency AI use over time

CDT first published its analysis of agency AI inventories in the summer of 2023. In agencies’ 2023 inventories, we found that three common use cases included chatbots, national security-related uses, and uses related to veterans’ mental health. The updated federal agency inventories from 2024 reflect many of the same trends. National security and veterans’ health care were common uses among a broader set of high-risk systems, as discussed in greater detail in the next section. Additionally, chatbots remain commonly used by a number of agencies, ranging from internally-facing employee resource tools to externally-facing tools used to educate the public about agencies’ resources. For instance, the Department of Agriculture reported use of a chatbot to assist employees from the Farm Service Agency in searching loan handbooks, and the U.S. Patent and Trademark Office within the Department of Commerce reported use of a public-facing chatbot to help answer questions about trademarks and patents.

As noted in the federal CIO’s analysis of the 2024 inventories, roughly 46% of all AI use cases are “mission-enabling” uses related to “administrative and IT functions.” Several common use cases emerged in this year’s inventories that reflect this trend. 

First, a number of agencies reported uses of Generative AI tools and large language models (LLMs) to analyze data, summarize information, and generate text, images, and code. For instance, the Department of Commerce’s Bureau of Economic Analysis reported use of an LLM-based chatbot to support text and data analysis, and the Department of Health and Human Services’ Centers for Disease Control and Prevention reported use of an enterprise-wide Generative AI tool to edit written materials.

Second, a significant number of agencies reported the use of AI tools to manage public input and requests for information. The following seven agencies all reported the use of AI tools to categorize and process public comments and claims:

  • Department of the Interior
  • Department of Health and Human Services
  • Department of Agriculture
  • Federal Housing Finance Agency
  • Federal Reserve
  • Securities and Exchange Commission
  • Department of Justice 

And, the following nine agencies reported the use of AI systems to automate portions of the FOIA process, such as redacting personally identifiable information:

  • Department of Homeland Security
  • Department of the Interior
  • Department of Health and Human Services
  • National Science Foundation
  • Department of State
  • Equal Employment Opportunity Commission
  • National Archives and Records Administration
  • Department of Justice
  • Department of Transportation 

Additional details about high-risk AI uses

In addition to reporting their overall AI use cases, agencies were required under OMB’s updated reporting guidance to indicate which uses are high-risk, meaning rights- and safety-impacting AI systems. This is an important addition to agency inventories because such high-risk uses have the greatest potential impact on individuals’ rights and liberties, including highly invasive surveillance tools and tools that determine access to a variety of government benefits and services. Across all publicly available agency AI inventories, the three most common categories of high-risk systems currently in use include:

  • Law enforcement and national security
  • Public benefits administration
  • Health and human services delivery and administration

Law enforcement and national security

The Department of Justice and Department of Homeland Security both reported a large number of high-risk law enforcement and national security-related use cases. AI use cases reported by the Department of Justice, for instance, include tools used to analyze data and video surveillance for criminal investigations, monitor vehicles and automatically read license plates, detect gunshots, predict prison populations and misconduct among incarcerated individuals, and track recidivism, among a number of other uses related to investigations, surveillance, and prison management. Such uses are concerning and in need of the utmost scrutiny because many of these technologies have proven to be frequently inaccurate, subject to inadequate oversight and excessive reliance, and prone to lead investigators astray; in the context of law enforcement actions, these mistakes can cause severe harm to individuals’ lives and liberty.

Given how serious these risks are, it is alarming that, while the Department of Justice reported a high number of high-risk use cases (124 of the Department’s 240 total), none of its inventory entries contain any information about risk mitigation or general AI governance procedures, such as whether systems were developed in-house or procured, whether systems disseminate information to the public, and which demographic variables systems use. Moreover, a number of use cases included in the Department of Justice inventory do not have a risk classification because they are designated as “too new to fully assess.” Many other agencies similarly neglected to share such information, but these omissions are especially concerning in the context of use cases that pose such a significant threat to individuals’ rights, freedom, and liberties.

The Department of Homeland Security similarly reported a number of high-risk use cases: 34 of the Department’s 183 reported use cases. These tools span uses such as social media monitoring, border surveillance, facial recognition and other forms of biometric identification, automated device analytics, and predicting the risk that non-citizens under ICE’s management will abscond.

Although the Department of Homeland Security’s inventory is helpful in assessing its law enforcement, immigration enforcement, and national security uses of AI, two omissions and ambiguities on facial recognition highlight the need for additional transparency. First, one use case listed in the Department’s inventory details Border Patrol use of facial recognition in the field, stating the technology is used to “facilitate biometric identification of individuals as they are encountered.” This leaves ambiguity as to whether facial recognition is used as the basis for detaining individuals, or merely as a check to inform procedures for bringing an individual in for processing after a detainment decision has already been made. The former scenario would raise serious concerns, especially given how variable facial recognition’s accuracy is across field conditions. Second, the Department’s inventory does not include any mention of ICE using facial recognition in conjunction with DMV databases to identify individuals and find their current addresses, a practice that has been publicly documented since 2019. Both of these issues highlight the need for the Department to clarify the extent to which specific AI technologies are used and to include all known use cases, even those that may have been discontinued.

Public benefits administration

The Social Security Administration and the Department of Veterans Affairs both reported a significant number of high-risk use cases related to the administration of public benefits programs. These systems are used for a variety of purposes ranging from processing benefits claims to identifying fraudulent applications and predicting high-risk claims. The Social Security Administration, for example, reported using AI tools to analyze claims with a high likelihood of error, to identify instances of overpayment within Social Security insurance cases, and to triage review of disability benefits determinations, to name only a few. Similarly, the Veterans Benefits Administration within the Department of Veterans Affairs reported using AI to identify fraudulent changes to veterans’ benefit payments and to process and summarize claims materials.

Health and human services

The delivery and administration of health and human services was another core area of high-risk AI use cases, with a majority housed within the Department of Veterans Affairs, which operates the largest healthcare system in the nation, and the Department of Health and Human Services. For instance, the Office of Refugee Resettlement within the Department of Health and Human Services’ Administration for Children and Families reported use of AI tools to aid in placing and monitoring the safety of refugee children. And, the Department of Veterans Affairs reported a vast number of healthcare and human services-related uses, ranging from clinical diagnostic tools to systems used to predict suicide and overdose risks among veterans.

Remaining Gaps and Inconsistencies

Although the 2024 agency AI inventories offer greater insight into these core high-risk use cases across the government, there is still significant room for improvement. Most notably, numerous AI inventories contained inconsistent documentation and insufficient detail about compliance with required risk management practices. 

Insufficient detail

Under OMB’s guidance on federal agencies’ use of AI, agencies were permitted to issue waivers or extensions for certain risk management practices if an agency needed additional time to fulfill a requirement, or if a specific practice would increase risk or impede agency operations. Disappointingly, public reporting about these measures was overwhelmingly scarce across all agencies. The Department of Homeland Security, for example, was the only agency in the entire federal government to include specific information about the length of time for which extensions were issued. And, the Department of Housing and Urban Development was the only agency to report information about any waivers issued, while all other agencies merely left entire sections of their inventories blank without further explanation.

Lack of consistency

Beyond these gaps, inventory reporting is highly variable within and between federal agencies, including different levels of detail and different approaches to reporting and categorizing the risk level of use cases. Some agencies and subcomponents within agencies completed a majority of the fields required in their inventories while others, including other subcomponents within the same agency, left many of the same fields blank. In addition, many agencies classified very similar tools as having different levels of risk. For example, the Department of Housing and Urban Development classified an AI tool used for translation as rights-impacting while the Department of Homeland Security did not classify a similar translation tool as rights- or safety-impacting.

Across these inconsistencies, one of the greatest barriers to public understanding is that agencies are not required to report information about how they determined whether or not a particular use case is high-risk. Without this information, it remains difficult for the public to understand why similar systems used by different agencies have different risk classifications or why seemingly high-risk tools (such as AI tools used to redact personally identifiable information) are not designated as such. The Department of Homeland Security, however, stands apart from other agencies on this issue. Alongside their updated AI inventory, the Department of Homeland Security published a companion blog post that provides greater explanation about how the agency approached the completion of their updated inventory, including additional information about how the Department’s leadership made determinations about high-risk use cases and about the nature of extensions issued. This should serve as a model for other agencies to publicly communicate additional information about why and how AI governance decisions are made.

Conclusion

Agency AI use case inventories should not be an end unto themselves. Instead, they should serve as the foundation for agencies to build public accountability and trust about how they are using and governing AI tools. 

The value of these inventories as a transparency tool is further reinforced as state and local governments establish similar legal requirements for government agencies to publish AI use case inventories. At least 12 states have formally issued such requirements, through either legislation or executive order, and the updated federal inventories can serve as an important model for these and other states across the country.

OMB now has the opportunity to make significant improvements to federal agencies’ AI use case inventories heading into their 2025 updates. OMB’s recently updated guidance on federal agencies’ use of AI states that OMB will issue additional “detailed instructions to agencies regarding the inventory and its scope.” OMB should use these instructions as a tool to provide agencies with additional clarity about their obligations and to address the gaps and inconsistencies seen in the 2024 inventories. 

AI use case inventories are a critical transparency mechanism for public agencies at all levels of government. They push governments to document and disclose their myriad uses of AI, and the steps they’ve taken to mitigate risks to individuals’ rights and safety in a manner that is clear and accessible to the public. As federal agencies continue to meet their existing legal obligations, ensuring that agencies update their inventories in a timely manner and that their inventories are robust, detailed, and usable should be a key component of meeting this transparency goal.

To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence
https://cdt.org/insights/to-ai-or-not-to-ai-a-practice-guide-for-public-agencies-to-decide-whether-to-proceed-with-artificial-intelligence/
Tue, 25 Mar 2025 04:01:00 +0000

This report was authored by Sahana Srinivasan.

Graphic for a CDT report, entitled “To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence.” Falling dark blue gradient of 1s and 0s.

Executive Summary

Public agencies have significant incentives to adopt artificial intelligence (AI) in their delivery of services and benefits, particularly amid recent advancements in generative AI. In fact, public agencies have already been using AI for years in use cases ranging from chatbots that help constituents navigate agency websites to fraud detection in benefit applications. Agencies’ resource constraints, as well as their desire to innovate, increase efficiency, and improve the quality of their services, all make AI and the potential benefits it often offers — automation of repetitive tasks, analysis of large swaths of data, and more — an attractive area to invest in. 

However, using AI to solve a given problem or for any other agency use case should not be a foregone conclusion. There are limitations both to AI’s capabilities generally and to whether it is a logical fit for a given situation. Thus, agencies should engage in an explicit decision-making process before developing or procuring AI systems to determine whether AI is a viable option to solve a given problem and a stronger solution than non-AI alternatives. The agency should then repeatedly reevaluate its decision-making throughout the AI development lifecycle if it decides initially to proceed with an AI system. Vetting the use of AI is critical because inappropriate use of AI in government service and benefit delivery can undermine individuals’ rights and safety and waste resources.

Despite the emergence of new frameworks, guidance, and recommendations to support the overall responsible use of AI by public agencies, there is a dearth of guidance on how to decide whether AI should be used in the first place, including how to compare it to other solutions and how to document and communicate that decision-making process to the public. This brief seeks to address this gap by proposing a four-step framework that public administrators can use to help them determine whether to proceed with an AI system for a particular use case: 

  • Identify priority problems for the public agency and its constituents: Agencies should identify and analyze specific problems they or their constituents face in service or benefit delivery to ensure that any new innovations are targeted to the most pressing needs. Agencies can identify problems and pain points in their service and benefit delivery through mechanisms such as existing agency data, news reports, and constituent engagement and feedback. Agencies should then vet the severity of their problem and set specific and measurable goals and baselines for what they hope their eventual solution accomplishes. 
  • Brainstorm potential solutions to priority problems: Agencies should identify a slate of solution options for their problem. These options may include AI systems but should also consider non-AI and nontechnological alternatives. Desk research, landscape analyses, consultation with other government agencies, and preliminary conversations with vendors can help agencies ensure that they have identified all options at their disposal before potentially focusing on AI. This report will detail preliminary options for solutions to common agency problems, including AI-based and non-AI options. 
  • Evaluate whether AI could be a viable solution before comparing alternatives: Agencies need to evaluate each potential solution on a set of criteria tailored to that solution before deciding on one with which to proceed. This guidance presents an AI Fit Assessment: four criteria that agencies can use to evaluate any solution that involves an AI-based system. Agencies can use this resulting analysis to decide whether proceeding with an AI-based solution is viable. Agencies should adopt rubrics, no-go criteria, green flags, or other signals to determine how their evaluations of solutions on these four criteria correspond to proceeding with or forgoing a solution. They should also reevaluate the AI Fit Assessment, their analysis of alternatives, and their decision to use AI throughout the development process, even if they initially decide to proceed with an AI-based solution. The criteria of the AI Fit Assessment are the following (a simplified sketch of how such a rubric might work appears after this list):
    • Evidence base: the level of evidence demonstrating a particular AI system’s capabilities, effectiveness, and appropriateness, specific to the use case and including evidence of its strengths over alternative solutions. 
    • Data quality: the availability and quality of data, from either the vendor or the agency, used to power the solution as well as the ethics of using that data. 
    • Organizational readiness: the agency’s level of preparedness to adopt and monitor AI, including its infrastructure, resources, buy-in, and technical talent. 
    • Risk assessments: the results of risk and/or impact assessments and any risk mitigation plans. 
    The results of the AI Fit Assessment will provide agencies with an analysis of an AI solution, which they can then weigh against separate analyses of non-AI alternatives to determine which solution to proceed with initially. While non-AI solutions can be evaluated using the AI Fit Assessment, not all of the questions will apply, and additional analysis may be needed.
  • Document and communicate agency decision-making on AI uses to the public: For at least all use cases in which they decide to proceed with an AI-based solution, agencies should document the analysis from the preceding three action steps — including their analysis of AI-based solutions, analysis of non-AI alternative solution options, and comparison of the options — and communicate these insights to the public. Communicating the rationale behind their AI use cases to the public helps agencies build constituents’ trust in both the agency itself and in any AI systems constituents interact with. For the sake of transparency and to help others navigate similar use cases, agencies can also consider documenting situations in which they decided against AI. 

Because this brief refers to any form of AI system when discussing AI, including algorithms that predict outcomes or classify data, the guidance can be used when considering whether to proceed with any type of AI use case. 

Most importantly, these action steps should assist public administrators in making informed decisions about whether the promises of AI can be realized in improving agencies’ delivery of services and benefits while still protecting individuals, particularly individuals’ privacy, safety, and civil rights. This decision-making process is especially critical to navigate responsibly when public agencies are considering moderate- or high-risk AI uses that affect constituents’ lives and could potentially affect safety or human rights.

Read the full report.

The post To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence appeared first on Center for Democracy and Technology.

CDT and The Leadership Conference Release New Analysis of DOGE, Government Data, and Privacy Trends https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-doge-government-data-and-privacy-trends/ Wed, 19 Mar 2025 18:35:50 +0000 https://cdt.org/?post_type=insight&p=107987 Six weeks ago, we shared CDT’s initial analysis of two lawsuits that alleged violations of long-standing privacy protections of federal administrative data as the Department of Government Efficiency (DOGE) seeks, and has often been granted, access to very sensitive information about individuals. Since then, 14 lawsuits have been filed that allege violations of six privacy […]

Six weeks ago, we shared CDT’s initial analysis of two lawsuits that alleged violations of long-standing privacy protections of federal administrative data as the Department of Government Efficiency (DOGE) seeks, and has often been granted, access to very sensitive information about individuals. Since then, 14 lawsuits have been filed that allege violations of six privacy statutes across eight federal agencies.

Given the rapidly evolving situation, we worked alongside The Leadership Conference’s Center for Civil Rights and Technology to create a fact sheet that analyzes some of the core issues related to DOGE’s efforts to access and use sensitive information held by federal agencies. This new resource, which we will continue to update as there are new developments, details:

  • The federal privacy protections in play,
  • Reported DOGE security incidents,
  • Examples of the types of sensitive data potentially accessed, and
  • The impacts of DOGE’s reported use of AI on government data.

Greater understanding of and visibility into what DOGE is doing is the first step toward ensuring that sensitive data provided to the federal government by tax filers, student loan borrowers, Social Security recipients, and other individuals is not only accessed legally, but also safeguarded and used responsibly.

Read the fact sheet.
[Last updated March 17, 2025]

The post CDT and The Leadership Conference Release New Analysis of DOGE, Government Data, and Privacy Trends appeared first on Center for Democracy and Technology.

Two Lawsuits Raise Critical Questions About Whether Privacy Rights Have Been Violated by DOGE Members https://cdt.org/insights/two-lawsuits-raise-critical-questions-about-whether-privacy-rights-have-been-violated-by-doge-members/ Tue, 04 Feb 2025 22:11:56 +0000 https://cdt.org/?post_type=insight&p=107294 In the last 24 hours, two lawsuits have been filed that raise serious concerns about whether the Trump Administration is violating critical, long-standing legal privacy protections as members of Elon Musk’s Department of Government Efficiency (DOGE) team gain unprecedented access to government systems. On Monday, the Alliance for Retired Americans, the American Federation of Government […]

In the last 24 hours, two lawsuits have been filed that raise serious concerns about whether the Trump Administration is violating critical, long-standing legal privacy protections as members of Elon Musk’s Department of Government Efficiency (DOGE) team gain unprecedented access to government systems.

On Monday, the Alliance for Retired Americans, the American Federation of Government Employees, and the Service Employees International Union, represented by Public Citizen and the State Democracy Defenders Fund, sued the Department of the Treasury for granting DOGE staff access to highly sensitive information about taxpayers and others who send payments to and receive payments from the government, alleging violations of privacy protections enacted in the 1970s as well as of the Internal Revenue Code. These protections exist to ensure that sensitive information held by the Treasury Department, like Social Security numbers and pensions, is treated with the utmost care given the dire consequences of its misuse. The suit implicates a core tenet of privacy regulation: data collected for a specific authorized purpose shouldn’t be shared or used for a different purpose without knowing consent.

On Tuesday, two employees of the Office of Personnel Management sued their employer for failing to meet its legal obligation under the E-Government Act of 2002 to evaluate privacy concerns before using new email procedures to collect confidential information from every civilian federal employee, risking exposure of the personally identifiable information of over two million government employees. Playing fast and loose with privacy in this instance could, for example, open federal agencies to increased risk of phishing attacks or expose the personal information of federal employees to unauthorized parties. If a data leak were to result, it would be virtually impossible to undo the damage.

While the particulars of the cases differ, together they underscore that the government’s efforts to improve efficiency cannot supersede long-standing legal obligations to protect people’s privacy — not just for government employees but for everyone whose lives the government touches (which is to say, everyone). 

Both complaints also cite a common challenge in evaluating whether federal agencies are acting in accordance with their long-standing privacy requirements — lack of transparency about who is accessing information and for what reasons. It is crucial that the judicial branch act quickly on these lawsuits to potentially curb illegal activities; at the same time, Congress should exercise its oversight authority to investigate whether existing privacy protections are being actively violated in the name of government efficiency. Government transparency is a core, bipartisan tenet of effective, efficient, and democratic governance and is needed more than ever as sweeping changes to federal agencies and programs are pursued.

These lawsuits raise serious specific concerns, but they also point to even bigger potential threats. The government — whether through schools, health agencies, social insurance programs, prisons or other entities — holds a tremendous amount of sensitive information. And it has an obligation to treat that information with extreme care, especially since individuals are often required to share such information with government agencies in the first place. Fortunately, Congress and federal agencies have long recognized this important responsibility, and, as such, have existing legal obligations. The task ahead is to ensure that federal agencies continue to meet these legal obligations, which were designed to protect the privacy of the American people. Reckless behavior has real, potentially catastrophic, consequences for individuals and for communities alike.

Digital safeguards exist for important reasons – as do laws that create them. When government officials, including the President, attempt to override them, it’s critical that civil society stand firm against those efforts.

The post Two Lawsuits Raise Critical Questions About Whether Privacy Rights Have Been Violated by DOGE Members appeared first on Center for Democracy and Technology.

State Government Use of AI: The Opportunities of Executive Action in 2025 https://cdt.org/insights/state-government-use-of-ai-the-opportunities-of-executive-action-in-2025/ Fri, 10 Jan 2025 19:37:29 +0000 https://cdt.org/?post_type=insight&p=106915 Following the release of ChatGPT in 2022 when artificial intelligence (AI) – and generative AI more specifically – captivated the public consciousness, state legislatures and governors across the country moved to regulate its use in government in the absence of Congressional action. Efforts to regulate state government use of AI have primarily taken the form […]

Following the release of ChatGPT in 2022, when artificial intelligence (AI) – and generative AI more specifically – captivated the public consciousness, state legislatures and governors across the country moved to regulate its use in government in the absence of Congressional action. Efforts to regulate state government use of AI have primarily taken the form of public sector-specific legislation (which CDT analyzes here) and executive orders (EOs).

So far, thirteen states (Alabama, California, Maryland, Massachusetts, Mississippi, New Jersey, Oklahoma, Oregon, Pennsylvania, Rhode Island, Virginia, Washington, Wisconsin) and D.C. have issued EOs that primarily address whether and how AI is or should be used in state government. Analysis of these EOs reveals four main trends:

  1. States do not have a consistent definition of AI.
  2. Current state EOs acknowledge the potential harms of AI in the delivery of public services.
  3. The majority of these EOs suggest pilot projects as a starting point for government agencies.
  4. States are prioritizing AI governance and planning prior to implementation.

Digging Into the Trends of State AI EOs

Lack a consistent definition of AI

State EOs vary in their focus — many address only generative AI rather than AI more broadly. But regardless of focus, states largely utilize their own definitions of AI, independent of the federal government. Maryland, Massachusetts, and Mississippi are the only states with EOs that draw from an established federal definition of AI, using text directly from the National Artificial Intelligence Initiative (NAII) Act of 2020.

Acknowledge the potential harms of AI to individuals

The majority of state EOs recognize that, although AI holds promise to deliver public services more efficiently, these systems pose risks to individuals’ privacy, security, and civil rights given the highly sensitive nature of the data they are trained on and the high-stakes decisions they affect. To this end, many state EOs include language about using AI to deliver services or benefits more efficiently, but responsibly. For example, California’s EO states that the state “seeks to realize the potential benefits of [generative AI] for the good of all California residents, through the development and deployment of [generative AI] tools that improve the equitable and timely delivery of services, while balancing the benefits and risks of these new technologies.” 

Almost all state EOs also incorporate concepts associated with protecting individuals’ civil rights, but only three states explicitly name civil rights as a priority — Washington, Oregon, and Maryland. Maryland’s EO sets out principles that must guide state agencies’ use of AI, including “fairness and equity,” and states that “the State’s use of AI must take into account the fact that AI systems can perpetuate harmful biases, and take steps to mitigate those risks, in order to avoid discrimination or disparate impact to individuals or communities based on their [legally protected characteristics].”

Suggest AI pilot projects as a starting point for agencies

Another major element seen across most state EOs is the encouragement of pilot projects to test how AI can best serve state government. In many cases, however, EOs don’t explicitly identify desired outcomes — Alabama and California are the only states to specify what the goals of agencies’ pilot projects should be: projects should show how generative AI can improve citizens’ experience with and access to government services and support state employees in the performance of their duties.

Prioritize AI governance and strategy

Finally, many of the state EOs create task forces to understand the current uses of AI in state government, effectively establishing a centralized body to guide each state’s approach to AI exploration and implementation. EOs that establish task forces define who should be included, but each state varies in its approach. For example, individuals in senior roles across agencies, like the Chief Technology Officer or the Secretary of Labor, make up the bulk of task force members in Maryland, New Jersey, and Pennsylvania, while the remaining states leave it to the Governor or members of the State House to appoint the majority of their task forces.

The goal of these AI task forces generally is to provide recommendations for how agencies should proceed in a number of areas related to AI implementation. One example is Pennsylvania’s EO, which tasks their AI Governing Board to make recommendations when agencies request to use a generative AI tool “based upon a review process that evaluates the technology’s bias and security, and whether the agency’s requested use of generative AI adheres to the values set forward” in the EO. Another example is New Jersey’s EO, which mandates that the task force study emerging AI tools and give “recommendations to identify government actions appropriate to encourage the ethical and responsible use of AI technologies” in a final, publicly available report to the Governor.

Promising Examples of State AI EOs

While each EO has strengths and weaknesses, a few stand out for their scope, specificity, and focus on protecting individuals from AI harms:

Washington EO 24-01

Five primary aspects of Washington’s EO stand out:

  • First, it defines a “high-risk” generative AI system to give agencies a common understanding of what use cases may most acutely impact the privacy and civil rights of individuals. 
  • Second, Washington’s EO uses federal guidance as a starting point — it directs that guidelines for public sector use, procurement, and ongoing monitoring draw from the Biden Administration’s AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. It also mandates that vendors who are providing AI systems for high-risk uses certify that they have implemented a governance program consistent with NIST’s framework. 
  • Third, the EO prioritizes protecting marginalized communities, who may be most impacted by AI harms in the delivery of public services. Washington’s AI governance body (Consolidated Technology Services (WaTech)) is required to develop and publicly release guidelines for agencies to analyze the impact that generative AI may have on vulnerable communities, and the Office of Equity is assigned to develop and implement a framework for accountability on this topic. 
  • Fourth, Washington’s EO recognizes the power of government AI procurement — the Department of Enterprise Services is required to update the state’s procurement and contract templates to meet the generative AI moment. 
  • And finally, the EO requires WaTech to produce guidance on risk assessments for the deployment of high-risk generative AI systems, including evaluations of a system’s fitness for its intended purpose and the likelihood of discriminatory outcomes.

California EO N-12-23

Four details of California’s EO make it a particularly strong example:

  • First, the EO specifically directs government agencies to ensure ethical outcomes for marginalized communities when using AI. The Government Operations Agency, the California Department of Technology, and the Office of Data and Innovation, in partnership with other state agencies, are required to develop guidelines for how agencies should analyze the impact of generative AI on marginalized communities, “including criteria to evaluate equitable outcomes in deployment and implementation of high-risk use cases.” 
  • Second, California’s EO recognizes the importance of AI procurement, requiring an update to existing procurement terms, which has already been released. 
  • Third, the EO takes positive steps towards transparency and documentation by requiring state agencies to create and share inventories of all high-risk uses.
  • And finally, California’s EO uniquely requires the Government Operations Agency, the California Department of Human Resources, and the Labor and Workforce Development Agency, in partnership with state government employees or organizations that represent them, to develop criteria for measuring the impact of generative AI on the government workforce. Essentially, agencies must provide evidence that acquiring new generative AI tools will add value to operations and the delivery of public services.

Pennsylvania EO 2023-19

Pennsylvania’s EO stands out for four primary reasons:

  • First, it uniquely states that the development of generative AI policies should not “overly burden end users or agencies,” meaning that guardrails put on the use of generative AI must be reasonable and not detract from the goal of responsibly and more efficiently delivering public services. 
  • Second, Pennsylvania’s EO prioritizes transparency by requiring state agencies to publicly disclose when generative AI is used in a service and if bias testing on the tool has been completed. 
  • Third, as in Washington and California, the EO recognizes the importance of procurement by obligating the AI Governing Board to work with the Office of Administration to develop procurement recommendations for generative AI products. 
  • Lastly, Pennsylvania’s EO specifically identifies community engagement as a vital tool for feedback on the government’s use of generative AI.

What Governors Can Do To Advance Responsible AI Use in the Public Sector

Based on our analysis of current state AI EOs, Governors should incorporate several priorities into their actions on this issue:

  • Align definitions of AI with cross-state government bodies/agencies: Developing consistent definitions of AI across government allows clarity and common understanding of what tools or systems are subject to the guidelines set forth by an EO.
  • Define clear priorities and goals for the adoption and use of AI within state government: Providing a uniform vision for agencies in their exploration and implementation of AI ensures that these tools are deployed with clear objectives that align with constituent needs from the outset. These priorities should align with existing state programs, laws, and regulations.
  • Include robust risk management practices: Use of AI in the delivery of public services carries significant risks due to the sensitive data used and the consequences of potential errors. State agencies should be required to implement appropriate risk management measures, such as pre- and post-deployment monitoring.
  • Promote transparency and disclosure by requiring AI inventories: To build trust with constituents and ensure adequate internal and external visibility into the scope of government AI use, EOs should require annual, publicly available inventories of how state agencies are using AI regardless of the use cases’ risk level.
  • Ensure pilot projects have clear goals and appropriate safeguards: If pilot projects are part of the broader AI strategy, state agencies should have a clear understanding of the desired outcomes and necessary safeguards, with requirements such as not inputting sensitive data and implementing periodic monitoring to discern whether the system is working as intended.
  • Ensure task forces contain senior-level and cross-agency members: Individuals in senior technology, privacy, accessibility, and civil rights positions (such as Chief Data Officers, Chief Privacy Officers, Chief Accessibility Officers, and Attorneys General) have the necessary expertise to provide input. Having these senior individuals on a task force can help ensure that decisions and recommendations made by the task force are appropriately incorporated by agencies. Including representatives from agencies across government ensures that different perspectives and voices are heard in the important process of AI governance and planning.
  • Incorporate community engagement requirements: Hearing directly from experts and impacted groups strengthens public trust and ensures that government use of AI is directly responsive to the needs and concerns of the people they serve.

With the rapid evolution of AI and the frenzied push for governments to adopt AI systems, EOs are an important lever for governors to establish responsible practices across agencies. In 2025, governors have an unprecedented window of opportunity to determine whether and how AI is integrated in the public sector in ways that protect individuals and their families.

The post State Government Use of AI: The Opportunities of Executive Action in 2025 appeared first on Center for Democracy and Technology.

Regulating Public Sector AI: Emerging Trends in State Legislation https://cdt.org/insights/regulating-public-sector-ai-emerging-trends-in-state-legislation/ Fri, 10 Jan 2025 19:34:58 +0000 https://cdt.org/?post_type=insight&p=106912 In light of Congressional inaction and the increasing use of artificial intelligence (AI) by public agencies, states have an important role to play in ensuring that government uses of AI are effective, safe, responsible, and rights-protecting. AI offers potential benefits such as improved customer service and increased efficiency, and legislation is often designed to promote […]

In light of Congressional inaction and the increasing use of artificial intelligence (AI) by public agencies, states have an important role to play in ensuring that government uses of AI are effective, safe, responsible, and rights-protecting. AI offers potential benefits such as improved customer service and increased efficiency, and legislation is often designed to promote such uses in a manner that is trustworthy and transparent. Many state lawmakers have already acknowledged this as an important subject of legislation, and governors have taken on the issue through executive action.

During the 2024 state legislative session alone, state legislatures introduced over 40 bills specifically focused on public sector uses of AI, 12 of which were passed into law (California’s SB 896, Delaware’s HB 333, Florida’s SB 1680, Indiana’s SB 150, Maryland’s SB 818, New Hampshire’s HB 1688, New York’s SB 7543, Pennsylvania’s HR 170, Tennessee’s HB 2325, Virginia’s SB 487, Washington’s SB 5838, and West Virginia’s HB 5690). This trend builds on legislation passed during the 2023 legislative session, which included several state bills that require public agencies to inventory their uses of AI systems (e.g., California’s AB 302 and Connecticut’s SB 1103). To date, at least 16 states, including Maryland, Vermont, and Connecticut, have passed legislation that specifically addresses the use of AI by government agencies. This number reflects only legislation focused solely on government agencies’ use of AI and does not include sector-specific bills, automated decision-making bills, or comprehensive AI governance bills, so the total number of laws regulating public sector uses of AI is likely larger. Moreover, many private sector bills that ostensibly exempt public agencies often indirectly address government uses of AI by imposing requirements on private companies that provide services to government agencies.

Legislative proposals on government uses of AI generally aim to promote the transparent, responsible, and safe use of these tools. Some proposals from 2024 include strong, substantive guardrails on the use of AI in the public sector. For instance, some bills require agencies to implement risk management practices for high risk uses of AI, appoint Chief AI officers to oversee the use and management of AI in government, and publicly document and disclose how they are using AI. 

However, the vast majority of public sector AI bills introduced in 2024 do not impose binding requirements on government agencies. Instead, these bills simply require reporting and the creation of pilot projects to study the use of AI in state government or the establishment of task forces to issue recommendations on the issue.

Analysis of public sector-specific AI legislation from 2024 reveals several common themes across states, as well as key areas for improvement that should inform state lawmakers’ efforts going into the 2025 legislative session. 

Trends in Public Sector AI Legislation

Among the 43 public sector AI legislative proposals from the 2024 session, six themes emerge. These bills would require public agencies to:

  • Create task forces and studies
  • Implement risk management practices
  • Publish AI inventories
  • Impose new procurement requirements
  • Establish pilot programs
  • Hire or appoint chief AI officers

Create Task Forces and Studies

Twenty-one proposed bills would establish task forces or commissions to study or oversee the use of AI within the state and issue recommendations on potential safeguards. This is by far the largest category of state-level public sector AI legislation.

The roles and responsibilities of these task forces, however, vary significantly between proposals. Some bills, like New York’s SB 8755, would confer a significant degree of oversight and regulatory authority to an AI task force, including the responsibility to assess and report on all public sector uses of AI within the state. Other bills, like Virginia’s SB 487, would afford much less power to task forces, making these bodies solely advisory in nature. There is also significant variability between the required representatives who would be appointed to state task forces, with some reserving roles for community members, academics, and civil society, and others largely excluding these constituents.

In addition, five proposed bills would initiate studies to examine the role of AI within states. The studies commissioned under these proposals would differ in focus between assessing current uses of AI within state government (e.g., New Jersey’s AB 4399) and considering the potential risks and benefits of the technology more broadly (e.g., California’s SB 398). Some of these studies would be broadly focused on all types of AI, while others would be more narrowly focused on generative AI (e.g., California’s SB 896).

Implement Risk Management Practices

Fifteen proposed bills would require state agencies to implement risk management practices when using AI. This is the second largest category of state-level public sector AI legislation. While it is an encouraging sign that risk management practices occupy such a significant area of focus among state policymakers, only three of these proposals passed in 2024: Maryland’s SB 818, New Hampshire’s HB 1688, and New York’s SB 7543.

The nature and scope of these practices differ from proposal to proposal but generally share similar core requirements like impact assessments and public notice obligations. Some proposals would impose holistic risk management requirements, similar to those established by OMB’s guidance on federal agencies’ use of AI, including impact assessments, human oversight, notice and appeal, and ongoing monitoring (e.g., Alaska’s SB 177 and Maryland’s SB 818). Within this group of proposals, some would directly specify agency obligations, while others, like Illinois’ HB 4836, would direct agencies to comply with existing federal frameworks such as NIST’s AI Risk Management Framework.

Some proposals, however, are more narrowly focused. For example, California’s SB 896 would specifically impose transparency requirements for state agencies that use generative AI, and Kentucky’s HB 734 would prohibit agencies from solely relying on AI to identify fraud or discontinue benefits.

Publish AI Inventories

Twelve proposed bills would require state agencies to create inventories of AI use cases. These proposed requirements, however, vary in important areas such as scope, frequency, and detail. For example, some proposals, such as Hawaii’s HB 2152, would only require public agencies to inventory high risk use cases, while others, such as Indiana’s SB 150, would require public agencies to inventory all use cases regardless of risk. In addition, the amount of detail required as part of these inventories differs significantly between proposals, with some requiring agencies to document their testing and mitigation practices and others only imposing minimal reporting requirements about the tools that agencies use. Some proposals would require public agencies to update these inventories annually (e.g., Illinois’ HB 4705), while others would require these to be updated less frequently (e.g., biennially, as required under Alaska’s SB 177) and others would have no specified requirements for public agencies to update these at all (e.g., Idaho’s HB 568). Importantly, only a subset of these legislative proposals would require AI inventories to be made publicly available.
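
To illustrate how much these inventory requirements can differ in detail, below is a minimal sketch of a hypothetical inventory record of the kind a more detailed proposal might call for. The field names and example values are assumptions for illustration only and are not drawn from any specific bill.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical inventory record; every field name here is illustrative, not statutory.
@dataclass
class AIUseCaseRecord:
    agency: str
    use_case: str
    risk_level: str                        # some bills cover only high-risk uses, others all uses
    vendor: str = ""
    testing_summary: str = ""              # more detailed bills ask for testing and mitigation practices
    mitigations: list[str] = field(default_factory=list)
    last_updated: str = ""                 # update frequency varies: annual, biennial, or unspecified
    publicly_posted: bool = False          # only a subset of proposals require public release

record = AIUseCaseRecord(
    agency="Department of Example Services",
    use_case="Automated triage of benefits applications",
    risk_level="high",
    vendor="ExampleVendor Inc.",
    testing_summary="Pre-deployment accuracy and disparate-impact testing",
    mitigations=["human review of all denials", "quarterly performance monitoring"],
    last_updated="2025-01-01",
    publicly_posted=True,
)

print(json.dumps(asdict(record), indent=2))
```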

Impose New Procurement Requirements

Seven proposed bills would establish specific requirements for AI systems procured by state agencies. Some of these bills are solely focused on procurement, while others include procurement requirements within a broader set of obligations for public sector AI. Some of these proposals would establish affirmative obligations for AI systems procured by state agencies (e.g., New York’s AB 5309). Others, like California’s SB 892, would create a process for the state to develop and adopt procurement requirements through consultation with the public, experts, state and local employees, and other stakeholders. Still other proposals, like Illinois’ HB 5228, are more narrowly focused: they would require vendors to disclose when AI is used to fulfill a government contract but would not impose any other obligations.

Establish Pilot Programs

Three proposed bills — Hawaii’s HB 2152, Hawaii’s HB 2245, and Maryland’s SB 818 — would establish pilot programs for state agencies to test and evaluate the use of AI in government benefits and services. In general, these programs are designed to identify potential AI use cases within state government and test these at a smaller scale to assess performance and feasibility. Only one of these proposals, Hawaii’s HB 2245, would require state agencies to retrospectively examine and report on any findings or recommendations arising from such pilots.

Hire or Appoint Chief AI Officers

Three proposed bills — Illinois’ HB 4705, New Jersey’s SB 1438, and New York’s AB 10231 — would establish chief AI officer positions within state government. Some of these proposals would designate one overarching CAIO position (New York’s AB 10231), while others would require state agencies to individually appoint their own CAIOs instead (Illinois’ HB 4705). Several of these proposals (New Jersey’s SB 1438 and New York’s AB 10231) would include detailed competencies and responsibilities for CAIOs, seeking to ensure that the individuals appointed to these positions would have sufficient experience and authority to successfully carry out their duties.

Strong Examples

Lawmakers should consider several promising examples from the 2024 session as potential resources for their own state. Of all the bills that passed in 2024, Maryland’s SB 818 and New York’s SB 7543 are clear standouts. Maryland’s bill imposes strong guardrails on state agencies’ uses of AI, requiring agencies to conduct impact assessments and publicly report about any high risk AI systems. Similarly, the New York bill requires all state agencies to conduct impact assessments prior to the deployment of any automated decision-making system and prohibits the use of such systems in public benefits without human oversight. Maryland’s bill also establishes the Governor’s Artificial Intelligence Subcabinet, which is responsible for developing policies and procedures for state agencies to conduct ongoing monitoring of AI systems.

There were also several promising examples of bills that didn’t pass, including California’s SB 892, which proposes AI procurement standards; Illinois’ HB 5228, which would require every state agency to implement NIST’s AI Risk Management Framework; and Washington’s SB 5356, which would require the State Chief Information Officer to issue guidance on the development, procurement, and use of AI by public agencies.

Recommendations for State Lawmakers

As state lawmakers look to develop legislation to regulate the use of AI in the public sector during the 2025 session, several key considerations should form the basis of such proposals:

  • Ensure robust transparency through AI inventories that are conducted annually, released publicly, and required for all AI systems regardless of risk level;
  • Implement substantive, robust guardrails on high risk uses by requiring risk management practices for any system used or procured by an agency, including pre- and post-deployment assessments and independent oversight;
  • Establish AI governance processes by requiring every agency to implement AI governance and oversight practices and providing sufficient funding and resources for agencies to do so;
  • Prioritize meaningful public engagement by requiring agencies to consult the public before deploying high risk AI systems and including substantive representation from civil society, academia, and impacted communities in state-wide task forces;
  • Avoid unintended consequences by ensuring that prohibitions are narrowly tailored — for instance, prohibiting the use of AI without human oversight for benefits determinations as opposed to a blanket prohibition on any AI use related to public service delivery — so that routine service delivery is not impeded.

Conclusion

As state lawmakers return to state capitols across the country for the 2025 legislative session, AI is poised to be a significant area of focus. Public sector uses of this technology should remain a top priority for lawmakers as state and local governments increasingly use these tools to deliver critical services to individuals and their families. Crafting legislation that creates real protections for people and is specifically tailored to the unique needs of the public sector is more important now than ever.

The post Regulating Public Sector AI: Emerging Trends in State Legislation appeared first on Center for Democracy and Technology.

Analysis of Federal Agencies’ Plans to Comply with Recent AI Risk Management Guidance: Inconsistencies with AI Governance May Leave Harms Unaddressed https://cdt.org/insights/analysis-of-federal-agencies-plans-to-comply-with-recent-ai-risk-management-guidance-inconsistencies-with-ai-governance-may-leave-harms-unaddressed/ Mon, 09 Dec 2024 14:36:48 +0000 https://cdt.org/?post_type=insight&p=106683 Federal agencies are just one week away from their December 16th deadline to publish updated AI use case inventories detailing their implementation of the required minimum risk management practices established by the Office of Management and Budget’s (OMB) memorandum on agency use of AI (M-24-10), titled Advancing Governance, Innovation, and Risk Management for Agency use […]

Federal agencies are just one week away from their December 16th deadline to publish updated AI use case inventories detailing their implementation of the required minimum risk management practices established by the Office of Management and Budget’s (OMB) memorandum on agency use of AI (M-24-10), titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. These practices matter because they ensure that agencies take appropriate steps throughout the planning, acquisition, development, and use of any AI system to identify and address potential harms as they grapple with whether and how to use this technology.

Agencies’ recently published plans for complying with M-24-10 lay the groundwork for agencies not only to meet this deadline but also to advance robust and transparent AI governance practices across the whole of the federal government. CDT read all 38 publicly available compliance plans so you don’t have to. In reading them, we found several inconsistencies related to AI governance that could lead agencies to under-identify or fail to address potential harms resulting from AI use. At the same time, several federal agencies are adopting innovative approaches to AI governance that warrant closer consideration as potential models for other agencies. 

While some of these compliance plans incorporate encouraging practices, there is still significant room for improvement. Agencies should now take steps to build on these initial plans and establish the internal infrastructure to sustain this work going forward.

Federal agencies have inconsistent approaches to AI governance

M-24-10 charges each agency Chief AI Officer with “instituting the requisite governance and oversight process to achieve compliance with this memorandum and enable responsible use of AI in the agency.” 

Agency compliance plans provide important insight into the overarching process that agencies are putting in place to govern their use of AI. Each compliance plan outlines how agencies will 1) solicit and collect information about AI use cases across the agency, 2) review all use cases for their impact on the rights and safety of the public, and 3) certify that each use case is in compliance with M-24-10. Indeed, an agency’s ability to manage the risks of AI will only be as strong as its governance and oversight practices. 

However, the plans we reviewed reveal that agencies are adopting inconsistent approaches to fulfilling these obligations. This could be a result of the varied maturity levels between agencies’ AI governance programs, differing strategies for integrating these new obligations within existing agency operations, and significant differences in the level of rigor that agencies applied to fulfilling their M-24-10 requirements. AI governance plans vary widely in whether they:

  • Create multi-phase, multidisciplinary AI governance processes; 
  • Establish new AI governance protocols; 
  • Review agency use cases beyond what is required in M-24-10; and
  • Address civil rights and privacy explicitly.

Create multi-phase, multidisciplinary AI governance processes

Robust and accountable AI governance requires that multiple levels of review and expertise are engaged throughout the planning, acquisition, development, and use of an AI system. Some agencies have already taken steps towards achieving this goal by creating clear multi-phase, multidisciplinary processes for the review and certification of agency AI uses. 

  • The Department of Housing and Urban Development (page 10) adopted “review gates,” which establish clear approval stages throughout the deployment process that must be cleared before an AI use case can move forward;
  • The Department of Labor (page 11) established a “Use Case Impact Assessment Framework” that guides all risk determinations and requires that all such determinations be reviewed by policy offices throughout the agency, including the Civil Rights Center, Office of Disability Employment Policy, the Privacy Office, and several others; 
  • The Department of Veterans Affairs requires that the agency’s AI Governance Council approve the agency’s complete annual AI inventory and use case determinations, following review by both the Chief AI Officer and agency subcomponents; and 
  • The Department of State (page 11) requires that any significant changes to an AI use case be independently reviewed and evaluated by agency governance bodies. 

Together, these approaches help ensure that AI governance procedures are uniformly implemented across an agency and that AI systems are subject to multiple phases of review that involve different groups of experts within an agency.
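
One way to picture this kind of multi-phase, multidisciplinary review is as a sequence of gates that a use case must clear before deployment. The sketch below is a simplified illustration; the gate names and required reviewers are assumptions and do not reproduce any particular agency’s process.

```python
# Illustrative review-gate sequence; gate names and required reviewers are assumptions,
# not a reproduction of any agency's actual governance process.
REVIEW_GATES = [
    ("intake", {"program office"}),
    ("risk determination", {"chief AI officer", "privacy office", "civil rights office"}),
    ("pre-deployment testing", {"technical team", "chief AI officer"}),
    ("deployment approval", {"AI governance board"}),
]

def cleared_all_gates(signoffs: dict[str, set[str]]) -> bool:
    """A use case advances only if every gate has sign-off from every required reviewer."""
    for gate, required in REVIEW_GATES:
        if not required.issubset(signoffs.get(gate, set())):
            print(f"Blocked at gate: {gate}")
            return False
    return True

# Usage: this hypothetical use case is missing civil rights review at the risk gate.
signoffs = {
    "intake": {"program office"},
    "risk determination": {"chief AI officer", "privacy office"},
}
print(cleared_all_gates(signoffs))  # prints "Blocked at gate: risk determination", then False
```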

But many other agencies are implementing these practices on a more ad hoc basis. Most concerningly, some agencies do not have any specific cross-agency review process for making important decisions like assessing AI use cases and determining which ones impact rights or safety. Instead, these agencies leave this oversight authority solely to the Chief AI Officer, as opposed to the more common practice of directly involving an agency’s AI Governance Board. For example, the Chief AI Officers at the Departments of Agriculture (page 15) and Defense (pages 3-4) are given sole responsibility for managing the documentation and validation of agency risk management practices. 

Without additional agency-wide coordination and oversight, these deficiencies could result in significant gaps or errors in agencies’ AI governance practices, leaving the public vulnerable to potential AI harms.

Establish new AI governance protocols

Some agencies are embedding standardized decision-making processes for the review and approval of AI systems into existing governance processes. For instance, agencies like the Department of Interior (page 10), Department of Veterans Affairs, Federal Trade Commission (page 2), and Office of Personnel Management (page 12) are integrating their AI governance work into existing agency-wide risk management and information technology review programs. 

Other agencies — including the Department of Health and Human Services (page 4), Department of Labor (page 11) and General Services Administration — are taking a different approach by creating new systems to support the intake and review of AI uses. To support this process, these agencies are creating standard operating procedures, questionnaires, and other standardized documentation to aid their subcomponents in fulfilling their AI governance obligations. 

Both approaches offer benefits and drawbacks. Creating new systems allows agencies to establish processes specifically tailored for AI technologies, whereas adapting existing processes enables agencies to leverage already available resources. We look forward to learning more as agencies continue to fulfill the requirements in M-24-10.

Review agency use cases beyond the requirements of M-24-10

Many agencies are also implementing oversight mechanisms to create routine review processes that supplement their required annual review of all AI use cases. For example, a subset of agencies have created procedures for the consistent, semi-annual review of all AI use cases, including the Department of Agriculture (page 4), Department of Labor (page 5), Nuclear Regulatory Commission, U.S. International Development Finance Corporation (page 17), and Department of Treasury (page 3). Other agencies — like the Department of Homeland Security (page 6), Department of Transportation (page 6), Department of Treasury (page 4), Social Security Administration (page 3), and Department of Interior (page 4) — are standing up specific processes to audit and re-review any use cases excluded from the required risk management practices or non-public national security-related use cases for compliance with M-24-10.

Most agencies, however, have only committed to reviewing department use cases on an annual basis, which may be insufficient to keep up with the rate of AI adoption in government, and many also do not have specific protocols for auditing excluded or non-public use cases. 

Address civil rights and privacy explicitly

A cornerstone of M-24-10 is its focus on protecting the rights and safety of the public. As such, it directs every Chief AI Officer to coordinate “with officials responsible for privacy and civil rights and civil liberties on identifying safety-impacting and rights-impacting AI within the agency.” The extent of such engagement, however, varies widely between agencies.

Promisingly, the vast majority of agencies have appointed senior civil rights, civil liberties, human rights, and privacy officials to their AI Governance Boards, which every agency is required to establish under M-24-10 to oversee both risk management and innovation. But this does not go far enough. Agencies need to take additional steps to embed these officials into the decision-making process for ensuring that AI systems are in compliance with M-24-10. For instance, several agencies — including the Department of Energy (page 1), Department of Labor (page 5), Department of Transportation (page 3), General Services Administration, and the Office of Personnel Management (pages 4-5) — have already done this by creating separate, dedicated working groups with civil and human rights, privacy, and sociotechnical expertise that are charged with the review and oversight of rights- and safety-impacting use cases. This structure ensures that civil rights and privacy experts have a dedicated seat at the table and are able to provide direct input about agencies’ highest risk use cases. 

While this is a positive development, a majority of agencies have no civil rights or privacy officials in substantive decision-making roles within their broader AI governance process. Instead, many agencies only have representation from such officials in a purely advisory capacity through their AI Governance Boards. 

To address this shortcoming, agencies should prioritize integrating senior civil rights and privacy officials into their decision-making processes, especially for any rights- or safety-impacting use cases. Agencies should also consider opportunities to upskill their offices of civil rights and to engage with external civil rights and privacy advocates. 

Promising practices that others should consider

Agencies’ compliance plans also reveal a range of emerging and innovative practices that show promise as potential tools to increase the effectiveness of agencies’ AI governance. These include the following examples:

  • Partnering with academia and other experts: The Department of Labor (page 3) partnered with Stanford University to develop the agency’s internal guidance on M-24-10 compliance, which includes agency-specific risk standards.
  • Creating an independent review process: The Department of Labor (page 12) established a third-party review process for any use cases where the agency’s Chief AI Officer was directly involved in development of the AI system, to ensure the independence and accuracy of all use case evaluations. 
  • Centralizing permissions for staff to use AI: Several agencies, such as the Department of Treasury (pages 7-8) and the Social Security Administration (page 5), established dedicated processes to prevent the unauthorized use of online-based AI systems and to remove any unapproved systems from Department networks (a minimal illustrative sketch follows this list).
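
A minimal sketch of what such centralized permissioning could look like in practice: an allow-list check that flags AI tool domains that have not been centrally approved. The tool names, domains, and approval list are invented for illustration; the compliance plans do not describe the agencies’ mechanisms at this level of detail.

```python
# Hypothetical allow-list check for online AI tools; the approved list and observed
# traffic below are invented for illustration only.
APPROVED_AI_TOOLS = {"approved-internal-assistant.example.gov"}

def flag_unapproved(observed_domains: list[str]) -> list[str]:
    """Return AI tool domains seen on agency networks that are not centrally approved."""
    return [d for d in observed_domains if d not in APPROVED_AI_TOOLS]

observed = ["approved-internal-assistant.example.gov", "unapproved-chatbot.example.com"]
print(flag_unapproved(observed))  # ['unapproved-chatbot.example.com']
```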

OMB should encourage agencies to continue experimenting with such approaches, and the CAIO Council should leverage its position as the central interagency forum on AI governance to facilitate sharing best practices and collaboration between agencies. As a starting point, other agencies should look to these practices as examples to inform and supplement their own AI governance work. 

Conclusion

Agencies’ M-24-10 compliance plans are a promising start, and reveal that many agencies are well underway with their work to complete their updated use case inventories by December 16th. Ultimately, however, the impact of these compliance plans will only be as strong as their implementation. As we head into the new year, it is critical for agencies to keep up the momentum and urgency around implementing these critical safeguards.

The post Analysis of Federal Agencies’ Plans to Comply with Recent AI Risk Management Guidance: Inconsistencies with AI Governance May Leave Harms Unaddressed appeared first on Center for Democracy and Technology.
