OMB’s Revised AI Memos Exemplify Bipartisan Consensus on AI Governance Ideals, But Serious Questions Remain About Implementation

On April 3, the Office of Management and Budget (OMB) released updated versions of its guidance to federal agencies on the use (M-25-21) and procurement (M-25-22) of AI. These memos were issued in response to statutory requirements in the AI in Government Act of 2020 and the Advancing American AI Act. The updated memos build on and streamline similar guidance on the use (M-24-10) and procurement (M-24-18) of AI first issued under the Biden Administration.

CDT has long advocated that OMB, in fulfilling this legislative requirement, adopt measures to advance responsible AI practices across the federal government's use and procurement of AI. Doing so will both protect people's rights and interests, and help ensure that government AI systems are effective and fit for purpose. The most recent OMB guidance retains many of the core AI governance measures that CDT has called for, ranging from heightened protections for high-risk use cases to centralized agency leadership. The updated guidance is especially important as the Trump Administration signals its interest in rapidly expanding the use of AI across federal agencies, including efforts by the Department of Government Efficiency (DOGE) to deploy AI tools to make a host of high-stakes decisions.

Encouragingly, the publication of this revised guidance confirms that there is bipartisan consensus around core best practices for ensuring the responsible use and development of AI by public agencies. But, while this updated guidance is promising on paper, there are significant unanswered questions about how it will be implemented in practice. The memos' overarching goals and obligations, which aim to advance responsible AI innovation through public trust and safety, appear to be in direct tension with the reported actions of DOGE and various federal agencies.

The true test of the strength and durability of this guidance will be in the efforts to implement and enforce these crucial safeguards over the coming months. In line with CDT’s ongoing advocacy, these memos provide agencies with a clear roadmap for mitigating the risks of AI systems and advancing public trust, through three avenues:

  • Intra- and Inter-Agency AI Governance
  • Risk Management Practices
  • Responsible AI Procurement

Intra- and Inter-Agency AI Governance

AI governance bodies and oversight practices facilitate the robust oversight of AI tools and the promotion of responsible innovation across the federal government. Critical AI governance practices — such as standardizing decision-making processes and appointing leaders specifically responsible for AI — enable agencies to fully assess the benefits and risks of a given system and implement appropriate safeguards across agency operations.

Significantly, OMB’s updated memos retain critical agency and government-wide AI governance structures that establish dedicated AI leadership and coordination functions aimed at supporting agencies’ safe and effective adoption of AI:

  • Agency chief AI officers: Each agency is required to retain or designate a Chief AI Officer (CAIO) responsible for managing the development, acquisition, use, and oversight of AI throughout the agency. These officials serve a critical role in coordinating with leaders across each agency and ensuring that agencies meet their transparency and risk management obligations.
  • Agency AI governance boards: Each agency is required to establish an interdisciplinary governance body — consisting of senior privacy, civil rights, civil liberties, procurement, and customer experience leaders, among others — tasked with developing and overseeing each agency’s AI policies. These governance boards help agencies ensure that a diverse range of internal stakeholders are involved throughout the AI policy development and implementation process, creating a structured forum for agency civil rights and privacy leaders to play a direct role in agency decision-making about AI.
  • Interagency chief AI officer council: OMB is required to convene an interagency council of CAIOs to support government-wide coordination on AI use and oversight. This council supports collaboration and information sharing across the government, allowing agencies to learn from one another's successes and failures.
  • Cross-functional procurement teams: Each agency is required to create a cross-functional team — including acquisition, cybersecurity, privacy, civil rights, and budgeting experts — to coordinate agency AI acquisitions. These teams help agencies to effectively identify and evaluate needed safeguards for each procurement and to successfully monitor the performance of acquired tools.  

Risk Management Practices

Not all AI use cases present the same risks to individuals and communities. For instance, an AI tool used to identify fraudulent benefits claims poses a significantly different set of risks than an AI tool used to categorize public comments submitted to an agency. It is therefore widely understood that certain high-risk uses warrant heightened scrutiny and care.

Acknowledging the need to proactively identify and mitigate potential risks, OMB's updated memos retain and streamline requirements for agencies to establish heightened risk management practices for systems used in high-risk settings. Building on a similar framework established under the previous OMB AI memos, the updated memos define a category of "high-impact AI" use cases for which agencies must implement minimum risk management practices. This categorization simplifies the approach taken in the previous versions of these memos, which established two separate categories of "safety-impacting" and "rights-impacting" AI systems, each subject to similar minimum risk management practices. The unified category significantly simplifies agencies' process for identifying high-risk systems by requiring only one determination instead of two.

In line with the earlier versions of these memos, the updated guidance requires agencies to establish the following heightened risk management practices for all “high-impact” use cases:

  • Pre-deployment testing and impact assessments: Agencies are required to conduct impact assessments and testing in real-world scenarios prior to deploying a tool. These processes help agencies proactively assess a system’s performance, identify potential impacts or harms, and develop risk mitigation strategies. 
  • Ongoing monitoring: Agencies are required to conduct periodic performance testing and oversight, allowing agencies to identify changes in a system’s use or function that may lead to harmful or unexpected outcomes.
  • Human training and oversight: Agencies are required to provide ongoing training about the use and risks of AI for agency personnel and to implement human oversight measures. These practices ensure that agency personnel have sufficient information to understand the impacts of the AI tools that they use and are empowered to intervene if harms occur. 
  • Remedy and appeal: Agencies are required to provide avenues for individuals to seek human review and appeal any AI-related adverse actions, ensuring that impacted individuals are able to seek redress for any negative outcomes that may result from the use of AI.
  • Public feedback: Agencies are required to seek public feedback about the development, use, and acquisition of AI systems, helping agencies make informed decisions about how AI can best serve the interests of the public.

While many of these core risk management requirements extend those set out under the previous OMB AI guidance, there are several notable differences in the updated OMB memos. First, the updated guidance allows pilot programs to be exempted from the minimum risk management practices, so long as a pilot is time-bound, limited in scope, and approved by the agency CAIO. Second, the updated guidance removes several previously required minimum risk management practices, including requirements for agencies to provide notice to individuals impacted by an AI tool and to maintain an option for individuals to opt out of AI-enabled decisions. Third, the updated guidance no longer includes previous requirements for rights-impacting tools to undergo separate assessments on equity and discrimination, although impact assessments still require agencies to evaluate how systems use information related to protected classes and to describe mitigation measures used to prevent unlawful discrimination. Finally, the updated guidance narrows the definition of systems that are presumed to be "high-impact," removing certain categories previously included in the definitions of "safety-impacting" and "rights-impacting" AI systems, such as AI systems used to maintain the integrity of elections and voting infrastructure and systems used to detect or measure human emotions.

Responsible AI Procurement

Many of the AI tools used by federal agencies are procured from, or developed with the support of, third-party vendors. Because of this, it is critical for agencies to establish additional measures for ensuring the efficacy, safety, and transparency of AI procurement. 

To meet this need, OMB's updated memos simplify and build on many of the responsible AI procurement practices put in place by the initial version of OMB's guidance. First, and most importantly, this updated guidance requires agencies to extend their minimum risk management practices to procured AI systems. Similar to OMB's previous requirements, agencies are directed to proactively identify whether a system that they are seeking to acquire is likely to be high-impact and to disclose such information in a solicitation. And, once an agency is in the process of acquiring a high-impact AI tool, it is obligated to include contract language that ensures compliance with all minimum risk management practices. These measures ensure that the same protections are put in place whether a high-impact AI tool is developed in-house or acquired from a vendor.

Moreover, the updated guidance outlines additional obligations that agencies have to establish for all procured AI systems. To ensure that agency contracts contain sufficient protections, agencies are directed to include contract terms that address the intellectual property rights and use of government data, data privacy, ongoing testing and monitoring, performance standards, and notice requirements to alert agencies prior to the integration of new AI features into a procured system. The updated guidance also has a heightened focus on promoting competition in the AI marketplace, requiring agencies to implement protections against vendor lock-in throughout the solicitation development, selection and award, and contract closeout phases. 

In tandem with these contractual obligations, agencies are required to monitor the ongoing performance of an AI system throughout the administration of a contract and to establish criteria for sunsetting the use of an AI system. One significant difference in OMB’s updated memos, however, is that these procurement obligations only apply to future contracts and renewals, whereas the prior version of OMB’s guidance extended a subset of these requirements to existing contracts for high-impact systems. 

Conclusion

As CDT highlighted when the first version of OMB's guidance was published a year ago, while this revised guidance is an important step forward, implementation will be the most critical part of this process. OMB and federal agencies have an opportunity to use this updated guidance to address inconsistencies and gaps in AI governance practices across agencies, standardizing and strengthening agencies' adherence to these requirements even as they expand their use of AI.

Ensuring adequate implementation of OMB’s memos is not only critical to promoting the effective use of taxpayer money, but is especially urgent given alarming reports about the opaque and potentially risky uses of AI at the hands of DOGE. The government has an obligation to lead by example by modeling what responsible AI innovation should look like in practice. These revised memos are a good start, but now it is time for federal agencies to walk the walk and not just talk the talk.