{"id":108821,"date":"2025-05-13T16:12:25","date_gmt":"2025-05-13T20:12:25","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&p=108821"},"modified":"2025-05-13T16:12:27","modified_gmt":"2025-05-13T20:12:27","slug":"ombs-revised-ai-memos-exemplify-bipartisan-consensus-on-ai-governance-ideals-but-serious-questions-remain-about-implementation","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/ombs-revised-ai-memos-exemplify-bipartisan-consensus-on-ai-governance-ideals-but-serious-questions-remain-about-implementation\/","title":{"rendered":"OMB\u2019s Revised AI Memos Exemplify Bipartisan Consensus on AI Governance Ideals, But Serious Questions Remain About Implementation"},"content":{"rendered":"\n
On April 3, the Office of Management and Budget (OMB) released updated versions of its guidance to federal agencies on the use<\/a> (M-25-21) and procurement<\/a> (M-25-22) of AI. These memos were issued in response to statutory requirements in the AI in Government Act of 2020<\/a> and the Advancing American AI Act<\/a>. The updated memos build on and streamline similar guidance on the use<\/a> (M-24-10) and procurement<\/a> (M-24-18) of AI first issued under the Biden Administration.<\/p>\n\n\n\n In fulfilling this legislative requirement, CDT has long advocated that OMB adopt measures to advance responsible AI practices across the federal government\u2019s use<\/a> and procurement<\/a> of AI. Doing so will both protect people\u2019s rights and interests and help ensure that government AI systems are effective and fit for purpose. The most recent OMB guidance retains many of the core AI governance measures that CDT has called for, ranging from heightened protections for high-risk use cases to centralized agency leadership. The updated guidance is especially important as the Trump Administration signals its interest in rapidly expanding<\/a> the use of AI across federal agencies, including efforts by the Department of Government Efficiency (DOGE) to deploy AI tools to make a host of high-stakes decisions<\/a>. <\/p>\n\n\n\n Encouragingly, the publication of this revised guidance confirms that there is bipartisan consensus around core best practices for ensuring the responsible use and development of AI by public agencies. But while this updated guidance is promising on paper, there are significant unanswered questions about how it will be implemented in practice. The overarching goals and obligations set out by these memos, aimed at advancing responsible AI innovation through public trust and safety, appear to be in direct tension with the reported actions of DOGE and various federal agencies. 
<\/p>\n\n\n\n The true test of the strength and durability of this guidance will lie in how these crucial safeguards are implemented and enforced over the coming months. In line with CDT\u2019s ongoing advocacy, these memos provide agencies with a clear roadmap for mitigating the risks of AI systems and advancing public trust through three avenues:<\/p>\n\n\n\n Intra- and Inter-Agency AI Governance<\/strong><\/p>\n\n\n\n AI governance bodies and oversight practices facilitate the robust oversight of AI tools and the promotion of responsible innovation across the federal government. Critical AI governance practices \u2014 such as standardizing decision-making processes and appointing leaders specifically responsible for AI \u2014 enable agencies to fully assess the benefits and risks of a given system and implement appropriate safeguards across agency operations.<\/p>\n\n\n\n Significantly, OMB\u2019s updated memos retain critical agency and government-wide AI governance structures that establish dedicated AI leadership and coordination functions aimed at supporting agencies\u2019 safe and effective adoption of AI:<\/p>\n\n\n\n Risk Management Practices<\/strong><\/p>\n\n\n\n Not all AI use cases present the same risks to individuals and communities. For instance, an AI tool used to identify fraudulent benefits claims poses a significantly different set of risks than an AI tool used to categorize public comments submitted to an agency. It is therefore widely understood that certain high-risk uses should be subjected to increased scrutiny and care. <\/p>\n\n\n\n Acknowledging the need to proactively identify and mitigate potential risks, OMB\u2019s updated memos retain and streamline requirements for agencies to establish heightened risk management practices for systems used in high-risk settings. 
Building on a similar framework established under the previous OMB AI memos, the updated OMB memos define a category of \u201chigh-impact AI\u201d use cases for which agencies must implement minimum risk management practices. This categorization of \u201chigh-impact AI\u201d simplifies the categories created under the previous versions of these memos, which set out separate definitions of \u201csafety-impact\u201d and \u201crights-impacting\u201d AI systems that were subject to similar minimum risk management practices. This unified category significantly simplifies agencies\u2019 process for identifying high-risk systems by requiring only one determination as opposed to two. <\/p>\n\n\n\n In line with the earlier versions of these memos, the updated guidance requires agencies to establish the following heightened risk management practices for all \u201chigh-impact\u201d use cases:<\/p>\n\n\n\n While many of these core risk management requirements extend those set out under the previous OMB AI guidance, there are several notable differences in the updated OMB memos. First, the updated guidance allows pilot programs to be exempted from the minimum risk management practices, so long as a pilot is time-bound, limited in scope, and approved by the agency Chief AI Officer (CAIO). Second, the updated guidance removes several previously required minimum risk management practices, including requirements for agencies to provide notice to individuals impacted by an AI tool and to maintain an option for individuals to opt out of AI-enabled decisions. Third, the updated guidance no longer includes previous requirements for rights-impacting tools to undergo separate assessments on equity and discrimination, although impact assessments still require agencies to evaluate how systems use information related to protected classes and to describe mitigation measures used to prevent unlawful discrimination. 
Finally, the updated guidance narrows the definition of systems that are presumed to be \u201chigh-impact,\u201d removing certain categories previously included in the definitions of \u201csafety-impact\u201d and \u201crights-impacting\u201d AI systems, such as AI systems used to maintain the integrity of elections and voting infrastructure and systems used to detect or measure human emotions.<\/p>\n\n\n\n Responsible AI Procurement<\/strong><\/p>\n\n\n\n Many of the AI tools used by federal agencies are procured from, or developed with the support of, third-party vendors. Because of this, it is critical for agencies to establish additional measures for ensuring the efficacy, safety, and transparency of AI procurement. <\/p>\n\n\n\n To meet this need, OMB\u2019s updated memos simplify and build on many of the responsible AI procurement practices put in place by the initial version of OMB\u2019s guidance. First, and most importantly, this updated guidance requires agencies to extend their minimum risk management practices to procured AI systems. Similar to OMB\u2019s previous requirements, agencies are directed to proactively identify whether a system they are seeking to acquire is likely high-impact and to disclose that information in a solicitation. And, once an agency is in the process of acquiring a high-impact AI tool, it is obligated to include contract language that ensures compliance with all minimum risk management practices. These measures ensure that the same protections are put in place whether a high-impact AI tool is developed in-house or acquired from a vendor. <\/p>\n\n\n\n Moreover, the updated guidance outlines additional obligations that agencies must establish for all<\/em> procured AI systems. 
To ensure that agency contracts contain sufficient protections, agencies are directed to include contract terms that address the intellectual property rights and use of government data, data privacy, ongoing testing and monitoring, performance standards, and notice requirements to alert agencies prior to the integration of new AI features into a procured system. The updated guidance also has a heightened focus on promoting competition in the AI marketplace, requiring agencies to implement protections against vendor lock-in throughout the solicitation development, selection and award, and contract closeout phases. <\/p>\n\n\n\n In tandem with these contractual obligations, agencies are required to monitor the ongoing performance of an AI system throughout the administration of a contract and to establish criteria for sunsetting the use of an AI system. One significant difference in OMB\u2019s updated memos, however, is that these procurement obligations only apply to future contracts and renewals, whereas the prior version of OMB\u2019s guidance extended a subset of these requirements to existing contracts for high-impact systems. <\/p>\n\n\n\n Conclusion<\/strong><\/p>\n\n\n\n As CDT highlighted<\/a> when the first version of OMB\u2019s guidance was published a year ago, while this revised guidance is an important step forward, implementation will be the most critical part of this process. OMB and federal agencies have an opportunity to use this updated guidance to address inconsistencies and gaps in AI governance practices across agencies<\/a>, increasing the standardization and effectiveness of agencies\u2019 adherence to these requirements even as they expand their use of AI. <\/p>\n\n\n\n Ensuring adequate implementation of OMB\u2019s memos is not only critical to promoting the effective use of taxpayer money, but is especially urgent given alarming reports about the opaque and potentially risky uses of AI at the hands of DOGE. 
The government has an obligation to lead by example by modeling what responsible AI innovation should look like in practice. These revised memos are a good start, but now it is time for federal agencies to walk the walk and not just talk the talk.<\/p>\n","protected":false},"featured_media":86101,"template":"","content_type":[7251],"area-of-focus":[10221],"class_list":["post-108821","insight","type-insight","status-publish","has-post-thumbnail","hentry","content_type-blog","area-of-focus-ai-in-public-benefits"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108821","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":1,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108821\/revisions"}],"predecessor-version":[{"id":108822,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108821\/revisions\/108822"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/86101"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=108821"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=108821"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=108821"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}\n