{"id":108021,"date":"2025-03-25T00:01:00","date_gmt":"2025-03-25T04:01:00","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&p=108021"},"modified":"2025-04-11T14:17:41","modified_gmt":"2025-04-11T18:17:41","slug":"to-ai-or-not-to-ai-a-practice-guide-for-public-agencies-to-decide-whether-to-proceed-with-artificial-intelligence","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/to-ai-or-not-to-ai-a-practice-guide-for-public-agencies-to-decide-whether-to-proceed-with-artificial-intelligence\/","title":{"rendered":"To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence"},"content":{"rendered":"\n
This report was authored by Sahana Srinivasan.

Executive Summary

Public agencies have significant incentives to adopt artificial intelligence (AI) in their delivery of services and benefits, particularly amid recent advancements in generative AI. In fact, public agencies have already been using AI for years, in use cases ranging from chatbots that help constituents navigate agency websites to fraud detection in benefit applications. Agencies’ resource constraints, together with their desire to innovate, increase efficiency, and improve the quality of their services, make AI and the benefits it often promises (automation of repetitive tasks, analysis of large volumes of data, and more) an attractive area for investment.

However, using AI to solve a given problem, or for any other agency use case, should not be a foregone conclusion. AI has limitations both in its general capabilities and in its fit for a particular situation. Agencies should therefore engage in an explicit decision-making process before developing or procuring AI systems, to determine whether AI is a viable option for solving a given problem and a stronger solution than non-AI alternatives. If an agency initially decides to proceed with an AI system, it should then reevaluate that decision repeatedly throughout the AI development lifecycle. Vetting the use of AI is critical because inappropriate use of AI in government service and benefit delivery can undermine individuals’ rights and safety and waste resources.

Despite the emergence of new frameworks, guidance, and recommendations to support the responsible use of AI by public agencies, there is a dearth of guidance on how to decide whether AI should be used in the first place, including how to compare it to other solutions and how to document and communicate that decision-making process to the public. This brief seeks to address that gap by proposing a four-step framework that public administrators can use to determine whether to proceed with an AI system for a particular use case.

Because this brief uses “AI” to refer to any form of AI system, including algorithms that predict outcomes or classify data, the guidance applies to any type of AI use case an agency may be considering.

Most importantly, these action steps should help public administrators make informed decisions about whether the promises of AI can be realized in improving agencies’ delivery of services and benefits while still protecting individuals, particularly their privacy, safety, and civil rights. Navigating this decision-making process responsibly is especially critical when public agencies are considering moderate- or high-risk AI uses that affect constituents’ lives and could compromise safety or human rights.