{"id":106948,"date":"2025-01-16T00:04:00","date_gmt":"2025-01-16T05:04:00","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&p=106948"},"modified":"2025-01-15T15:52:21","modified_gmt":"2025-01-15T20:52:21","slug":"assessing-ai-surveying-the-spectrum-of-approaches-to-understanding-and-auditing-ai-systems","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/assessing-ai-surveying-the-spectrum-of-approaches-to-understanding-and-auditing-ai-systems\/","title":{"rendered":"Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems"},"content":{"rendered":"\n

With contributions from Chinmay Deshpande, Ruchika Joshi, Evani Radiya-Dixit, Amy Winecoff, and Kevin Bankston

\"Graphic<\/a>
Graphic for CDT AI Gov Lab’s report, “Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems.” Illustration of a collection of AI “tools” and “toolbox” \u2013 a hammer and red toolbox \u2013 and a stack of checklists with a pencil.<\/em><\/figcaption><\/figure>\n\n\n\n


What do we mean when we talk about “assessing” AI systems?

The importance of a strong ecosystem of AI risk management and accountability has only increased in recent years, yet critical concepts like auditing, impact assessment, red-teaming, evaluation, and assurance are often used interchangeably, and they risk losing their meaning without a clearer understanding of the specific goals that drive each accountability exercise. Articulating the goals of various AI assessment approaches and mapping them against policy proposals and practitioner actions can help tune accountability practices to best suit their desired aims.

That is the purpose of this Center for Democracy & Technology report: to map the spectrum of AI assessment approaches, from narrowest to broadest and from least to most independent, and to identify which approaches best serve which goals.

Executive Summary

Goals of AI assessment and evaluation generally fall under the following categories: