AI Policy & Governance Archives - Center for Democracy and Technology https://cdt.org/area-of-focus/ai-policy-governance/ Wed, 14 May 2025 18:55:51 +0000

Op-Ed – Artificial Sweeteners: The Dangers of Sycophantic AI https://cdt.org/insights/op-ed-artificial-sweeteners-the-dangers-of-sycophantic-ai/ Wed, 14 May 2025 18:49:56 +0000 https://cdt.org/?post_type=insight&p=108846 This op-ed – authored by CDT’s Amy Winecoff – first appeared in Tech Policy Press on May 14, 2025. A portion of the text has been pasted below. At the end of April, OpenAI released a model update that made ChatGPT feel less like a helpful assistant and more like a yes-man. The update was quickly rolled back, […]

This op-ed – authored by CDT’s Amy Winecoff – first appeared in Tech Policy Press on May 14, 2025. A portion of the text has been pasted below.

At the end of April, OpenAI released a model update that made ChatGPT feel less like a helpful assistant and more like a yes-man. The update was quickly rolled back, with CEO Sam Altman admitting the model had become “too sycophant-y and annoying.” But framing the concern as just about the tool’s irritating cheerfulness downplays the potential seriousness of the issue. Users reported the model encouraging them to stop taking their medication or lash out at strangers.

This problem isn’t limited to OpenAI’s recent update. A growing number of anecdotes and reports suggest that overly flattering, affirming AI systems may be reinforcing delusional thinking, deepening social isolation, and distorting users’ grip on reality. In this context, the OpenAI incident serves as a sharp warning: in the effort to make AI friendly and agreeable, tech firms may also be introducing new dangers.

At the center of AI sycophancy are techniques designed to make systems safer and more “aligned” with human values. AI systems are typically trained on massive datasets sourced from the public internet. As a result, these systems learn not only from useful information but also from toxic, illegal, and unethical content. To address these problems, AI developers have introduced techniques to help AI systems respond in ways that better match users’ intentions.
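To make that mechanism concrete, here is a deliberately simplified Python sketch (invented for illustration, not drawn from any developer's actual training pipeline) of how preference-based fine-tuning can tilt toward sycophancy: a stand-in "reward" mixes an accuracy signal with a user-approval signal, and once approval is weighted heavily enough, the flattering reply wins even though it is the less accurate one. All candidates, scores, and weights below are made up.

```python
# Toy illustration only: a stand-in "reward model" scores two candidate replies
# by mixing an accuracy signal with a user-approval signal. The candidates,
# scores, and weights are invented for this sketch.

CANDIDATES = [
    {"text": "That plan sounds risky; please check with your doctor first.",
     "accuracy": 0.9, "approval": 0.3},
    {"text": "Great instinct! You should absolutely go for it.",
     "accuracy": 0.2, "approval": 0.9},
]

def reward(candidate: dict, approval_weight: float) -> float:
    """Weighted mix of accuracy and user approval; a higher approval_weight rewards agreeableness."""
    return (1 - approval_weight) * candidate["accuracy"] + approval_weight * candidate["approval"]

for w in (0.2, 0.8):
    best = max(CANDIDATES, key=lambda c: reward(c, w))
    print(f"approval_weight={w}: preferred reply -> {best['text']}")
```

Even in this toy form, the point is visible in the output: the cautious answer wins at a low approval weight, and the flattering one wins once approval dominates the reward.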

Read the full article.

AI Agents In Focus: Technical and Policy Considerations https://cdt.org/insights/ai-agents-in-focus-technical-and-policy-considerations/ Wed, 14 May 2025 15:26:42 +0000 https://cdt.org/?post_type=insight&p=108816 AI agents are moving rapidly from prototypes to real-world products. These systems are increasingly embedded into consumer tools, enterprise workflows, and developer platforms. Yet despite their growing visibility, the term “AI agent” lacks a clear definition and is used to describe a wide spectrum of systems — from conversational assistants to action-oriented tools capable of […]

AI agents are moving rapidly from prototypes to real-world products. These systems are increasingly embedded into consumer tools, enterprise workflows, and developer platforms. Yet despite their growing visibility, the term “AI agent” lacks a clear definition and is used to describe a wide spectrum of systems — from conversational assistants to action-oriented tools capable of executing complex tasks. This brief focuses on a narrower and increasingly relevant subset: action-taking AI agents, which pursue goals by making decisions and interacting with digital environments or tools, often with limited human oversight. 

As an emerging class of AI systems, action-taking agents mark a distinct shift from earlier generations of generative AI. Unlike passive assistants that respond to user prompts, these systems can initiate tasks, revise plans based on new information, and operate across applications or time horizons. They typically combine large language models (LLMs) with structured workflows and tool access, enabling them to navigate interfaces, retrieve and input data, and coordinate tasks across systems, in addition to often offering conversational interfaces. In more advanced settings, they operate in orchestration frameworks where multiple agents collaborate, each with distinct roles or domain expertise.
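As a rough illustration of that structure, the Python sketch below stubs out an action-taking agent's control loop: a planner (standing in for an LLM call) chooses a tool, the tool's result is fed back as an observation, and the loop repeats until the planner signals completion or a step limit is reached. The tool names, the stub planner, and the bounded loop are all hypothetical simplifications, not any product's actual architecture.

```python
# Hypothetical sketch of an action-taking agent's control loop. A stub planner
# stands in for an LLM; it picks a tool, the result is fed back as an
# observation, and the loop repeats until the planner is done or a step limit is hit.
# Tool names and planner logic are invented for illustration.

from typing import Callable, Dict, List

def search_flights(destination: str) -> str:
    return f"Cheapest flight to {destination}: $420, departing 3 June"

def send_email(body: str) -> str:
    return f"Draft email created: {body[:40]}..."

TOOLS: Dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "send_email": send_email,
}

def stub_planner(goal: str, observations: List[str]) -> dict:
    """Stand-in for an LLM call that decides the next step from the goal and prior observations."""
    if not observations:
        return {"tool": "search_flights", "arg": goal, "done": False}
    if len(observations) == 1:
        return {"tool": "send_email", "arg": observations[-1], "done": False}
    return {"tool": None, "arg": None, "done": True}

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    observations: List[str] = []
    for _ in range(max_steps):  # a step limit is one simple control on the loop
        step = stub_planner(goal, observations)
        if step["done"]:
            break
        result = TOOLS[step["tool"]](step["arg"])  # act via a tool
        observations.append(result)                # new information informs the next plan
    return observations

print(run_agent("Lisbon"))
```

Even this toy loop shows why control loop design and tool access matter: whatever the planner selects is executed without further human review, so the scope of the tool registry and the step limit are where oversight is actually enforced.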

This brief begins by outlining how action-taking agents function, the technical components that enable them, and the kinds of agentic products being built. It then explains how technical components of AI agents — such as control loop complexity, tool access, and scaffolding architecture — shape their behavior in practice. Finally, it surfaces emerging areas of policy concern where the risks posed by agents increasingly appear to outpace the safeguards currently in place, including security, privacy, control, human-likeness, governance infrastructure, and allocation of responsibility. Together, these sections aim to clarify both how AI agents currently work and what is needed to ensure they are responsibly developed and deployed.

Read the full brief.

CDT and the Leadership Conference Release New Analysis of Immigration, DOGE, and Data Privacy https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-immigration-doge-and-data-privacy/ Mon, 12 May 2025 13:59:00 +0000 https://cdt.org/?post_type=insight&p=108756 In March, CDT and the Leadership Conference’s Center for Civil Rights and Technology released a fact sheet examining some of the core issues related to the Department of Government Efficiency’s (DOGE) access to and use of sensitive information held by federal agencies. Since we released this analysis, not only has DOGE increased its efforts to […]

In March, CDT and the Leadership Conference’s Center for Civil Rights and Technology released a fact sheet examining some of the core issues related to the Department of Government Efficiency’s (DOGE) access to and use of sensitive information held by federal agencies. Since we released this analysis, not only has DOGE increased its efforts to access sensitive information across the federal government, but DOGE and federal law enforcement authorities have specifically sought to repurpose administrative data for immigration-related uses. 

As the federal government seeks to rapidly expand the use of sensitive data to target immigrants, CDT and the Leadership Conference developed a follow-up explainer that analyzes the issues surrounding federal immigration authorities’ and DOGE’s access to and use of administrative data for immigration-related activities. This new explainer details:

  • The types of administrative data held by federal agencies, 
  • Examples of how federal administrative data is being repurposed for immigration-related efforts, 
  • The legal protections of federal administrative data and law enforcement exceptions, 
  • The impacts of government data access and use on immigrants and society, and
  • The unanswered questions about and potential future changes to the federal government’s access, use, and sharing of administrative data for immigration-related purposes. 

Repurposing federal administrative data for immigration-related activities may have widespread and significant impacts on the lives of U.S. citizens and non-citizen immigrants alike. Ensuring transparency into the actions of DOGE and federal immigration authorities is a critical step towards protecting and safeguarding data privacy for everyone.

Read the full analysis.

It’s (Getting) Personal: How Advanced AI Systems Are Personalized https://cdt.org/insights/its-getting-personal-how-advanced-ai-systems-are-personalized/ Fri, 02 May 2025 21:37:40 +0000 https://cdt.org/?post_type=insight&p=108515 This brief was co-authored by Princess Sampson. Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search engines and research databases now by default offer AI-generated summaries of hundreds of results relevant to a query, productivity software promises knowledge workers the ability to quickly create documents […]

This brief was co-authored by Princess Sampson.

Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search engines and research databases now by default offer AI-generated summaries of hundreds of results relevant to a query, productivity software promises knowledge workers the ability to quickly create documents and presentations, and social media and e-commerce platforms offer in-app AI-powered tools for creating and discovering content, products, and services.

Many of today’s advanced AI systems like chatbots, assistants, and agents are powered by foundation models: large-scale AI models trained on enormous collections of text, images, or audio gathered from the open internet, social media, academic databases, and the public domain. These sources of reasonably generalized knowledge allow AI assistants and other generative AI systems to respond to a wide variety of user queries, synthesize new content, and analyze or summarize a document outside of their training data.

But out of the box, generic foundation models often struggle to surface details likely to be most relevant to specific users. AI developers have begun to make the case that increasing personalization will make these technologies more helpful, reliable, and appealing by providing more individualized information and support. As visions for powerful AI assistants and agents that can plan and execute actions on behalf of users motivate developers to make tools increasingly “useful” to people — that is, more personalized — practitioners and policymakers will be asked to weigh in with increasing urgency on what many will argue are tradeoffs between privacy and utility, and on how to preserve human agency and reduce the risk of addictive behavior.

Much attention has been paid to the immense stores of personal data used to train the foundation models that power these tools. This brief continues that story by highlighting how generative AI-powered tools use user data to deliver progressively personalized experiences, teeing up conversations about the policy implications of these approaches.
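As a purely illustrative sketch of the kind of personalization described above, the Python snippet below shows one common pattern: data retained about a user from earlier interactions (a hypothetical "memory" store) is folded into the prompt sent to a model with each new request. The field names and memory contents are invented; real products differ in what they retain, where it is stored, and how it is used.

```python
# Illustrative sketch of prompt-level personalization: data retained about a
# user (a hypothetical "memory" store) is injected into the context sent to a
# model with each new request. Field names and contents are invented.

USER_MEMORY = {
    "user-123": {
        "preferences": ["vegetarian recipes", "concise answers"],
        "recent_topics": ["meal planning", "grocery budgeting"],
    }
}

def build_personalized_prompt(user_id: str, query: str) -> str:
    """Assemble a prompt that combines stored user context with the new question."""
    memory = USER_MEMORY.get(user_id, {})
    profile = [f"- prefers: {p}" for p in memory.get("preferences", [])]
    profile += [f"- recently asked about: {t}" for t in memory.get("recent_topics", [])]
    profile_text = "\n".join(profile) if profile else "- no stored profile"
    return (
        "You are a helpful assistant. Known user context:\n"
        f"{profile_text}\n\n"
        f"User question: {query}"
    )

print(build_personalized_prompt("user-123", "What should I cook this week?"))
```

The design choice worth noticing is that the personal data never changes the model itself; it travels with every request, which is precisely why questions about retention, access, and user control arise.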

Read the full brief.

CDT Europe’s AI Bulletin: April 2025 https://cdt.org/insights/cdt-europes-ai-bulletin-april-2025/ Tue, 29 Apr 2025 22:22:26 +0000 https://cdt.org/?post_type=insight&p=108506 AILD Withdrawal Maintained Despite Concerns from Civil Society and Lawmakers On 7 April, CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing […]

AILD Withdrawal Maintained Despite Concerns from Civil Society and Lawmakers

On 7 April, CDT Europe joined a coalition of civil society organisations in sending an open letter to European Commission Executive Vice-President Virkkunen and Commissioner McGrath, expressing deep concern over the Commission’s recent decision to withdraw the proposed Artificial Intelligence Liability Directive (AILD) and stressing the urgent need to immediately begin preparatory work on a new, robust liability framework. We argued that the proposal is necessary because individuals seeking compensation for AI-induced harm will need to prove that damage was caused by a faulty AI system, which would be an insurmountable burden without a liability framework.  

In a scheduled hearing before the European Parliament’s JURI Committee, Commissioner Virkkunen defended the withdrawal, restating the need to reduce overlapping obligations and ensure simpler compliance with the digital acquis for businesses. Crucially, she suggested fully implementing and enforcing the AI Act before any new legislation would be proposed.

Following the hearing, the Rapporteur of the Directive, Axel Voss, as well as the Rapporteur of the AI Act, Brando Benifei, sent a joint letter to the European Commission expressing their concern over the proposed withdrawal. They recalled that several key proposals of the European Parliament were withdrawn during the AI Act negotiations based on the promise that the AILD would address those concerns. They also noted the persisting gaps for victims of AI-specific harms, and suggested that the Commission include an updated proposal as part of the upcoming Digital Omnibus Package. 

AI Continent Plan Unveiled by the European Commission

The European Commission published the AI Continent Action Plan on 9 April, outlining its strategy to support AI scale-up in the EU through five distinct pillars, including computing infrastructure, data, regulatory simplification, and attracting talent. The most notable suggestions include a Data Union Strategy and regulatory simplification measures, both aimed at reducing compliance burdens and removing structural bottlenecks for AI developers and deployers.

The Data Union Strategy, set for release in Q3 2025, is designed to improve access to and use of high-quality, sector-specific data across the EU by improving cross-border data availability, including by easing the legal and technical conditions for data-sharing. In this regard, the Plan announces a public consultation set to open in May 2025, where stakeholders will be asked to describe current barriers to accessing data and how to simplify compliance with EU data rules.

The Action Plan similarly considers regulatory simplification in connection with the AI Act, announcing as a first step the July 2025 establishment of an AI Act Service Desk to provide practical compliance guidance, interactive tools, and direct support for startups and SMEs. However, in a public consultation launched simultaneously, the European Commission prompts stakeholders to identify regulatory challenges and recommend further measures to facilitate compliance and possible simplification of the AI Act, paving the way for further deregulatory efforts. 

Finally, the plan includes a proposal for a Cloud and AI Development Act, expected by early 2026, to fast-track environmental permits for data centres, enable a common EU cloud services marketplace, and scale the EU’s computing infrastructure, explicitly seeking to triple EU data centre capacity by 2035.

The Commission’s AI Continent Action Plan sets out a roadmap for five consultative processes in total:

  1. A call for evidence for a European Strategy for AI in science, with a submission deadline of 5 June 2025
  2. A call for evidence and public consultation on the Apply AI Strategy, with a submission deadline of 4 June 2025
  3. A public consultation on the Data Union Strategy, expected to open in May 2025
  4. A call for evidence and public consultation on the Cloud and AI Development Act, with a submission deadline of 4 June 2025
  5. A call for interest on AI GigaFactories, with a submission deadline of 20 June 2025

Public Consultation on Guidelines for General-Purpose AI Models Opened

The European Commission opened a public consultation seeking input that will feed into the upcoming guidelines under the AI Act on general-purpose AI (GPAI) models, which are distinct from the ongoing Code of Practice process. These guidelines aim to provide more clarity on various issues, including the definition of GPAI models; the definition of providers along the value chain; the clarification of what ‘placing on the market’ entails; and specifications regarding the exemption for open-source models. They will also provide more detail on the enforcement approach taken by the AI Office.

The guidelines will complement the Code of Practice on GPAI by explaining what signing and adhering to the Code of Practice means for companies. While the Code of Practice addresses GPAI model providers’ obligations, the guidelines clarify to whom and when those obligations apply. According to the consultation, both the guidelines and the final Code of Practice are expected to be published before August 2025. The consultation is open for all interested stakeholders until 22 May. 

In Other ‘AI & EU’ News 

  • The final draft of the Code of Practice on general-purpose AI models is due to be published by 2 May. However, the latest consultation by the European Commission on GPAI models suggests that the publication may take place in either May or June this year.
  • The Irish Data Protection Commission (DPC) opened an investigation into the Grok AI model developed by xAI. In particular, the DPC will examine whether training the model on publicly-accessible posts by EU users on the platform X is compliant with xAI’s obligations under the General Data Protection Regulation.
  • Following Meta’s announcement that it would train its AI using public content shared by adults on its products in the EU, several data protection authorities — including those from France, Belgium, the Netherlands, and Hamburg — notified EU residents that they can take steps to object to the processing. Users wishing to object will have to do so before 27 May.
  • 30 MEPs warned the European Commission against watering down its definition of open-source AI. The letter’s signatories asked the Commission to clarify that certain models, such as those in Meta’s Llama series, are not considered open-source under the AI Act, given that Meta does not share the training code of its models and prohibits the use of its models to train other AI systems. They therefore asked the Commission to consider developing guidance on the meaning of open-source for the purpose of enforcing the AI Act, taking into account international standards including the Open Source Initiative.
  • Spain’s AI draft bill has come under fire by academics and civil society organisations for a provision that exempts public authorities from administrative fines. Critics argue that the exemption could weaken enforcement of AI safeguards and dilute protection of individual rights. For example, misuse of prohibited technologies, such as real-time remote biometric identification, by public bodies would result only in a warning and cessation of the activity. Civil society is calling for removal of the exemption, as well as introduction of disciplinary measures for officials, including disqualification from public office.
  • The next public webinar in the AI Pact series, which aims to promote knowledge-sharing and provide participants with a better understanding of the AI Act and its implementation, will be held on 27 May. You can find more information, as well as recordings of the past events, here.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  

Op-ed: Before AI Agents Act, We Need Answers https://cdt.org/insights/op-ed-before-ai-agents-act-we-need-answers/ Tue, 22 Apr 2025 13:40:41 +0000 https://cdt.org/?post_type=insight&p=108437 CDT’s Ruchika Joshi penned a new op-ed that first appeared in Tech Policy Press on April 17, 2025. Read an excerpt: Tech companies are betting big on AI agents. From sweeping organizational overhauls to CEOs claiming agents will ‘join the workforce’ and power a multi-trillion-dollar industry, the race to match hype is on. While the boundaries of what qualifies as […]

CDT’s Ruchika Joshi penned a new op-ed that first appeared in Tech Policy Press on April 17, 2025.

Read an excerpt:

Tech companies are betting big on AI agents. From sweeping organizational overhauls to CEOs claiming agents will ‘join the workforce’ and power a multi-trillion-dollar industry, the race to match hype is on.

While the boundaries of what qualifies as an ‘AI agent’ remain fuzzy, the term is commonly used to describe AI systems designed to plan and execute tasks on behalf of users with increasing autonomy. Unlike AI-powered systems like chatbots or recommendation engines, which can generate responses or make suggestions to assist users in making decisions, AI agents are envisioned to execute those decisions by directly interacting with external websites or tools via APIs.

Where an AI chatbot might have previously suggested flight routes to a given destination, AI agents are now being designed to find which flight is cheapest, book the ticket, fill out the user’s passport information, and email the boarding pass. Building on that idea, early demonstrations of agent use include operating a computer for grocery shopping, automating HR approvals, or managing legal compliance tasks.
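To ground the idea of agents acting through APIs, here is a hedged, hypothetical Python example of how an external action such as flight booking might be described to a model as a structured tool, following the general shape of function-calling schemas. The tool name, parameters, endpoint behavior, and the sample model output are invented rather than taken from any specific vendor.

```python
# Hypothetical example of exposing an external action (a flight-booking API)
# to a model as a structured tool description, in the general shape of a
# function-calling schema. The tool, its parameters, and the sample model
# output are invented, not any vendor's actual API.

import json

BOOK_FLIGHT_TOOL = {
    "name": "book_flight",
    "description": "Book the selected flight and return a booking reference.",
    "parameters": {
        "type": "object",
        "properties": {
            "flight_id": {"type": "string", "description": "Identifier of the chosen flight"},
            "passenger_name": {"type": "string"},
            "passport_number": {"type": "string", "description": "Sensitive data the agent would handle"},
        },
        "required": ["flight_id", "passenger_name"],
    },
}

# The model's side of the exchange is typically a JSON object naming the tool
# and its arguments; the surrounding application then performs the real call.
sample_model_output = {
    "tool": "book_flight",
    "arguments": {"flight_id": "LX-101", "passenger_name": "A. Traveler"},
}

print(json.dumps(BOOK_FLIGHT_TOOL, indent=2))
print(json.dumps(sample_model_output, indent=2))
```

Note that even this illustrative schema passes sensitive fields such as a passport number through the agent, which is exactly the kind of data handling the op-ed flags as raising the stakes of failure.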

Yet current AI agents have been quick to break, indicating that reliable task execution remains an elusive goal. This is unsurprising, since AI agents rely on the same foundation models as non-agentic AI and so are prone to familiar challenges of bias, hallucination, brittle reasoning, and limited real-world grounding. Non-agentic AI systems have already been shown to make expensive mistakes, exhibit biased decision-making, and mislead users about their ‘thinking’. Enabling such systems to now act on behalf of users will only raise the stakes of these failures.

As companies race to build and deploy AI agents to act with less supervision than earlier systems, what is keeping these agents from harming people?

The unsettling answer is that no one really knows, and the documentation that the agent developers provide doesn’t add much clarity. For example, while system or model cards released by OpenAI and Anthropic offer some details on agent capabilities and safety testing, they also include vague assurances on risk mitigation efforts without providing supporting evidence. Others have released no documentation at all or only done so after considerable delay.

Read the full op-ed.

AI in Local Government: How Counties & Cities Are Advancing AI Governance https://cdt.org/insights/ai-in-local-government-how-counties-cities-are-advancing-ai-governance/ Tue, 15 Apr 2025 14:23:40 +0000 https://cdt.org/?post_type=insight&p=108358 This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and state education agencies’ AI guidance. Introduction While much attention has been paid to the use of AI by state and federal agencies, city and local governments […]

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and state education agencies’ AI guidance.

Introduction

While much attention has been paid to the use of AI by state and federal agencies, city and local governments also are increasingly using AI and should implement safeguards around public sector uses of these tools. City and county governments administer a wide range of public services – including transportation, healthcare, law enforcement, veterans services, and nutrition assistance, to name only a few – that have significant impacts on individuals’ health and safety. AI systems can assist in increasing the efficiency and effectiveness of local governments’ provision of such services, but without proper guardrails these same tools can also harm constituents and impede the safe, dignified, and fair delivery of public services.

In response to both the benefits and risks of using AI in local government, an increasing number of cities and counties have released AI policies and guidance. Organizations like the GovAI Coalition and the National Association of Counties are helping local governments craft and implement their own policies. In particular, the GovAI Coalition, a group of state and local public agencies working to advance responsible AI, created several template AI policies that a number of local agencies have since adopted as part of their own AI governance strategies.

To understand local trends, we analyzed public-facing policy documents from 21 cities and counties. Because most cities and counties do not make their internal IT policies publicly available, the following analysis may be skewed toward cities and counties that have taken proactive steps to disclose their AI policies. Analysis of publicly available AI policies and guidance at the local level reveals five common trends in AI governance. These policies:

  • Draw from federal, state, and other local AI governance guidance;
  • Emphasize that use of AI should align with existing legal obligations;
  • Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security;
  • Prioritize public transparency of AI uses; and
  • Advance accountability and human oversight in decision-making that incorporates AI.

AI Policy and Guidance at the County and City Level

Within the past several years, county and city governments across the country have published AI use policies and guidance to advance responsible AI uses and place guardrails on the ways they use the technology. Counties and cities are using various methods in regulating government AI use, including policies, guidelines, and executive orders. In addition, at least two cities – New York, NY, and San Francisco, Calif. – have enacted city ordinances requiring agencies to create public inventories of their AI use cases.

While many of these documents are not publicly accessible, several counties – Haines Borough, Alaska; Alameda County, Calif.; Los Angeles County, Calif.; Santa Cruz County, Calif.; Sonoma County, Calif.; Miami-Dade County, Fla.; Prince George’s County, Md.; Montgomery County, Md.; Washington County, Ore.; and Nashville and Davidson County, Tenn. – and city governments – Baltimore, Md.; Birmingham, Ala.; Boise, Idaho; Boston, Mass.; Lebanon, NH; Long Beach, Calif.; New York City, NY; San Francisco, Calif.; San Jose, Calif.; Seattle, Wash.; and Tempe, Ariz. – have publicly released their policies, providing important insight into key trends across jurisdictions. These policies span states that already have existing state-wide policies and those that do not. Regardless of state-level policy, however, additional county and city-level guidance can help clarify the roles and obligations of local agencies.

Trends in County and City AI Policies and Guidance

  1. Draw from federal, state, and other local AI governance guidance

At both the county and city level, governments are building on other local, state, and federal guidance as a starting point, mostly by borrowing language. Some of the most commonly cited or used resources are Boston’s AI guidelines, San Jose’s AI guidelines, the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework, and the Biden Administration’s since-rescinded AI Executive Order and AI Bill of Rights.

For example, the City of Birmingham, Ala.’s generative AI guidelines acknowledge that the authors drew inspiration from the City of Boston’s guidelines. Likewise, Miami-Dade County’s report on AI policies and guidelines draws from several other government resources, including the cities of Boston, San Jose, and Seattle, the state of Kansas, the White House, and NIST.

  2. Emphasize that use of AI should align with existing legal obligations

At least 15 of the guidance documents that we analyzed explicitly call out the necessity for public agencies to ensure their use of AI tools adheres to existing laws relating to topics such as cybersecurity, public records, and privacy. On the city front, San Jose, Calif.’s AI guidelines state that “users will need to comply with the California Public Records Act and other applicable public records laws” for all city uses of generative AI, and Tempe, Ariz. mentions that all city employees must “comply with applicable laws, standards and regulations related to AI and data protection.” Several counties similarly affirm public agencies’ obligations to use AI systems in compliance with existing laws. Nashville and Davidson County’s guidance states that “all AI and GenAI use shall comply with relevant data privacy laws and shall not violate any intellectual property use,” and Los Angeles County’s technology directive affirms that AI systems must be used in “adherence to relevant laws and regulations.”

Some cities and counties take an additional step by creating access controls to prevent unauthorized use and disclosure of personal information. Santa Cruz County, for example, prohibits the use of AI systems without authorization, and New York City specifies that employees can only use tools that have been “approved by responsible agency personnel” and are “authorized by agency-specific and citywide requirements.” Likewise, Haines Borough requires employees to have specific authorization to use any AI systems that handle sensitive information.

  3. Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security

Cities and counties commonly recognize the following three main risks of using AI:

  • Perpetuating bias: About 12 of the guidelines mention the potential for AI tools to produce biased outputs. One example of this at the city level is Lebanon, NH’s AI policy, which specifies the different types of bias issues that can show up with AI – biased training data, sampling bias, and stereotyping/societal biases – and expresses that “any biases that are identified must be addressed and corrective actions should be taken.” Alameda County, Calif., similarly highlights these issues, stating that “GenAI models can inadvertently amplify biases in the data the models are trained with or that users provide AI.”
  • Accuracy and unreliable outputs: At least 15 cities and counties discuss the unreliability of AI tools (due to issues such as hallucination), often acknowledging this through requiring employees to double-check or verify outputs before using AI-generated information in their work. For instance, Baltimore, Md.’s generative AI executive order prohibits city employees from using generative AI outputs without fact-checking and refining the content, especially if used for decision-making or in public communications. Guidance published by Washington County, Oreg. directs county employees to “fact check and review all content generated by AI,” noting that “while Generative AI can rapidly produce clear prose, the information and content might be inaccurate, outdated, or entirely fictional.” 
  • Privacy and security concerns: Roughly 18 city and county AI guidelines and policies state the importance of protecting privacy and security. These policies emphasize the potential privacy- and security-related harms if employees, for example, input personally identifiable or other sensitive information into an AI tool. The City of San Francisco, Calif., explains that a risk of using generative AI is “exposing non-public data as part of a training data set” and recommends that employees do not enter information that should not be public into non-enterprise generative AI tools. Long Beach, Calif., also recommends that city employees opt out of generative AI tools’ data collection and sharing whenever possible, and even provides a step-by-step guide on how to do so on ChatGPT. Sonoma County, Calif., notes that “there can be risks in using this technology, including… security and privacy concerns with inputting proprietary or confidential information about an employee, client, operations, etc. when interacting with the AI technology.”

  4. Prioritize public transparency of AI uses

Roughly 17 city and county guidelines and policies encourage, or even require, employees to publicly disclose use of AI tools. The City of Boise, Idaho, states that “disclosure builds trust through transparency,” encouraging employees to cite their AI usage in all cases, but especially in significant public communications or for other important purposes. Seattle, Wash.’s generative AI policy goes even further on the principle of transparency, and commits to making its documentation related to city use of AI systems publicly available. Santa Cruz County, Calif., for instance, requires employees to include a notice “when Generative AI contributed substantially to the development of a work product” that “indicate(s) the product and version used.”

  5. Advance accountability and human oversight in decision-making that incorporates AI

About 14 of the guidance documents stress that responsibility ultimately falls on city and county employees, either when using AI outputs or making decisions using AI tools. Some city governments even take this a step further by including enforcement mechanisms for non-compliance with their AI policies, including employee termination. One example is seen in guidance issued by Alameda County, Calif., which directs all employees to “thoroughly review and fact check all AI-generated content,” emphasizing that “you are responsible for what you create with GenAI assistance.” Another example is the City of Lebanon, NH, stating that employee non-compliance with the guidelines “may result in disciplinary action or restriction of access, and possibly even termination of employment.”

Conclusion

Regardless of the level of government, responsible AI adoption should follow the principles of transparency, accountability, and equity to ensure that AI tools are used to serve constituents in ways that improve their lives. Taking steps to responsibly implement and oversee AI will not only help local governments use these tools effectively but will also build public trust.

Similar to what state governors and lawmakers can do to advance public sector AI regulation, cities and counties should consider these components of AI governance:

  • Promote transparency and disclosure by documenting AI uses through public-facing use case inventories, such as those maintained by New York, NY and San Jose, Calif., and direct notices to individuals impacted by AI systems.
  • Implement substantive risk management practices for high-risk uses by requiring pre- and post-deployment testing and ongoing monitoring of systems with a significant impact on individuals’ rights, safety, and liberties. While specific risk management practices are not included in many local guidance documents, a growing number of state governments have issued requirements for measures like AI impact assessments, and these can serve as valuable resources for city and county governments to draw from.
  • Ensure proper human oversight by training government employees about the risks, limitations, and appropriate uses of AI, and empowering employees to intervene when potential harms are identified.
  • Incorporate community engagement by seeking direct public feedback about the design and implementation of AI. Some cities, like Long Beach, Calif., have already developed innovative approaches to engaging community members around the use of technology by public agencies.

Looking Back at AI Guidance Across State Education Agencies and Looking Forward https://cdt.org/insights/looking-back-at-ai-guidance-across-state-education-agencies-and-looking-forward/ Tue, 15 Apr 2025 14:20:59 +0000 https://cdt.org/?post_type=insight&p=108356 This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and local AI governance efforts. Artificial intelligence (AI) has shaken up the education sector, particularly since the public release of ChatGPT and other generative AI tools. School […]

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and local AI governance efforts.

Artificial intelligence (AI) has shaken up the education sector, particularly since the public release of ChatGPT and other generative AI tools. School administrators, teachers, students, and parents have grappled with whether and how to use AI, amid fears ranging from diminished student academic integrity to more sinister concerns like the rising prevalence of deepfake non-consensual intimate imagery (NCII).

In response to AI taking classrooms by storm, the education agencies of over half of states (Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Indiana, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Utah, Virginia, Washington, West Virginia, Wisconsin, Wyoming) and Puerto Rico have released guidance for districts and schools on the responsible use of AI in public education. These pieces of guidance vary in the types of AI systems they cover, with some solely focusing on generative AI and others encompassing AI more broadly. Analysis of current state education agencies’ (SEAs’) guidance reveals four primary trends:

  1. There is alignment on the potential benefits of AI in education.
  2. Education agencies acknowledge the base risks of AI use in schools.​
  3. Across the board, states emphasize the need for human oversight and investment in AI literacy/education.
  4. As a whole, SEA guidance is missing critical topics related to AI, such as how to meaningfully engage communities on the issue and how to approach deepfakes.

Below, we detail these trends; highlight what SEAs can do to advance responsible, rights-respecting use of AI in education in light of these trends; and explore a few particularly promising examples of SEA AI guidance.

Trends in SEAs’ AI Guidance

  1. Alignment on the potential benefits of AI in education

Guidance out of SEAs consistently recognizes the following four benefits of using and teaching AI in the classroom: 

  • Personalized learning: At least 17 SEAs cite personalized learning for students as a benefit of AI in education. Colorado’s AI roadmap, for instance, states that AI can support students by “tailor[ing] educational content to match each student’s learning pace and style and helping students learn more efficiently by offering individualized resources and strategies that align with their learning goals, styles, and needs.” Another example is Arizona’s generative AI guidance document, which highlights three different methods of personalized learning opportunities for students: interactive learning, AI coaching, and writing enhancement.
  • Expediting workflow and streamlining administrative processes: Roughly 13 SEAs mention AI’s potential benefit of speeding up or even automating tasks, such as writing emails or creating presentations. Washington mentions “streamlin[ing] operational and administrative functions” as an opportunity for AI use in education, and similarly, Oklahoma states that educators can use AI to “increase efficiency and productivity” through means like automating administrative tasks, thus freeing up time to focus on teaching.
  • Preparing students for the future workforce: Around 11 states discuss teaching AI and AI literacy to students now as essential in equipping them for future career opportunities, often predicting that AI tools will revolutionize the workforce. Indiana’s AI in education guidance states that “the ability to use and understand AI effectively is critical to a future where students will enroll in higher education, enlist in the military, or seek employment in the workforce.” Similarly, Delaware’s generative AI in education guidance explains that “students who learn how AI works are better prepared for future careers in a wide range of industries,” due to developing the skills of computational thinking, analyzing data critically, and evaluating the effectiveness of solutions.
  • Making education more accessible to underrepresented groups: At least 11 of the AI in education guidance documents tout AI as making education more accessible, especially for student populations like those with disabilities and English learners. For example, California’s Department of Education and Minnesota’s Department of Education both note that AI can improve access for marginalized populations through functions such as language translation assistance and generating audio descriptions for students with disabilities. In addition to these communities of students, North Dakota’s Department of Public Instruction also mentions that AI tools can make education more accessible for students in rural areas and students from economically disadvantaged backgrounds.

  2. Acknowledgement of the base risks of AI use in schools

The majority of SEA guidance documents enumerate commonly recognized risks of AI in education, namely:

  • Privacy harms: Roughly 20 states explicitly mention privacy harms as a risk or concern related to implementation of AI in education, especially as it pertains to personally identifiable information. For example, Hawaii’s AI in education guidance geared towards students urges them to be vigilant about protecting their privacy by avoiding sharing sensitive personal information with AI tools, such as their address and phone number. Another example is Mississippi’s Department of Education, which highlights that AI can “increase data privacy and security risks depending on the [vendor’s] privacy and data sharing policies.”
  • Inaccuracy of AI-generated outputs: At least 16 SEAs express concerns about AI tools’ ability to produce accurate information, often citing the common generative AI risk of hallucination. North Dakota’s Department of Public Instruction encourages high schoolers to learn about the limitations of AI and to have a “healthy skepticism” of tools due, in part, to the risk of inaccuracies in information. Along the same lines, Wyoming’s AI in education guidance affirms that students are always responsible for checking the accuracy of AI-generated content, and that school staff and students should critically evaluate all AI outputs. 
  • Reduction of students’ critical thinking skills: Around 10 SEAs discuss the risk of students becoming overreliant on AI tools, thus diminishing their necessary critical thinking skills. Puerto Rico’s Department of Education cites the risk of students and staff becoming dependent on AI tools, which can reduce skills such as critical thinking, creativity, independent decision-making, and quality of teaching. Another example is Arizona’s generative AI guidance, stating that overreliance on AI is a risk for both students and teachers – technology cannot replace the deep knowledge teachers have of their students, nor can it “improve student learning if it is used as a crutch.”
  • Perpetuation of bias: At least 22 states cite perpetuating bias as a risk of AI tools in the classroom. One of the ethical considerations that Louisiana puts forth is “avoiding potential biases in algorithms and data” when possible and placing safeguards during AI implementation to address bias. Virginia’s AI guidelines also affirm that the use of AI in education should do no harm, including “ensuring that algorithms are not based on inherent biases that lead to discriminatory outcomes.”
  • Unreliability of AI content detection tools: Many states also express skepticism about the use of AI content detection tools by educators to combat plagiarism, in part due to their unproven efficacy and risk of erroneously flagging non-native English speakers. For example, West Virginia’s Department of Education recommends that teachers do not use AI content detectors “due to concerns about their reliability,” and North Carolina’s generative AI guidance notes that AI detection tools “often create false positives, penalizing non-native speakers and creative writing styles.”

  3. Emphasis on the need for human oversight and investment in AI literacy and education

Across the board, SEAs also stress the importance of taking a human-centric approach to AI use in the classroom – emphasizing that AI is just a tool and users are still responsible for the decisions they make or work they submit. For example, the Georgia Department of Education’s AI guidance asserts that human oversight is critical and that “final decision-making should always involve human judgment.” Similarly, the Kentucky Department of Education emphasizes how vital having a human in the loop is, especially when AI makes decisions that could have significant consequences for individuals or society.

To equip school stakeholders with the skills necessary to be responsible users of AI, many SEA guidance documents also highlight the need for AI literacy and professional development and training for teachers. Colorado’s AI roadmap frequently mentions the need for both teachers and students to be given AI literacy education so that students are prepared to enter the future “AI-driven world.” The Oregon Department of Education’s AI guidance continually mentions the need for educators to be trained to address the equity impacts of generative AI, including training on topics like combating plagiarism and spotting inaccuracies in AI outputs.

  4. Exclusion of critical topics, such as meaningful community engagement and deepfakes

Creating mechanisms for robust community engagement allows districts and schools to make more informed decisions about AI procurement to ensure systems and their implementations directly respond to the needs and concerns of those the tools impact most. Some pieces of guidance mention including parents in conversations about AI adoption and implementation, but only in a one-way exchange (e.g., the school provides parents resources/information on how AI will be used safely in the classroom). North Carolina, West Virginia, Utah, Georgia, Connecticut, and Louisiana are the only states that talk about more meaningful engagement, like obtaining parental consent for students using AI tools at school, or including parents and other external stakeholders in the policymaking and decision-making processes. For example, Connecticut’s AI guidance states that parents and community members may have questions about AI use in their children’s school, so, “Leaders may consider forming an advisory around the use of technology generally and AI tools specifically to encourage a culture of learning and transparency, as well as to tap the expertise that community experts may offer.”

One of the most pernicious uses of AI that has become a large issue in schools across the country is the creation of deepfakes and deepfake NCII. CDT research has shown that in the 2023-2024 school year, around 40 percent of students said that they knew about a deepfake depicting someone associated with their school, and 15 percent of students reported that they knew about AI-generated deepfake NCII that depicted individuals associated with their school. The harms from using AI for bullying or harassment, including the creation of deepfakes and deepfake NCII, are only mentioned in roughly four of the guidance documents – those from Utah, Washington, West Virginia, and Connecticut. Utah’s AI in education guidance expresses that schools should prohibit students from “using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation,” and in the same vein, Washington’s Office of Superintendent of Public Instruction explicitly mentions that users should never utilize AI to “create misleading or inappropriate content, take someone’s likeness without permission, or harm humans or the community at large.”

What SEAs Can Do to Advance Responsible AI Use in Education

After analyzing the strengths and weaknesses of current SEAs’ AI guidance documents, the following emerge as priorities for effective guidance:

  1. Improve the form of the guidance itself
  • Tailor guidance for specific audiences: School administrators, teachers, students, and parents each have unique roles in ensuring AI is implemented and used responsibly, thus making it necessary for guidance to clearly define the benefits, risks, risk mitigation strategies, and available resources specific to each audience. Mississippi’s guidance serves as a helpful example of segmenting recommendations for specific groups of school stakeholders (e.g., students, teachers, and school administrators).
  • Ensure guidance is accessible: SEAs should ensure that guidance documents are written in plain language so that they are more accessible generally, but also specifically for individuals with disabilities. In addition, guidance released online should be in compliance with the Web Content Accessibility Guidelines as required by Title II of the Americans with Disabilities Act.
  • Publish guidance publicly: Making guidance publicly available for all school stakeholders is key in building accountability mechanisms, strengthening community education on AI, and building trust. It can also allow other states, districts, and schools to learn from other approaches to AI policymaking, thus strengthening efforts to ensure responsible AI use in classrooms across the country.
  2. Provide additional clarity on commonly mentioned topics
  • Promote transparency and disclosure of AI use and risk management practices: Students, parents, and other community members are often unaware of the ways that AI is being used in their districts and schools. To strengthen trust and build accountability mechanisms, SEAs should encourage public sharing about the AI tools being used, including the purposes for their use and whether they process student data. On the same front, guidance should also include audience-specific best practices to ensure students’ privacy, security, and civil rights are protected.
  • Include best practices for human oversight: The majority of current SEA guidance recognizes the importance of having a “human in the loop” when it comes to AI, but few get specific on what that means in practice. Guidance should include clear, audience-specific examples to showcase how individuals can employ the most effective human oversight strategies.
  • Be specific about what should be included in AI literacy/training programs: SEAs recognize the importance of AI literacy and training for school administrators, teachers, and students, but few pieces of guidance include what topics should be covered to best equip school stakeholders with the skills needed to be responsible AI users. Guidance can identify priority areas for these AI literacy/training programs, such as training teachers on how to respond when a student is accused of plagiarism or how students can verify the output of generative AI tools.
  3. Address important topics that are missing entirely
  • Incorporate community engagement throughout the AI lifecycle: Outside of school staff, students, parents, and other community members hold vital expertise that should be considered during the AI policymaking and decision-making process, such as concerns and past experiences.
  • Articulate the risks of deepfake NCII: As previously mentioned, this topic was missing from most SEA guidance. This should be included, with a particular focus on encouraging implementation of policies that address the largest gaps: investing in prevention and supporting victims. 

Promising Examples of SEA AI Guidance

Current AI guidance from SEAs contains strengths and weaknesses, but three states stand out in particular for their detail and unique approaches:

North Carolina Department of Public Instruction

North Carolina’s generative AI guidance stands out for five key reasons:

  • Prioritizes community engagement: The guidance discusses the importance of community engagement when districts and schools are creating generative AI guidelines. It points out that having community expertise from groups like parents establishes a firm foundation for responsible generative AI implementation.
  • Encourages comprehensive AI literacy: The state encourages local education agencies (LEAs) to develop a comprehensive AI literacy program for staff to build a “common understanding and common language,” laying the groundwork for responsible use of generative AI in the classroom.
  • Provides actionable examples for school stakeholders: The guidance gives clear examples for concepts, such as how teachers can redesign assignments to combat cheating and a step-by-step academic integrity guide for students.
  • Highlights the benefit of built-for-purpose AI models: It explains that built-for-education tools, or built-for-purpose generative AI models, may be better options for districts or schools concerned with privacy.
  • Encourages transparency and accountability from generative AI vendors: The guidance provides questions for districts or schools to ask vendors when exploring various generative AI tools. One example of a question included to assess “evidence of impact” is, “Are there any examples, metrics, and/or case studies of positive impact in similar settings?”

Kentucky Department of Education

Three details of Kentucky’s AI guidance make it a strong example to highlight: 

  • Positions the SEA as a centralized resource for AI: It is one of the few pieces of guidance that positions the SEA as a resource and thought partner to districts that are creating their own AI policies. As part of the Kentucky Department of Education’s mission, the guidance states that the Department is committed to encouraging districts and schools by providing guidance and support and engaging districts and schools by fostering environments of knowledge-sharing.
  • Provides actionable steps for teachers to ensure responsible AI use: Similar to North Carolina, it provides guiding questions for teachers when considering implementing AI in the classroom. One sample question that teachers can ask is, “Am I feeding any sensitive or personal information/data to an AI that it can use or share with unauthorized people in the future?”
  • Prioritizes transparency: The guidance prioritizes transparency by encouraging districts and schools to provide understandable information to parents, teachers, and students on how an AI tool being used is making decisions or storing their data, and what avenues are available to hold systems accountable if errors arise.

Alabama State Department of Education

Alabama’s AI policy template stands out for four primary aspects:

  • Promotes consistent AI policies: Alabama takes a unique approach by creating a customizable AI policy template for LEAs to use and adapt. This allows for conceptual consistency in AI policymaking, while also leaving room for LEAs to include additional details necessary to govern AI use in their unique contexts.
  • Recognizes the importance of the procurement process: The policy template prioritizes the AI procurement process by including strong language about what details should be included in vendor contracts. It identifies two key certifications that LEAs should obtain in writing from contractors: that “the AI model has been pre-trained and no data is being used to train a model to be used in the development of a new product,” and that “they have used a human-in-the-loop strategy during development, have taken steps to minimize bias as much as possible in the data selection process and algorithm development, and the results have met the expected outcomes.”
  • Provides detailed risk management practices: It gets very specific about risk management practices that LEAs should adhere to. A first key detail included in the template is that the LEA will conduct compliance audits of data used in AI systems, and that if changes need to be made to a system, the contractor will be required to submit a corrective action plan. Another strong detail included is that the LEA must establish performance metrics to evaluate the AI system procured to ensure that the system works as intended. Finally, there is language included that, as part of their risk management framework, the LEA should comply with the National Institute of Standards and Technology’s AI Risk Management Framework (RMF), conduct annual audits to ensure they are in compliance with the RMF, identify risks and share them with vendors to create a remediation plan, and maintain a risk register for all AI systems.
  • Calls out the unique risks of facial recognition technology in schools: Alabama recognizes the specific risks of cameras with AI systems (or facial recognition technologies) on campuses and in classrooms, explicitly stating that LEAs need to be in compliance with federal and state laws.

Conclusion

In the past few years, seemingly endless resources and information have become available to education leaders, aiming to help guide AI implementation and use. Although more information can be useful for navigating this emerging technology, the sheer volume has created an overwhelming environment, making it difficult to determine what is best practice and implying that AI integration is inevitable.
As SEAs continue to develop and implement AI guidance in 2025, it is critical, first, to be clear that AI may not be the best solution to the problem an education agency or school is attempting to solve, and second, to affirm what “responsible” use of AI in education means: creating a governance framework that allows AI tools to enhance children’s educational experiences while protecting their privacy and civil rights at the same time.

Exploring the 2024 Federal AI Inventories: Key Improvements, Trends, and Continued Inconsistencies

Introduction

At the end of last year, U.S. federal agencies published the 2024 updates to their public-facing AI use case inventories. These most recent agency AI inventories mark a significant improvement from past years, providing greater transparency and unprecedented information about how one of the world’s largest governments is using AI. Most notably, the 2024 agency AI inventories include 1,400 more use cases than 2023’s, representing a 200% increase in reported use cases. 

The publication of these inventories reflects federal agencies’ continued commitment to meet their legal obligations to publicly disclose details about how they are using AI. Those requirements were first established under President Trump’s Executive Order 13960 in December 2020, and later enacted into law in 2022 with the passage of the bipartisan Advancing American AI Act. These requirements were recently reaffirmed by the Office of Management and Budget’s updated guidance on federal agencies’ use of AI, which states that agencies are required to submit and publish their AI use case inventories “at least annually.” 

Federal agencies’ AI use case inventories are more crucial now than ever, as many agencies seek to expand their uses of AI for everything from benefits administration to law enforcement. This is underscored by OMB’s directive to agencies to “accelerate the Federal use of AI,” and by reports that DOGE is using AI tools to make high-risk decisions about government operations and programs with little to no public transparency. The Trump Administration now has the opportunity to build on and improve federal agency AI use case inventories as a critical transparency measure for building public trust and confidence in the government’s growing use of this technology. 

CDT examined the 2023 federal AI inventories, and noted some of the challenges in navigating agency inventories as well as some of the common themes. The following analysis provides an update on what we shared previously, examining how federal agencies have taken steps toward improved reporting as well as detailing remaining gaps and inconsistencies that risk diminishing the public utility of agency AI inventories.

A Step in the Right Direction: Improved Reporting and Documentation

Since 2023, federal agencies have made important progress in the breadth and depth of information included in their AI inventories in several key ways. 

First, the Office of Management and Budget (OMB) created and published a more easily accessible centralized repository of all agency inventories. As CDT noted in our past analysis of agency inventories, it was previously difficult to find agency inventories in an accessible and easily navigable format, and this development is a clear improvement on this issue.

Second, the 2024 agency inventories include far greater reporting about the total number of AI use cases. Agencies reported over three times more use cases than last year, from 710 to 2,133 total use cases across the federal government. This large increase in reporting is likely due to the additional clarification provided by the updated reporting guidance published by OMB under President Biden, as well as potential increased use of AI by federal agencies. While greater agency reporting is important, this increase also creates an overwhelming amount of information that does not necessarily give the public a clear picture of which systems have the greatest impacts on rights and safety. Going forward, it will be critical for agencies to maintain this reporting standard in order to track changes in agencies’ use of AI over time.

Finally, the updated agency inventories include significantly more detail about the risks and governance of specific use cases. As a result of OMB’s reporting guidance, agency inventories generally contain more information about each use case’s stage of development, deployment, data use, and other risk management practices. However, as detailed below, this information is reported inconsistently, undermining the usefulness of this greater degree of reporting.
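
Because the consolidated repository makes these inventories machine-readable, even a short script can surface the trends and gaps discussed below. The following is a minimal sketch of such an analysis; the file name and the column names (“agency”, “rights_impacting”, “safety_impacting”) are hypothetical placeholders rather than OMB’s actual schema, and would need to be mapped to the real export before use.

```python
# Minimal sketch: tally reported AI use cases by agency and risk designation.
# The file name and column names are hypothetical placeholders, not OMB's
# actual export schema.
import csv
from collections import Counter

totals = Counter()      # all reported use cases, per agency
high_risk = Counter()   # use cases flagged as rights- or safety-impacting

with open("2024_consolidated_ai_inventory.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        agency = row.get("agency", "Unknown").strip()
        totals[agency] += 1
        rights = row.get("rights_impacting", "").strip().lower() == "yes"
        safety = row.get("safety_impacting", "").strip().lower() == "yes"
        if rights or safety:
            high_risk[agency] += 1

print(f"Total use cases reported: {sum(totals.values())}")
for agency, count in totals.most_common(10):
    print(f"{agency}: {count} use cases ({high_risk[agency]} rights- or safety-impacting)")
```

A tally like this also makes inconsistent reporting visible: blank or non-standard risk fields simply fail to register, understating an agency’s high-risk count.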

These improvements enable better understanding in two important ways: 

  1. Changes in agency AI use over time
  2. Additional detail about high-risk AI uses

Changes in agency AI use over time

CDT first published its analysis of agency AI inventories in the summer of 2023. In agencies’ 2023 inventories, we found that three common use cases included chatbots, national security-related uses, and uses related to veterans’ mental health. The updated federal agency inventories from 2024 reflect many of the same trends. National security and veterans’ health care were common uses among a broader set of high-risk systems, as discussed in greater detail in the next section. Additionally, chatbots remain commonly used by a number of agencies, ranging from internally-facing employee resource tools to externally-facing tools used to educate the public about agencies’ resources. For instance, the Department of Agriculture reported use of a chatbot to assist employees from the Farm Service Agency in searching loan handbooks, and the U.S. Patent and Trademark Office within the Department of Commerce reported use of a public-facing chatbot to help answer questions about trademarks and patents. 

As noted in the federal CIO’s analysis of the 2024 inventories, roughly 46% of all AI use cases are “mission-enabling” uses related to “administrative and IT functions.” Several common use cases emerged in this year’s inventories that reflect this trend. 

First, a number of agencies reported uses of generative AI tools and large language models (LLMs) to analyze data, summarize information, and generate text, images, and code. For instance, the Department of Commerce’s Bureau of Economic Analysis reported use of an LLM-based chatbot to support text and data analysis, and the Department of Health and Human Services’ Centers for Disease Control and Prevention reported use of an enterprise-wide generative AI tool to edit written materials. 

Second, a significant number of agencies reported the use of AI tools to manage public input and requests for information. The following seven agencies all reported the use of AI tools to categorize and process public comments and claims:

  • Department of the Interior
  • Department of Health and Human Services
  • Department of Agriculture
  • Federal Fair Housing Agency
  • Federal Reserve
  • Securities and Exchange Commission
  • Department of Justice 

And, the following nine agencies reported the use of AI systems to automate portions of the FOIA process, such as redacting personally identifiable information:

  • Department of Homeland Security
  • Department of the Interior
  • Department of Health and Human Services
  • National Science Foundation
  • Department of State
  • Equal Employment Opportunity Commission
  • National Archives and Records Administration
  • Department of Justice
  • Department of Transportation 

Additional details about high-risk AI uses

In addition to reporting about their overall AI use cases, OMB’s updated reporting guidance required agencies to indicate which uses are high-risk, which OMB defines as rights- and safety-impacting AI systems. This is an important addition to agency inventories because such high-risk uses have the greatest potential impact on individuals’ rights and liberties, including highly invasive surveillance tools and tools that determine access to a variety of government benefits and services. Across all publicly available agency AI inventories, the three most common categories of high-risk systems currently in use include:

  • Law enforcement and national security
  • Public benefits administration
  • Health and human services delivery and administration

Law enforcement and national security

The Department of Justice and Department of Homeland Security both reported a large number of high-risk law enforcement and national security-related use cases. AI use cases reported by the Department of Justice, for instance, include tools used to analyze data and video surveillance for criminal investigations, monitor vehicles and automatically read license plates, detect gunshots, predict prison populations and misconduct among incarcerated individuals, and track recidivism, among a number of other uses related to investigations, surveillance, and prison management. Such uses are concerning and in need of the utmost scrutiny because many of these technologies have proven to be frequently inaccurate, subject to inadequate scrutiny and excess reliance, and prone to lead investigators astray; in the context of law enforcement actions, these mistakes can cause severe harm to individuals’ lives and liberty. 

Given how serious these risks are, it is alarming that, while the Department of Justice reported a high number of high-risk use cases—124 of the Department’s total 240—the inventory entries for all Department of Justice use cases do not contain any information about risk mitigation or general AI governance procedures, such as information about whether or not systems were developed in-house or procured, whether systems disseminate information to the public, and which demographic variables systems use. Moreover, a number of use cases included in the Department of Justice inventory do not have a risk classification because they are designated as “too new to fully assess.” Many other agencies similarly neglected to share such information, but these omissions are especially concerning in the context of use cases that pose such a significant threat to individuals’ rights, freedom, and liberties. 

The Department of Homeland Security similarly reported a number of high-risk use cases: 34 of the Department’s 183 reported use cases. These tools span uses such as social media monitoring, border surveillance, facial recognition and other forms of biometric identification, automated device analytics, and predicting the risk that non-citizens under ICE’s management will abscond. 

Although the Department of Homeland Security’s inventory is helpful in assessing its law enforcement, immigration enforcement, and national security uses of AI, two omissions and ambiguities on facial recognition highlight the need for additional transparency. First, one use case listed in the Department’s inventory details Border Patrol use of facial recognition in the field, stating the technology is used to “facilitate biometric identification of individuals as they are encountered.” This leaves ambiguity as to whether facial recognition is used as the basis to detain individuals, or if it is merely a check to inform procedures for bringing an individual in for processing after a detainment decision has already been made. The former scenario would raise serious concerns, especially given how variable facial recognition’s accuracy is across field conditions. Second, the Department’s inventory does not include any mention of ICE using facial recognition in conjunction with DMV databases to find individuals’ identity and current address, a practice that has been publicly documented since 2019. Both of these issues highlight the need for the Department to clarify the extent to which specific AI technologies are used and to include all known use cases, even those that may have been discontinued. 

Public benefits administration

The Social Security Administration and the Department of Veterans Affairs both reported a significant number of high-risk use cases related to the administration of public benefits programs. These systems are used for a variety of purposes ranging from processing benefits claims to identifying fraudulent applications and predicting high-risk claims. The Social Security Administration, for example, reported using AI tools to analyze claims with a high likelihood of error, identify instances of overpayment within social security insurance cases, and triage review of disability benefits determinations, to name only a few. Similarly, the Veterans Benefits Administration within the Department of Veterans Affairs reported using AI to identify fraudulent changes to veterans’ benefit payments and to process and summarize claims materials.   

Health and human services

The delivery and administration of health and human services was another core area of high-risk AI use cases, with a majority housed within the Department of Veterans Affairs, the largest healthcare system in the nation, and the Department of Health and Human Services. For instance, the Office of Refugee Resettlement within the Department of Health and Human Services’ Administration for Children and Families reported use of AI tools to aid in placing and monitoring the safety of refugee children. And, the Department of Veterans Affairs reported a vast number of healthcare and human services-related uses, ranging from clinical diagnostic tools to systems used to predict suicide and overdose risks among veterans. 

Remaining Gaps and Inconsistencies

Although the 2024 agency AI inventories offer greater insight into these core high-risk use cases across the government, there is still significant room for improvement. Most notably, numerous AI inventories contained inconsistent documentation and insufficient detail about compliance with required risk management practices. 

Insufficient detail

Under OMB’s guidance on federal agencies’ use of AI, agencies were permitted to issue waivers or extensions for certain risk management practices if an agency needed additional time to fulfill a requirement, or if a specific practice would increase risk or impede agency operations. Disappointingly, public reporting about these measures was overwhelmingly scarce across all agencies. The Department of Homeland Security, for example, was the only agency in the entire federal government to include specific information about the length of time for which extensions were issued. And, the Department of Housing and Urban Development was the only agency to report information about any waivers issued, while all other agencies merely left entire sections of their inventories blank without further explanation.

Lack of consistency

Beyond these gaps, inventory reporting is incredibly variable within and between federal agencies, including different levels of detail and different approaches to reporting and categorizing the risk level of use cases. Some agencies and subcomponents within agencies completed a majority of the fields required in their inventories while others, including other subcomponents within the same agency, left many of the same fields blank. In addition, many agencies classified very similar tools as having different levels of risk. For example, the Department of Housing and Urban Development classified an AI tool used for translation as rights-impacting while the Department of Homeland Security did not classify a similar translation tool as rights- or safety-impacting.

Across these inconsistencies, one of the greatest barriers to public understanding is that agencies are not required to report information about how they determined whether or not a particular use case is high-risk. Without this information, it remains difficult for the public to understand why similar systems used by different agencies have different risk classifications or why seemingly high-risk tools (such as AI tools used to redact personally identifiable information) are not designated as such. The Department of Homeland Security, however, stands apart from other agencies on this issue. Alongside its updated AI inventory, the Department published a companion blog post that provides greater explanation about how the agency approached the completion of its updated inventory, including additional information about how the Department’s leadership made determinations about high-risk use cases and about the nature of extensions issued. This should serve as a model for other agencies to publicly communicate additional information about why and how AI governance decisions are made.

Conclusion

Agency AI use case inventories should not be an end unto themselves. Instead, they should serve as the foundation for agencies to build public accountability and trust about how they are using and governing AI tools. 

The value of these inventories as a transparency tool is further reinforced as state and local governments establish similar legal requirements for government agencies to publish AI use case inventories. At least 12 states have formally issued such requirements, through either legislation or executive order, and the updated federal inventories can serve as an important model for these and other states across the country.

OMB now has the opportunity to make significant improvements to federal agencies’ AI use case inventories heading into their 2025 updates. OMB’s recently updated guidance on federal agencies’ use of AI states that OMB will issue additional “detailed instructions to agencies regarding the inventory and its scope.” OMB should use these instructions as a tool to provide agencies with additional clarity about their obligations and to address the gaps and inconsistencies seen in the 2024 inventories. 

AI use case inventories are a critical transparency mechanism for public agencies at all levels of government. They push governments to document and disclose their myriad uses of AI, and the steps they’ve taken to mitigate risks to individuals’ rights and safety in a manner that is clear and accessible to the public. As federal agencies continue to meet their existing legal obligations, ensuring that agencies update their inventories in a timely manner and that their inventories are robust, detailed, and usable should be a key component of meeting this transparency goal.

EU AI Act Brief – Pt. 4, AI at Work

Graphic for EU AI Act Brief – Pt. 4, AI at Work. Yellow gradient background, black and dark yellow text.

AI ACT SERIES: CDT Europe has been publishing a series of blog posts and briefing papers on the EU AI Act and what it means for human rights. To receive these briefings in your inbox, don’t forget to subscribe to our AI Bulletin here. Below is the fourth post of the series where we examine the deployment of AI systems in the workplace and the EU AI Act’s specific obligations aimed at ensuring the protection of workers.

[ PDF version ]

***

In recent years, the use of algorithmic management and decision-making systems in the workplace has become increasingly widespread: a recent OECD survey found that over 70% of consulted managers reported that their firms used at least one automated tool to instruct, monitor or evaluate employees. This increase in use is understandably being met with apprehension. A survey conducted this year by the European Commission underscores workers’ overwhelming support for rules regulating the use of AI in the workplace, endorsing the European Trade Union Confederation’s previous calls for a Directive on algorithmic systems in the workplace that would specifically tackle some of the emerging challenges. 

The EU’s AI Act, the first cross-cutting landmark regulation of AI, recognises the risks involved in the deployment of AI systems in the workplace and creates specific obligations aimed at ensuring the protection of workers through prohibitions and increased safeguards, with varying levels of success. 

Building on the previous explainers in this series, this brief zooms in on the specific aspects of the AI Act that are most relevant in the context of employment and the rights of workers in light of existing EU legislation on the protection of workers. 

This explainer will focus on the obligations of employers using AI systems in the workplace. Under the AI Act taxonomy, employers using AI will qualify as deployers of an AI system, regardless of whether an AI system is developed in-house – in which case they could be considered to be both providers and deployers – or acquired for use in the workplace.

Prohibited AI systems: red lines in the employment context

In line with its risk-based approach, the AI Act prohibits several AI practices which it considers to pose an unacceptable risk – a number of which are directly or indirectly relevant to the workplace. While only the prohibition on the use of emotion recognition systems in the workplace explicitly relates to the employment context, several other prohibited AI systems have the potential to adversely impact the rights of workers, such as biometric categorisation systems or social scoring systems. We explore the prohibitions with the most salient impacts on the workforce below, in order of strength. 

Biometric categorisation – entirely prohibited

The Act prohibits AI systems which categorise individuals based on their biometric data to deduce or infer a series of attributes, including race, political opinions, and trade union membership among others (Article 5(1)(g)). This prohibition captures an employer relying on biometric categorisation to find out whether an individual belongs to a specific trade union, which could lead to negative consequences for that individual worker. This prohibition could similarly be relevant in the context of recruitment, for example if a job advertisement is only shown to certain groups of people based on their prior categorisation.

Emotion recognition – (Mostly) prohibited in employment settings

Acknowledging the well-established unreliability of emotion recognition systems (Recital 44), the AI Act prohibits the placing on the market and use of AI systems that infer emotions from individuals in the workplace, except when such systems are put in place for medical or safety reasons (Article 5(1)(f)). Emotion recognition under the Act is defined not in terms of an AI system’s capability, but in terms of its purpose, namely “identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. The Act excludes from the definition systems that recognise physical states, such as pain or fatigue (Recital 18), which therefore remain permitted.  

The guidelines on prohibited AI practices issued by the EU AI Office provide key clarifications on the scope of the prohibition. First, the guidelines apply a broad interpretation of “workplace”, clarifying that the prohibition extends to the recruitment process – in other words, job applicants or candidates are protected even in the absence of a formal employment or contractual relationship. Second, the guidelines clarify that the exception for medical and safety reasons should be interpreted narrowly, with any proposed interventions being required to be (i) responsive to an explicit need, (ii) limited to what is “strictly necessary”, including limits in time, personal application and scale, and (iii) accompanied by sufficient safeguards. Consequently, the guidelines specify that the “medical reasons” exception cannot be relied upon to legitimise  the detection of general aspects of wellbeing, including monitoring of stress levels. Likewise, “safety reasons” pertain only to the protection of life and health, and cannot be relied upon to legitimise the use of emotion recognition for the purposes of protecting property interests, for example to protect against theft or fraud. 

Despite the welcome clarifications above, the guidelines introduce carve-outs not foreseen in the text of the prohibition itself. Notably, they exclude systems deployed for personal training purposes as long as the results are not shared with persons responsible for human resources and cannot impact the work relationship of the person trained or their professional progression. This carve-out enables employers to require workers to undergo emotion recognition for training purposes – even if the results are not shared, a third-party company contracted to provide such training could inform the employer whether such training was undertaken or not. Moreover, the guidelines state that crowd-control measures in public spaces continue to be allowed even if this means that employees present in the area will be subject to emotion recognition, given that this is not the primary aim of the measure. Consequently, employees working, for example, at a sports stadium could still be lawfully subject to emotion recognition according to the guidelines.

Social scoring – prohibited on a case-by-case basis

Furthermore, the AI Act prohibits systems used for social scoring of individuals or groups based on their social behaviour or known or inferred characteristics whenever the score leads to detrimental treatment in an unrelated context or to detrimental treatment disproportionate to the social behaviour or its gravity (Article 5(1)(c)). In the workplace context, the latter is likely to be more relevant, and could include situations where a worker is fired or demoted based on their behaviour and inferred personality traits – such as perceived introversion or aloofness – where that treatment is unjustified or disproportionate to the social behaviour itself or its gravity. However, whether a poor social score results in disproportionate treatment in practice will ultimately turn on the facts of the specific case at hand. In this regard, it is crucial to note that the Act itself states that the social scoring prohibition does not apply to lawful evaluation practices carried out for a specific purpose (Recital 31), and the guidelines on prohibited practices specifically cite employee evaluations as an example of lawful evaluation practices, noting that “they are not per se prohibited, if lawful and undertaken in line with the AI Act and other applicable Union law and national law”. The guidelines therefore signal that the use of social scoring in worker evaluations is not de facto prohibited, while cautioning that it could fall foul of the AI Act if all elements of the prohibition were met. 

Real-time biometric identification – permitted

Finally, the AI Act prohibits real-time remote biometric identification specifically in the context of law enforcement (Article 5(1)(h)), implicitly acquiescing to the lawfulness of its use whenever used for purposes other than law enforcement. Such systems can therefore potentially be lawfully introduced and used by the employer to surveil workers under the AI Act, even as they might be subject to restrictions under the General Data Protection Regulation or other laws.

Limited protections from high-risk systems

The bulk of the AI Act is dedicated to regulating the development and deployment of high-risk AI systems, which are overall permitted but subject to safeguards, ranging from general notice requirements to the availability of effective remedies. 

An AI system can be considered high-risk if it is listed in Annex III of the Act. This includes systems deployed in employment and self-employment, in particular i) recruitment and selection, ii) promotions and termination, iii) allocation of tasks and monitoring and iv) evaluation of performance (Annex III, point 4).

As we have commented numerous times, one of the key shortcomings of the Act is that it allows the possibility for an AI system deployed in any of the settings described in Annex III – including those set out above – to escape the high-risk classification if it is considered that a given system does not pose a significant risk of harm to the health, safety or fundamental rights of individuals (Article 6(3)). If a system is not recognised as being high-risk by a provider, most of the AI Act obligations are inapplicable – including those pertaining to deployers. Nevertheless, providers deeming an AI system not to be high-risk despite being covered by Annex III are asked to document this assessment (Article 6(4)), and register their system in a publicly available database (Article 49(2)). The AI Act further requires deployers who are public authorities not to use a high-risk AI system if it has not been listed by a provider in the publicly available database, creating an additional safeguard for their employees (Article 26(8)), but no similar restriction operates for private sector employees.
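
To make the interaction of these provisions easier to follow, the sketch below restates the classification logic described above in schematic form. It is a simplification for orientation only, not a legal test: the category labels and function are hypothetical, and an actual assessment under Article 6(3) turns on whether the specific system poses a significant risk of harm to health, safety or fundamental rights.

```python
# Illustrative simplification of the logic described above; not a legal test.
# Category labels and inputs are hypothetical, for orientation only.

ANNEX_III_EMPLOYMENT_USES = {
    "recruitment_and_selection",
    "promotion_and_termination",
    "task_allocation_and_monitoring",
    "performance_evaluation",
}

def orient_workplace_system(intended_use: str,
                            provider_invokes_article_6_3: bool) -> str:
    """Rough orientation on how an employment-related AI system is treated."""
    if intended_use not in ANNEX_III_EMPLOYMENT_USES:
        return "Not listed in Annex III, point 4: not high-risk on this ground."
    if provider_invokes_article_6_3:
        # Provider deems the system not to pose a significant risk: it must
        # document that assessment (Article 6(4)) and register the system
        # (Article 49(2)); most deployer obligations then do not apply.
        return "Annex III use, but Article 6(3) invoked: document and register."
    # High-risk: risk management, transparency and human oversight duties apply,
    # along with the deployer's duty to notify workers and their representatives
    # before deployment (Article 26(7)).
    return "High-risk: full obligations apply, including prior notice to workers."

print(orient_workplace_system("performance_evaluation", False))
```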

The high-risk classification is essential for key fundamental rights protections to kick in. High-risk systems are subject to risk management obligations, which include the identification of risks that the high-risk AI system can pose to health, safety or fundamental rights, transparency obligations towards deployers, and guarantees relative to human oversight, among others. 

Deployers of a high-risk AI system – which includes employers – specifically have several key obligations enabling the transparency and accountability of the use of AI systems in the workplace. These obligations vary based on the identity of the deployer.

Obligations applying to all deployers

The AI Act imposes general obligations on deployers, including ensuring some level of human oversight and monitoring the functioning of an AI system. 

Where the workplace is concerned, the AI Act creates a concrete notice obligation for deployers, requiring deployers of high-risk AI systems to inform workers’ representatives and affected workers that they will be subject to an AI system prior to putting such a system in place (Article 26(7)). The recitals leave the door open to go beyond mere notice requirements, noting that the Act is without prejudice to worker consultation procedures laid down in EU law (Recital 92) – however existing laws cover consultation procedures in a patchwork manner. The Workers’ Safety and Health Directive requires consultation with workers and/or their representatives on the planning and introduction of new technologies, specifically regarding the consequences of the choice of equipment, the working conditions and the working environment for the safety and health of workers (Article 6(3)(c)). The Directive on informing and consulting employees obliges employers beyond a given size to consult with their employees on decisions likely to lead to substantial changes in work organisation, while leaving the regulation of the practical arrangements to the Member States (Article 4(2)(c)). Consequently, this Directive has the potential to cover a wider scope of AI systems with implications for workers’ rights, besides their safety and health. Nevertheless, it is unclear whether the introduction of AI would fall within Member States’ definition of “substantial changes”. 

The consultation obligation set out in Directive 2002/14/EC has been interpreted by the recently adopted Platform Work Directive to include “decisions likely to lead to the introduction of or to substantial changes in the use of automated monitoring systems or automated decision-making systems” (Article 13(2)). This Directive also regulates in detail the information digital labour platforms need to provide to platform workers, their representatives and national competent authorities in the context of automated monitoring and decision-making systems (Article 9). It is, however, important to keep in mind that this Directive only applies to work organised through a digital labour platform (Article 2(1)(a) and (b)). This includes work performed completely online, including professional tasks such as software development or translation services, or in a hybrid manner combining online communication with a real-world activity, for instance the provisions of transportation services or food delivery (see Recital 5). It therefore remains to be seen to what extent the obligation to consult under Directive 2002/14/EC also applies to regular workspaces.

From a rights perspective, consultations are only the starting point – how they are conducted, and the extent to which the results are taken on board are crucial to ensure their effectiveness. The AI Act leaves the possibility for more favourable legislation for workers in the Union or Member States open (Article 2(11)). Consequently, for instance, whether workers or their representatives have a veto over the introduction of AI systems depends on the national law and collective agreements in place.

Obligations applying to deployers who are public authorities or perform public services

The AI Act creates additional obligations for deployers who are public authorities, which are held to a higher standard. As already explored above, public authorities cannot deploy a high-risk AI system that has not been previously identified and registered as such by a provider in a public database. Further, the Act requires public authorities to conduct a fundamental rights impact assessment (FRIA) prior to the deployment of an AI system identified as high-risk in Annex III (Article 27) and the registration of a high-risk AI system being used in a publicly available database (Article 26(8)).  While these obligations are crucial in ensuring the transparency and accountability of use of an AI system in the workplace, there are important nuances to be taken into account. 

The obligation to conduct a FRIA applies not only to entities governed by public law, but also – crucially – to private entities performing public services, which the AI Act considers to cover entities providing services “linked to tasks in the public interest”, such as in the areas of education, healthcare, social services, housing, and the administration of justice (Recital 96). The list provided is non-exhaustive, opening up the possibility for entities performing other functions to be covered. FRIAs are a unique feature and perhaps the most positive aspect of the AI Act. Unfortunately, however, this obligation only applies in the narrow circumstances identified above, meaning that the majority of private employers are not required to assess the impact of the AI system’s use on the fundamental rights of their employees before deployment. Once a FRIA is conducted, the employer is under no obligation to disclose its full results beyond notifying the national regulator of the outcome, limiting the potential for employee awareness and oversight. 

Beyond conducting a FRIA, the AI Act requires public sector deployers or any entity acting on their behalf to register any high-risk AI systems used in a public database, providing basic information on the AI system in an accessible manner (Article 71), and specifically including a summary of the fundamental rights impact assessment and data protection impact assessment (Annex VIII Section C). On this basis, workers could expect to see a brief summary of any anticipated fundamental rights impacts, as well as any mitigations undertaken by their employer.

Remedies, enforcement and governance

As explained in a previous blog post, the AI Act contains only a limited number of remedies, which are solely available for individuals having been subjected to a high-risk AI system within the meaning of Annex III. These remedies consist of the right to an explanation for a decision taken based on the output of a high-risk AI system, as well as the right to lodge a complaint. 

The AI Act gives individuals subject to a decision based on a high-risk system’s output the right to a clear and meaningful explanation by the deployer of the system (Article 86), building on the right not to be subjected to automated decision-making (ADM) with legal or similar effects on individuals, laid down in the General Data Protection Regulation (GDPR). The GDPR further requires the data controller to inform individuals about the existence of automated decision-making, the logic involved as well as the significance and consequences of such processing (Articles 13(2)(f) and 14(2)(g)). Where GDPR creates a base layer of protection shielding individuals from the serious consequences of automation, the AI Act introduces an additional dimension of protection by entitling individuals to information about consequential decisions taken not solely through automated means, but nonetheless relying on its support. 

The right to a clear and meaningful explanation can be a useful tool for employees to open up the “black box” of an algorithmic management or decision-making system and understand its logic, potentially enabling them to assess whether they have been adversely affected. However, the Act is not clear whether the explanation is to be provided proactively or whether individuals are entitled to receive it only upon request. In the latter case, the burden would be on employees to remain alert to any decisions likely taken with the support of AI systems. Further, as most employers will probably struggle to fully comprehend the logic of the AI system themselves, such explanations may be inaccurate or incomplete and will therefore not always contribute to a better understanding of the situation. Lastly, the explanation – if meaningfully given – is no guarantee of corrective action, which will have to be sought outside of the scope of the AI Act. 

The AI Act creates the right for any individual to lodge a complaint before a national market surveillance authority if they consider any part of the AI Act has been infringed, regardless of whether they have been personally affected or not (Article 85).

For example, an employee could bring a complaint if:

  • They did not receive an explanation for a decision taken based on the output of a performance-monitoring AI system at work;
  • Their public sector employer deployed a high-risk AI system at the workplace without disclosing it in the public database of AI systems; or
  • Their private sector employer failed to give prior notice to the workforce about a high-risk AI system being rolled out at work.

As we have previously analysed, the right to lodge a complaint is limited as it does not include an obligation for a national authority to investigate or to respond. Nevertheless, it is an additional regulatory avenue for individuals suspecting foul play and any violation of the AI Act. 

The AI Act creates several oversight mechanisms to invite sector-specific expertise in the enforcement of the AI Act. Notably, the AI Act provides for the designation of fundamental rights authorities at national level who may request and access documentation created in observance of the obligations of the AI Act in accessible language and format to exercise their mandate (Article 77(1)). In some Member States, those authorities include institutions active in the context of workers’ rights and labour law, such as labour inspectorates or occupational health and safety institutions. These authorities can therefore ask for the necessary information on the deployed AI system to facilitate the exercise of their mandate and protect the rights of workers. 

Finally, the AI Act establishes an Advisory Forum to provide technical expertise and advice with a balanced membership from industry, start-ups, SMEs, civil society and academia. While there is no explicit involvement of social partners on it, the Forum could provide an important platform for stakeholders to specifically bring in the perspectives of workers and their rights.

Conclusion

In conclusion, while the AI Act’s minimum harmonisation approach in the context of employment is a positive step, allowing more favourable laws to apply, the regulation itself has only limited potential to protect workers’ rights – with its main contributions being the restriction of the use of emotion recognition in the workplace and the creation of notice obligations and explanation mechanisms. In particular, the obligations of employers deploying high-risk systems come with significant loopholes and flaws. Likewise, workers and their representatives have limited remedies available in the case of AI-induced harm. Potential secondary legislation could strengthen workers’ rights to be meaningfully consulted before the introduction of algorithmic management and decision-making tools. It should furthermore require all employers to consider the fundamental rights impact of those systems and ensure their transparency and explainability to workers and their representatives.

As the AI Act is gradually implemented, important aspects to monitor are the use of notice and  – where applicable under existing EU or national law – consultation mechanisms at the worker level, as well as the interpretation and operationalisation of the right to obtain an explanation. Another crucial area of inquiry will be the extent to which private entities can be considered to be providing public services on a case-by-case basis. It is therefore vital that CSOs and workers’ rights organisations are meaningfully engaged in the AI Act’s implementation and enforcement processes.

Read the PDF version.
