Workers' Rights Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/privacy-data/workers-rights/

EU AI Act Brief – Pt. 4, AI at Work
https://cdt.org/insights/eu-ai-act-brief-pt-4-ai-at-work/
Mon, 14 Apr 2025

AI ACT SERIES: CDT Europe has been publishing a series of blog posts and briefing papers on the EU AI Act and what it means for human rights. To receive these briefings in your inbox, don't forget to subscribe to our AI Bulletin here. Below is the fourth post in the series, in which we examine the deployment of AI systems in the workplace and the EU AI Act's specific obligations aimed at ensuring the protection of workers.

[ PDF version ]

***

In recent years, the use of algorithmic management and decision-making systems in the workplace has become increasingly widespread: a recent OECD survey found that over 70% of managers consulted reported that their firms used at least one automated tool to instruct, monitor or evaluate employees. This increase in use is understandably being met with apprehension. A survey conducted this year by the European Commission underscores workers' overwhelming support for rules regulating the use of AI in the workplace, echoing the European Trade Union Confederation's previous calls for a Directive on algorithmic systems in the workplace that would specifically tackle some of the emerging challenges.

The EU's AI Act, the first cross-cutting landmark regulation of AI, recognises the risks involved in deploying AI systems in the workplace and creates specific obligations aimed at protecting workers through prohibitions and increased safeguards, with varying levels of success.

Building on the previous explainers in this series, this brief zooms in on the specific aspects of the AI Act that are most relevant in the context of employment and the rights of workers in light of existing EU legislation on the protection of workers. 

This explainer will focus on the obligations of employers using AI systems in the workplace. Under the AI Act's taxonomy, employers using AI qualify as deployers of an AI system, regardless of whether the system is developed in-house – in which case they could be considered both providers and deployers – or acquired for use in the workplace.

Prohibited AI systems: red lines in the employment context

In line with its risk-based approach, the AI Act prohibits a number of AI practices which it considers to pose an unacceptable risk – several of which are directly or indirectly relevant to the workplace. While only the prohibition on the use of emotion recognition systems in the workplace explicitly relates to the employment context, several other prohibited AI practices have the potential to adversely impact the rights of workers, such as biometric categorisation systems or social scoring systems. We explore the prohibitions with the most salient impacts on the workforce below, in order of strength.

Biometric categorisation – entirely prohibited

The Act prohibits AI systems which categorise individuals based on their biometric data to deduce or infer a series of attributes, including race, political opinions, and trade union membership, among others (Article 5(1)(g)). This prohibition captures an employer relying on biometric categorisation to find out whether an individual belongs to a specific trade union, which could lead to negative consequences for that worker. It could similarly be relevant in the context of recruitment, for example if a job advertisement is only shown to certain groups of people based on their prior categorisation.

Emotion recognition – (Mostly) prohibited in employment settings

Acknowledging the well-established unreliability of emotion recognition systems (Recital 44), the AI Act prohibits the placing on the market and use of AI systems that infer emotions from individuals in the workplace, except when such systems are put in place for medical or safety reasons (Article 5(1)(f)). Emotion recognition under the Act is defined not in terms of an AI system's capability, but in terms of its purpose, namely "identifying or inferring emotions or intentions of natural persons on the basis of their biometric data". The Act excludes from the definition systems that recognise physical states, such as pain or fatigue (Recital 18), which remain permitted.

The guidelines on prohibited AI practices issued by the EU AI Office provide key clarifications on the scope of the prohibition. First, the guidelines apply a broad interpretation of "workplace", clarifying that the prohibition extends to the recruitment process – in other words, job applicants and candidates are protected even in the absence of a formal employment or contractual relationship. Second, the guidelines clarify that the exception for medical and safety reasons should be interpreted narrowly, requiring any proposed intervention to be (i) responsive to an explicit need, (ii) limited to what is "strictly necessary", including limits in time, personal application and scale, and (iii) accompanied by sufficient safeguards. Consequently, the guidelines specify that the "medical reasons" exception cannot be relied upon to legitimise the detection of general aspects of wellbeing, including the monitoring of stress levels. Likewise, "safety reasons" pertain only to the protection of life and health, and cannot be relied upon to legitimise the use of emotion recognition to protect property interests, for example against theft or fraud.

Despite the welcome clarifications above, the guidelines introduce carve-outs not foreseen in the text of the prohibition itself. Notably, they exclude systems deployed for personal training purposes as long as the results are not shared with persons responsible for human resources and cannot impact the work relationship of the person trained or their professional progression. This carve-out enables employers to require workers to undergo emotion recognition for training purposes – and even if the results are not shared, a third-party company contracted to provide such training could inform the employer whether the training was undertaken or not. Moreover, the guidelines state that crowd-control measures in public spaces continue to be allowed even if this means that employees present in the area will be subject to emotion recognition, given that this is not the primary aim of the measure. Consequently, employees working, for example, at a sports stadium could still be lawfully subject to emotion recognition according to the guidelines.

Social scoring – prohibited on a case-by-case basis

Furthermore, the AI Act prohibits systems used for social scoring of individuals or groups based on their social behaviour or known or inferred characteristics whenever the score leads to detrimental treatment in an unrelated context or to detrimental treatment disproportionate to the social behaviour or its gravity (Article 5(1)(c)). In the workplace context, the latter is likely to be more relevant, and could include situations where a worker is fired or demoted based on their behaviour and inferred personality traits – such as perceived introversion or aloofness – where that treatment is unjustified or disproportionate to the social behaviour itself or its gravity. However, whether a poor social score in practice results in disproportionate treatment will ultimately turn on the facts of the specific case at hand. In this regard, it is crucial to bear in mind that the Act itself notes that the social scoring prohibition does not apply to lawful evaluation practices carried out for a specific purpose (Recital 31), and the guidelines on prohibited practices cite employee evaluations as an example of such lawful evaluation practices, noting that "they are not per se prohibited, if lawful and undertaken in line with the AI Act and other applicable Union law and national law". The guidelines therefore signal that the use of social scoring in worker evaluations is not de facto prohibited, while cautioning that it could fall foul of the AI Act if all elements of the prohibition were met.

Real-time biometric identification – permitted

Finally, the AI Act prohibits real-time remote biometric identification only in the context of law enforcement (Article 5(1)(h)), implicitly accepting the lawfulness of its use for other purposes. Such systems can therefore potentially be introduced and used lawfully by an employer to surveil workers under the AI Act, even though they may be subject to restrictions under the General Data Protection Regulation or other laws.

Limited protections from high-risk systems

The bulk of the AI Act is dedicated to regulating the development and deployment of high-risk AI systems, which are overall permitted but subject to safeguards, ranging from general notice requirements to the availability of effective remedies. 

An AI system can be considered high-risk under the Act if it falls within a use case listed in Annex III. This includes systems deployed in employment and self-employment, in particular i) recruitment and selection, ii) promotions and termination, iii) allocation of tasks and monitoring, and iv) evaluation of performance (Annex III, point 4).

As we have commented numerous times, one of the key shortcomings of the Act is that it allows an AI system deployed in any of the settings described in Annex III – including those set out above – to escape the high-risk classification if the provider considers that the system does not pose a significant risk of harm to the health, safety or fundamental rights of individuals (Article 6(3)). If a system is not recognised as high-risk by a provider, most of the AI Act's obligations are inapplicable – including those pertaining to deployers. Nevertheless, providers deeming an AI system not to be high-risk despite being covered by Annex III must document this assessment (Article 6(4)) and register their system in a publicly available database (Article 49(2)). The AI Act further requires deployers who are public authorities not to use a high-risk AI system if it has not been listed by a provider in the publicly available database, creating an additional safeguard for their employees (Article 26(8)), but no similar restriction protects private sector employees.

The high-risk classification is essential for key fundamental rights protections to kick in. High-risk systems are subject, among others, to risk management obligations – which include identifying the risks the system can pose to health, safety or fundamental rights – as well as transparency obligations towards deployers and guarantees of human oversight.

Deployers of a high-risk AI system – which includes employers – specifically have several key obligations enabling the transparency and accountability of the use of AI systems in the workplace. These obligations vary based on the identity of the deployer.

Obligations applying to all deployers

The AI Act imposes general obligations on deployers, including ensuring some level of human oversight and monitoring the functioning of an AI system. 

Where the workplace is concerned, the AI Act creates a concrete notice obligation for deployers, requiring deployers of high-risk AI systems to inform workers' representatives and affected workers that they will be subject to an AI system prior to putting such a system in place (Article 26(7)). The recitals leave the door open to go beyond mere notice requirements, noting that the Act is without prejudice to worker consultation procedures laid down in EU law (Recital 92) – however, existing laws cover consultation procedures only in a patchwork manner. The Workers' Safety and Health Directive requires consultation with workers and/or their representatives on the planning and introduction of new technologies, specifically regarding the consequences of the choice of equipment, the working conditions and the working environment for the safety and health of workers (Article 6(3)(c)). The Directive on informing and consulting employees obliges employers above a given size to consult with their employees on decisions likely to lead to substantial changes in work organisation, while leaving the regulation of the practical arrangements to the Member States (Article 4(2)(c)). Consequently, this Directive has the potential to cover a wider range of AI systems with implications for workers' rights beyond their safety and health. Nevertheless, it is unclear whether the introduction of AI would fall within Member States' definitions of "substantial changes".

The consultation obligation set out in Directive 2002/14/EC has been interpreted by the recently adopted Platform Work Directive to include "decisions likely to lead to the introduction of or to substantial changes in the use of automated monitoring systems or automated decision-making systems" (Article 13(2)). This Directive also regulates in detail the information digital labour platforms need to provide to platform workers, their representatives and national competent authorities in the context of automated monitoring and decision-making systems (Article 9). It is, however, important to keep in mind that this Directive only applies to work organised through a digital labour platform (Article 2(1)(a) and (b)). This includes work performed completely online, including professional tasks such as software development or translation services, or in a hybrid manner combining online communication with a real-world activity, for instance the provision of transportation services or food delivery (see Recital 5). It therefore remains to be seen to what extent the obligation to consult under Directive 2002/14/EC also applies to regular workplaces.

From a rights perspective, consultations are only the starting point – how they are conducted, and the extent to which their results are taken on board, are crucial to their effectiveness. The AI Act leaves open the possibility of Union or Member State legislation that is more favourable to workers (Article 2(11)). Consequently, whether workers or their representatives have a veto over the introduction of AI systems depends on the national law and collective agreements in place.

Obligations applying to deployers who are public authorities or perform public services

The AI Act creates additional obligations for deployers who are public authorities, which are held to a higher standard. As already explored above, public authorities cannot deploy a high-risk AI system that has not been previously identified and registered as such by a provider in a public database. Further, the Act requires public authorities to conduct a fundamental rights impact assessment (FRIA) prior to deploying an AI system identified as high-risk in Annex III (Article 27) and to register any high-risk AI system they use in a publicly available database (Article 26(8)). While these obligations are crucial in ensuring the transparency and accountability of the use of AI systems in the workplace, there are important nuances to take into account.

The obligation to conduct a FRIA applies not only to entities governed by public law, but also – crucially – to private entities performing public services, which the AI Act considers to cover entities providing services "linked to tasks in the public interest", such as in the areas of education, healthcare, social services, housing, and the administration of justice (Recital 96). The list provided is non-exhaustive, opening up the possibility for entities performing other functions to be covered. FRIAs are a unique feature and perhaps the most positive aspect of the AI Act. Unfortunately, however, this obligation applies only in the narrow circumstances identified above, meaning that the majority of private employers are not required to assess the impact of an AI system's use on the fundamental rights of their employees before deployment. Once a FRIA is conducted, the employer has no obligation to disclose its full results beyond notifying the national regulator of the outcome, limiting the potential for employee awareness and oversight.

Beyond conducting a FRIA, the AI Act requires public sector deployers or any entity acting on their behalf to register any high-risk AI systems used in a public database, providing basic information on the AI system in an accessible manner (Article 71), and specifically including a summary of the fundamental rights impact assessment and data protection impact assessment (Annex VIII Section C). On this basis, workers could expect to see a brief summary of any anticipated fundamental rights impacts, as well as any mitigations undertaken by their employer.

Remedies, enforcement and governance

As explained in a previous blog post, the AI Act contains only a limited number of remedies, which are available only to individuals who have been subjected to a high-risk AI system within the meaning of Annex III. These remedies consist of the right to an explanation of a decision taken based on the output of a high-risk AI system, as well as the right to lodge a complaint.

The AI Act gives individuals subject to a decision based on a high-risk system's output the right to a clear and meaningful explanation by the deployer of the system (Article 86), building on the right not to be subjected to automated decision-making (ADM) with legal or similar effects on individuals, laid down in the General Data Protection Regulation (GDPR). The GDPR further requires the data controller to inform individuals about the existence of automated decision-making, the logic involved, and the significance and consequences of such processing (Articles 13(2)(f) and 14(2)(g)). Where the GDPR creates a base layer of protection shielding individuals from the serious consequences of automation, the AI Act introduces an additional dimension of protection by entitling individuals to information about consequential decisions that are not taken solely through automated means but nonetheless rely on AI support.

The right to a clear and meaningful explanation can be a useful tool for employees to open up the "black box" of an algorithmic management or decision-making system and understand its logic, potentially enabling them to assess whether they have been adversely affected. However, the Act does not make clear whether the explanation is to be provided proactively or whether individuals are entitled to receive it only upon request. In the latter case, the burden would be on employees to remain alert to any decisions likely taken with the support of AI systems. Further, as most employers will probably struggle to fully comprehend the logic of the AI system themselves, such explanations may be inaccurate or incomplete and will therefore not always contribute to a better understanding of the situation. Lastly, the explanation – even if meaningfully given – is no guarantee of corrective action, which will have to be sought outside of the scope of the AI Act.

The AI Act creates the right for any individual to lodge a complaint before a national market surveillance authority if they consider any part of the AI Act has been infringed, regardless of whether they have been personally affected or not (Article 85).

For example, an employee could bring a complaint if:

  • They did not receive an explanation for a decision taken based on the output of a performance-monitoring AI system at work;
  • Their public sector employer deployed a high-risk AI system at the workplace without disclosing it in the public database of AI systems; or
  • Their private sector employer failed to give prior notice to the workforce about a high-risk AI system being rolled out at work.

As we have previously analysed, the right to lodge a complaint is limited, as it does not include an obligation for a national authority to investigate or to respond. Nevertheless, it is an additional regulatory avenue for individuals suspecting a violation of the AI Act.

The AI Act creates several oversight mechanisms to bring sector-specific expertise into its enforcement. Notably, the AI Act provides for the designation of fundamental rights authorities at national level, which may request and access, in accessible language and format, documentation created under the AI Act's obligations in order to exercise their mandate (Article 77(1)). In some Member States, those authorities include institutions active in the context of workers' rights and labour law, such as labour inspectorates or occupational health and safety institutions. These authorities can therefore ask for the necessary information on a deployed AI system to facilitate the exercise of their mandate and protect the rights of workers.

Finally, the AI Act establishes an Advisory Forum to provide technical expertise and advice, with a balanced membership from industry, start-ups, SMEs, civil society and academia. While social partners are not explicitly represented on it, the Forum could provide an important platform for stakeholders to bring in the perspectives of workers and their rights.

Conclusion

In conclusion, while the AI Act's minimum harmonisation approach in the context of employment is a positive step, allowing more favourable laws to apply, the regulation itself has only limited potential to protect workers' rights – its main contributions being the restriction of emotion recognition in the workplace and the creation of notice obligations and explanation mechanisms. In particular, the obligations of employers deploying high-risk systems come with significant loopholes and flaws. Likewise, workers and their representatives have limited remedies available in the case of AI-induced harm. Potential secondary legislation could strengthen workers' rights to be meaningfully consulted before the introduction of algorithmic management and decision-making tools. It should furthermore require all employers to consider the fundamental rights impact of those systems and ensure their transparency and explainability to workers and their representatives.

As the AI Act is gradually implemented, important aspects to monitor are the use of notice and – where applicable under existing EU or national law – consultation mechanisms at the worker level, as well as the interpretation and operationalisation of the right to obtain an explanation. Another crucial area of inquiry will be the extent to which private entities can be considered, on a case-by-case basis, to be providing public services. It is therefore vital that CSOs and workers' rights organisations are meaningfully engaged in the AI Act's implementation and enforcement processes.

Read the PDF version.

What Do Workers Want? A CDT/Coworker Deliberative Poll on Workplace Surveillance and Datafication
https://cdt.org/insights/what-do-workers-want-a-cdt-coworker-deliberative-poll-on-workplace-surveillance-and-datafication/
Thu, 06 Mar 2025

This report is also authored by Wilneida Negrón and Lindsey Schwartz.

Graphic for a CDT and Coworker.org report, entitled "What Do Workers Want?" Illustration of 4 panels of a multi-racial working class; the top left shows a bearded Sikh man driving a car; top right shows an older Asian woman on a laptop; bottom right shows a Black man holding a scanner at a packing warehouse; bottom left shows a Latina woman wearing a call center headset; a circular vignette of a security camera is at the center.

Executive Summary

In today’s rapidly evolving workplace, new technology holds the promise of increasing productivity and giving managers and employees alike improved ways to measure impact, but the proliferation of invasive monitoring and data collection—the “datafication” of workers—poses significant risks to workers’ health, safety, privacy, and rights. The Center for Democracy & Technology (CDT) and Coworker.org collaborated on a project that explored workers’ perspectives on workplace surveillance through a unique Deliberative Polling approach developed by Professor James Fishkin, founder of Stanford University’s Deliberative Democracy Lab. This process empowers workers to articulate their needs and preferences regarding workplace data collection practices by providing them with resources to educate themselves on the subject and engage each other in informed discussions about this critical issue.

The goals of this project were to:

  1. Identify the rules and standards regarding workplace datafication that employees support when given the opportunity to learn about and discuss the topic in a neutral setting.
  2. Assess how the deliberative process influences workers’ views and priorities regarding datafication.
  3. Evaluate whether increased access to information and peer discussion enhances worker engagement in advocacy for their rights.

Methodology

The project employed a Deliberative Polling methodology to assess workers’ opinions on workplace surveillance. It began with a national public opinion poll of 1,800 workers to identify the types of surveillance that most concern workers (which would then serve as the topics of the Deliberative Poll) and test argument persuasiveness. This was followed by the development of policy proposals and briefing materials containing background information on each topic as well as arguments for and against each proposal. These proposals and briefing materials were refined through a pilot session with 10 workers.

The main Deliberative Poll involved 186 workers who participated in the deliberations (22 in person and 164 online), 170 of whom completed the final post-deliberation survey. The deliberations consisted of three sessions focusing on four topics: monitoring work-from-home employees, location tracking, productivity monitoring, and data rights. Participants discussed proposals in small groups and posed questions to a balanced panel of experts. The process aimed to measure both participants’ final opinions on each proposal as well as shifts in opinion resulting from their informed deliberations.
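
To illustrate how a pre/post-deliberation shift in support of the kind described above can be tabulated, here is a minimal Python sketch. The proposal and the responses below are invented for illustration only; this is not the study's actual analysis code or data.

# Illustrative only: hypothetical responses, not data from the CDT/Coworker poll.
def support_rate(responses):
    """Share of respondents indicating support (True) for a proposal."""
    return sum(responses) / len(responses)

# Hypothetical pre- and post-deliberation answers from the same ten respondents
# to one proposal (e.g. "limit location tracking"); True = support.
pre  = [True, False, False, True, False, True, False, False, True, False]
post = [True, True, False, True, True, True, False, True, True, False]

print(f"Support before deliberation: {support_rate(pre):.0%}")   # 40%
print(f"Support after deliberation:  {support_rate(post):.0%}")  # 70%
print(f"Shift: {support_rate(post) - support_rate(pre):+.0%}")   # +30%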

Results

In the final post-deliberation survey, respondents showed strong support for proposals that would grant workers a right to greater transparency regarding employers’ surveillance and data collection practices, prohibit off-clock surveillance, limit location tracking, and bar employers from engaging in productivity monitoring that would harm workers’ mental or physical health.

Respondents' views appeared to shift in a number of ways between the pre- and post-deliberation surveys. After deliberations, participants became less likely to support proposals that the covered forms of surveillance should "always" or "never" be allowed and generally became more likely to support more nuanced proposals. Additionally, the data rights proposals and the proposals to prohibit productivity monitoring that harms workers' mental or physical health gained significant additional support in the post-deliberation survey.

The deliberation participants also answered a series of general questions gauging their sentiments and beliefs about technology and the workplace. Here too, there were noticeable shifts in the final post-deliberation survey. Specifically, after deliberations, workers expressed both greater interest and greater confidence in their ability to influence their employers' actions — a promising finding suggesting that the very act of discussing workplace policy issues leaves workers better positioned to engage and organize.

Recommendations and conclusion

Moving forward, researchers should explore deliberation-centered methodologies further, both to determine workers’ organic views on key workplace policy issues and as a potential engagement and organizing tool. Policymakers should recognize the urgent need for a regulatory framework addressing the datafication of workers. By centering the voices of employees in this discourse, we can better protect their rights and foster workplaces and labor markets that promote dignity, agency, and respect.

Read the full report.

Joint Civil Society Statement on Colorado Senate Bill 24-205
https://cdt.org/insights/joint-civil-society-statement-on-colorado-senate-bill-24-205/
Tue, 10 Dec 2024

Companies increasingly use AI-driven decision systems to make crucial decisions that alter the course of Coloradans’ lives and careers, often without their knowledge, despite ample evidence that many such systems are deeply biased and flawed. Colorado Senate Bill 24-205 represents a welcome step toward much-needed transparency and accountability for such systems. However, more is needed to protect Colorado’s consumers and workers.

The undersigned labor, consumer, civil rights, privacy, and other public interest groups urge policymakers to maintain and strengthen the law’s protections. It’s also critical that the law builds on—and does not undermine—existing civil rights and consumer protections under Colorado law.

We urge policymakers to retain the bill’s strongest existing provisions, including:

  • Broad definition of covered systems, making it harder for companies to evade the law;
  • Notice to consumers subjected to AI-driven decisions about the use and purpose of the system;
  • Impact assessments that test AI decision systems for discrimination risks and document the AI decision system’s purpose, intended uses, data used and produced, performance, and post-deployment monitoring;
  • A right to an explanation of the principal reasons behind decisions and a right to appeal such decisions to a human decision-maker; and
  • Giving the Attorney General authority to issue rules interpreting and clarifying the law.

Policymakers should also strengthen the law and further protect Coloradans by:

  • Building on existing civil rights protections by prohibiting the sale or use of discriminatory AI decision systems;
  • Expanding the law’s transparency provisions so that consumers understand why companies are using AI decision systems and what and how these tools measure, including requiring explanations to be actionable;
  • Strengthening impact assessment provisions to require companies to test AI decision systems for validity and the risk that they violate consumer protection, labor, civil rights, and other laws;
  • Eliminating the many loopholes that exclude numerous consumers, workers, and companies from the law’s protections and obligations, as well as unnecessary and overbroad rebuttable presumptions and affirmative defenses that allow companies to escape accountability; and 
  • Strengthening enforcement by giving consumers and local district attorneys the right to seek redress in court when companies fail to comply with the law.

Colorado has an opportunity to lead the nation with innovative policy that places common-sense guardrails on the development and use of AI and automated decision-making systems. We are pleased to see Colorado taking steps toward careful AI regulation, but with other states looking to Colorado’s law as a model for their own AI laws, it is essential that stakeholder collaboration continues. We are eager to continue working with lawmakers to craft AI legislation that both protects the rights and privacy of Colorado residents and encourages technological innovation.

Signed:

ACLU of Colorado

AFT-Colorado

Colorado AFL-CIO

Colorado Fiscal Institute

Teamsters Local 455

Towards Justice

American Association of People with Disabilities

Center for American Progress

Center for Democracy & Technology

Consumer Federation of America 

Consumer Reports

Electronic Privacy Information Center

Tech Equity Action

Screened Out: The Impact of Digitized Hiring Assessments on Disabled Workers
https://cdt.org/insights/screened-out-the-impact-of-digitized-hiring-assessments-on-disabled-workers/
Wed, 20 Nov 2024

This report is also authored by Henry Claypool and Wilneida Negrón.

Graphic for CDT Research report, entitled "Screened Out: The Impact of Digitized Hiring Assessments on Disabled Workers." A multi-panel color illustration includes a wheelchair user typing, a person with headphones facing an error on a laptop, a close-up of a person with a hearing aid, and a person with glasses. Geometric shapes and icons connect these panels, highlighting hiring assessments and discrimination disabled people face.

Companies have incorporated hiring technologies, including AI-powered assessments and other automated employment decision systems (AEDSs), into various stages of the hiring process across a wide range of industries. While proponents argue that these technologies can aid in identifying suitable candidates and reducing bias, researchers and advocates have identified multiple ethical and legal risks that these technologies present, including discriminatory impacts on members of marginalized groups. This study examines some of the impacts of modern computer-based assessments (“digitized assessments”) — the kinds of assessments commonly used by employers as part of their hiring processes — on disabled job applicants.

The findings and insights in this report aim to inform employers, policymakers, advocates, and researchers about some of the validity and ethical considerations surrounding the use of digitized assessments, with a specific focus on impacts on people with disabilities.

Methodology

We utilized a human-centered qualitative approach to investigate and document the experiences and concerns of a diverse group of participants with disabilities. Participants were asked to complete a series of digitized assessments, including a personality test, cognitive tests, and an AI-scored video interview, and were interviewed about their experiences. Our study included participants who identified as low vision, people with brain injuries, autistic people, D/deaf and/or hard of hearing people, those with intellectual or developmental disabilities, and those with mobility differences. We also included participants with diverse demographic backgrounds in terms of age, race, and gender identity.

The study focused on two distinct groups: (1) individuals who are currently working in, or intend to seek, hourly jobs, and (2) attorneys and law students who have sought or are likely to seek lawyer jobs. By studying these groups, we aimed to understand potential impacts of digitized assessments on workers with roles that require different levels of education and experience.

Findings

Disabled workers felt discriminated against and believed the assessments presented a variety of accessibility barriers. Contrary to the claims made by developers and vendors of hiring technologies that these kinds of assessments can reduce bias, participants commonly expressed that the design and use of assessments were discriminatory and perpetuated biases (“They’re consciously using these tests knowing that people with disabilities aren’t going to do well on them, and are going to get self-screened out”).

Participants felt that the barriers they grappled with stemmed from assumptions made by the designers in how assessments were presented, designed, or even accessed. Some viewed these design choices as potentially reflective of an intent to discriminate against disabled workers. One participant stated that it “felt like it was a test of, ‘how disabled are you?’” Not only that, participants generally viewed the assessments as ineffective for measuring job-relevant skills and abilities.

Participants were split on whether these digitized assessments could be modified in a way that would make them more fair and effective. Some participants believed the ability to engage in parts of the hiring process remotely and asynchronously could be useful during particular stages, if combined with human supervision and additional safeguards. Most, however, did not believe that it would be possible to overcome the inherent biases against individuals with disabilities in how assessments are used and designed. As one participant put it, "We, as very flawed humans, are creating even more flawed tools and then trying to say that they are, in fact, reducing bias when they're only confirming our own already held biases."

Given the findings of this study, employers and developers of digitized assessments need to re-evaluate the design and implementation of assessments in order to prevent the perpetuation of biases and discrimination against disabled workers. There is a clear need for an inclusive approach in the development of hiring technologies that accounts for the diverse needs of all potential candidates, including individuals with disabilities.

Recommendations

Below we highlight our main recommendations for developers and deployers of digitized assessments, based on participants’ observations and experiences. Given the harm these technologies may introduce, some of which may be intractable, the following recommendations set out to reduce harm rather than eliminate it altogether.

Necessity of Assessments: Employers should first evaluate whether a digitized assessment is necessary, and whether there are alternative methods for measuring the desired skills with a lower risk of discrimination. If employers choose to use digitized assessments, they should ensure that the assessments used are fair and effective: that they measure skills or abilities directly relevant to the specific job, and that they do so accurately.

Accessibility: Employers must ensure assessments adhere to existing accessibility guidelines, like the Web Content Accessibility Guidelines (WCAG) or initiatives of the Partnership on Employment and Accessible Technologies (PEAT), and that the selected assessments accommodate and correctly assess the skills of workers with various disabilities.

Implementation: For effective, fair, and accessible assessments, employers can take additional steps to potentially reduce biases by implementing significant human oversight in all assessment processes, using assessments to supplement, not replace, comprehensive candidate evaluations, and being transparent about when and how assessments are used.

Read the report.

Read the plain language version.

Report – Regulating Robo-Bosses: Surveying the Civil Rights Policy Landscape for Automated Employment Decision Systems
https://cdt.org/insights/report-regulating-robo-bosses-surveying-the-civil-rights-policy-landscape-for-automated-employment-decision-systems/
Tue, 23 Jul 2024

Introduction

In December 2023, the Center for Democracy & Technology (CDT), in collaboration with a broad range of national civil rights and workers’ rights organizations, published the Civil Rights Standards for 21st Century Employment Selection Procedures (the “Civil Rights Standards” or simply the “Standards”), a detailed set of policy recommendations regarding the methods and tools that today’s employers use to recruit and assess workers.[1] The key impetus for the Civil Rights Standards was employers’ increasing use of automated employment decision systems (AEDSs) to evaluate employees and make employment decisions.

The rise of AEDSs underscores the degree to which antidiscrimination regulation has failed to keep pace with companies’ recruitment, hiring, and personnel management practices in recent decades. Workers subjected to AEDSs are at an extreme information disadvantage, with little insight into how they are assessed or whether they face a risk of an unfair or discriminatory decision. This is deeply concerning because there is scant evidence that AEDSs are more effective than simpler and more transparent employment assessments, but considerable evidence that AEDSs can discriminate against candidates from protected groups.[2]

The Standards sought to provide advocates, policymakers, and workers alike with a roadmap on how to address these risks. The Standards made policy recommendations in five categories:[3]

  • Notice and explanation: Require companies to provide concise disclosures to candidates about the key features of any AEDS they use, publish detailed summaries of all AEDS audits, and maintain records to ensure relevant materials are available if an AEDS leads to discrimination.
  • Auditing: Ensure that independent auditors test AEDSs for both discrimination risks and job-relatedness both before deployment and at least annually thereafter.
  • Nondiscrimination: Require employers and vendors to take proactive steps to minimize potential causes of discriminatory outcomes in their selection tools, use the least discriminatory tools available, explore accommodations and more accessible alternative selection methods, and refrain from using certain tools that pose a particularly high risk of discrimination.
  • Job-relatedness: Require companies to conduct detailed validity studies to ensure a selection procedure is the least discriminatory valid method of measuring a candidate’s ability to perform essential job functions.
  • Oversight and accountability: Allow candidates to raise concerns about a selection procedure, appeal its results, or opt out of its use altogether; and ensure robust enforcement for discriminatory AEDSs by making vendors and employers jointly responsible for resulting harms.

These recommendations provided “a concrete alternative to recent proposals that would set very weak notice, audit, and fairness standards for automated tools.”

The pace of AEDS legislation and policy proposals continued to increase in 2023 and into 2024. Nationally, at least eleven bills were pending at the end of 2023 purporting to target AEDS-driven discrimination. At least seven more bills across six states followed in the first weeks of 2024.

Although the increased legislative attention to AEDSs is a welcome development, much of the proposed legislation falls short of what is needed to address the risks that AEDSs pose. This report surveys the current policy landscape in the year-plus since the Standards’ publication by analyzing legislation introduced or enacted in the subsequent months. Its goal is to help policymakers and advocates understand the structural approaches to AEDS regulation embodied by current legislation and evaluate how they do and do not incorporate the Standards’ recommendations. That evaluation, in turn, provides a roadmap for needed improvements in legislation to help prevent AEDSs from giving rise to increased discrimination in employment decisions.


[1] Civil Rights Standards for 21st Century Employment Selection Procedures (2022), https://cdt.org/wp-content/uploads/2022/12/updated-2022-12-05-Civil-Rights-Standards-for-21st-Century-Employment-Selection-Procedures.pdf (Civil Rights Standards). [https://perma.cc/D26W-LZNV]

[2] See generally, e.g., Hilke Schellmann, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now (2024); Olga Akselrod & Cody Venzke, How Artificial Intelligence Might Prevent You From Getting Hired, ACLU, Aug. 23, 2023, https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired [https://perma.cc/MP65-AZDN]; Lydia X.Z. Brown, et al., Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?, CDT, Dec. 3, 2020, https://cdt.org/insights/report-algorithm-driven-hiring-tools-innovative-recruitment-or-expedited-disability-discrimination/ [https://perma.cc/MBW7-YJC6]; Jeffrey Dastin, Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters, October 10, 2018, https://www.reuters.com/article/idUSKCN1MK0AG/. [https://perma.cc/EHR9-TD3Y]

[3] These categories come from the Civil Rights Principles for Hiring Assessment Technologies (Civil Rights Principles), which The Leadership Conference on Civil and Human Rights published in 2020 with input and endorsements from CDT and more than 20 other civil rights and workers’ rights organizations. Civil Rights Principles for Hiring Assessment Technologies (2020), https://civilrights.org/resource/civil-rights-principles-for-hiring-assessment-technologies/. [https://perma.cc/Q2LC-WPXE]

Read the full report.

Colorado's Artificial Intelligence Act is a Step in the Right Direction. It Must be Strengthened, Not Weakened.
https://cdt.org/insights/colorados-artificial-intelligence-act-is-a-step-in-the-right-direction-it-must-be-strengthened-not-weakened/
Wed, 22 May 2024

On Friday, Colorado broke new ground in the fight to bring transparency and accountability to AI-driven decision systems, which increasingly make crucial decisions that can alter the course of our lives and careers. 

A newly passed law, Colorado Senate Bill 24-205 (SB 205), will equip Coloradans with some basic information and safeguards when companies use AI to make high-stakes decisions about them, such as whether a worker gets a job, a consumer qualifies for a loan or lease, a patient gets medical coverage, or a student is admitted to a school. Right now, companies often make AI-driven decisions in these crucial spheres without informing consumers or workers that they are doing so. This lack of transparency obscures errors and biases, and prevents civil rights and consumer protection laws from working effectively; Colorado’s new law is thus an important basic step for AI accountability.

Industry groups conducted a concerted veto campaign the week before Governor Jared Polis signed the bill, and the Governor’s signing statement repeated some of those criticisms, suggesting that the bill should be watered down before it goes into effect. That would be a mistake. For the reasons outlined below, SB 205 is both manageable for companies and an important first step to help workers and consumers understand how AI systems might affect them. While the bill falls short of the protections that CDT and other public interest groups have called for, Colorado legislators should defend their work and ensure the full adoption of the law.

1. What SB 205 does

SB 205 ensures consumers receive basic but essential disclosures about AI-driven decisions

SB 205’s disclosure and explanation provisions would help alleviate the near-monopoly on information related to AI-driven decisions that businesses currently enjoy, and often exploit, to the detriment of consumers and workers. AI developers would have to publish a very basic statement summarizing what AI systems they sell and how they test them for bias. Companies that use AI systems to help decide whether a person gets a job or a house would have to tell them they are using AI and what its purpose is, provide a “plain language description” of it, give the person a basic explanation if it rejects them, offer them an opportunity to correct incorrect personal information, and in some cases, appeal.

Finally, companies whose AI systems interact with consumers (including, but not limited to, the AI decision systems that are the subject of the rest of the bill) must “ensure the disclosure to each consumer who interacts with the AI system that the consumer is interacting with an AI system.” In other words, companies must tell you at the outset when you are speaking with a machine rather than a human.

SB 205 requires companies to do simple due diligence before marketing or using AI systems that can alter the course of consumers’ lives and careers

Under SB 205, deployers of AI decision systems would have to conduct annual impact assessments, including assessing whether an AI decision system creates a risk of algorithmic discrimination. Deployers must also describe the steps they take to mitigate those discrimination risks–though the bill does not require them to actually implement those steps before using the AI system.

Beyond that, the impact assessment is really a recordkeeping obligation: the impact assessment must include “overviews,” “statements,” or “descriptions” of the AI decision system’s purpose, intended uses, data used and produced, performance, and post-deployment monitoring. Contrary to public interest advocates’ recommendation, the impact assessment need not be conducted by an independent third party. While this reduces the burden on businesses, it also raises the risk of impact assessments that are hampered by conflicts of interest.

Public interest advocates have called for AI systems to have stronger auditing requirements than those outlined in SB 205, and even industry-driven proposals such as the Better Business Bureau's Principles for Trustworthy AI and the Future of Privacy Forum's Best Practices for AI-driven hiring technologies have more detailed auditing or assessment requirements than SB 205. SB 205's impact assessment provision is better described as a requirement that companies do basic due diligence and retain documentation provided by vendors, rather than a true impact assessment. Nevertheless, that due diligence and recordkeeping is valuable, as it will ensure companies that deploy AI systems better understand the risks and benefits of the systems they use.

(SB 205 also requires developers and deployers to take “reasonable care” to prevent algorithmic discrimination. As I will explain in a future blog post, however, this does not create significant additional rights or obligations because companies already have an absolute duty under civil rights laws to avoid making discriminatory decisions in nearly all the contexts that the bill covers.)

2. What SB 205 Doesn’t Do

In the days before SB 205 was signed into law, industry groups sent letters and issued statements mischaracterizing key elements of the law. Governor Polis’s own signing statement echoed some of this messaging, including points that suggest a profound misunderstanding of the basic tenets of civil rights laws. These arguments—which seem to be a prelude to efforts to weaken a bill that, in fact, needs strengthening—all ring hollow.

SB 205 doesn’t change civil rights laws

Governor Polis’s signing statement for SB 205 included a troubling mischaracterization of how the bill interacts with existing civil rights laws. The signing statement said:

Laws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct. Notably, this bill deviates from that practice by regulating the results of AI system use, regardless of intent, and I encourage the legislature to reexamine this concept as the law is finalized before it takes effect in 2026.

It’s crucial to know that laws seeking to prevent discrimination don’t generally focus on intentional conduct. The Supreme Court long ago rejected this argument for Title VII (the most influential federal employment discrimination law), holding in 1971 that an employer can be liable for discrimination if it assesses workers using criteria that have a discriminatory impact on members of a protected class. Congress codified this disparate impact theory into the text of Title VII in 1991. Colorado’s antidiscrimination laws also recognize disparate impact discrimination in employment, housing, and other decision settings that SB 205 would cover.
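
To make the disparate impact concept more concrete, below is a minimal, hypothetical Python sketch of one common heuristic used in U.S. employment testing guidance to flag possible adverse impact: comparing selection rates across groups (the "four-fifths rule"). The figures are invented for illustration and do not describe any real AI decision system.

# Hypothetical illustration of a selection-rate (adverse impact ratio) check.
# All figures are invented; this is not an analysis of any real system.
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

# Invented outcomes of an automated screening tool for two applicant groups.
group_a = {"applicants": 200, "selected": 60}   # reference group
group_b = {"applicants": 150, "selected": 27}   # comparison group

rate_a = selection_rate(group_a["selected"], group_a["applicants"])  # 0.30
rate_b = selection_rate(group_b["selected"], group_b["applicants"])  # 0.18
impact_ratio = rate_b / rate_a                                       # 0.60

print(f"Selection rate, group A: {rate_a:.0%}")
print(f"Selection rate, group B: {rate_b:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: a flag for further disparate impact review.")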

The signing statement's mischaracterization of antidiscrimination laws is not only badly wrong but dangerous. A rule requiring a showing of intent to discriminate arguably would make AI-driven discrimination impossible to prove. After all, can an AI system even have intent? Would courts look to the intent of the deployer or the tool creator, and what evidence could ever support a successful case? The main concern about algorithmic discrimination is not that developers and deployers will nefariously engage in a conscious effort to screen out consumers or workers from vulnerable groups. Rather, it is that they will recklessly market or use AI decision systems that are deeply biased or error-prone due to flaws in their design, training, testing, or implementation.

Recognizing this, SB 205's definition of algorithmic discrimination does not impose a new, higher standard on companies that make or use AI decision systems: it reflects a central tenet of our civil rights laws, namely that courts should also look to the disparate impact of a practice or system in determining whether it violates the law. Revising SB 205 to add a requirement of discriminatory intent would overturn decades of civil rights precedent and weaken longstanding definitions of discrimination in the context of AI-driven decisions. That certainly would not benefit workers or consumers.

SB 205 does not impose complex or unusual obligations and will not burden small businesses

Business and tech industry lobbying groups sent letters to Governor Polis urging him to veto SB 205, mainly claiming that it would impose undue burdens, especially on small businesses. These arguments are flawed.

First, and as described above, the obligations that SB 205 imposes are not complex. The law requires large developers and deployers to disclose basic information about their AI decision systems that is already in their possession and to perform due diligence that companies seeking to comply with civil rights laws should already be doing as a best practice.

Second, opponents falsely state that the bill would require “online platforms” to “disclose data used to train their AI systems and services on their website.” In fact, the bill merely requires companies that use AI to make life-altering decisions to post a simple “statement summarizing … the nature, source, and extent of the information” that the company collects and uses in making those decisions. The bill’s other transparency requirements (summarized in the first section of this Insight) similarly require companies to provide basic information already in their possession.

Third, the bill contains a broad exemption allowing companies to withhold any information that they consider a trade secret. Consumer advocates advised that this exemption is unnecessary; the bill does not call for companies to reveal source code, training data, or any other “secret sauce” that could plausibly be considered a trade secret. Regardless, its inclusion underscores the hollowness of objections to the bill’s transparency requirements.

Finally, the bill exempts small businesses (defined as companies with fewer than 50 full-time employees) from most of the modest obligations it places on AI deployers. That exemption is not "narrow": nearly half of private-sector employees work for small businesses, so exempting those businesses from many of the bill's already-modest requirements is a broad carve-out, especially since Colorado's civil rights laws cover all companies, regardless of how many employees they have.

3. What Happens Next

SB 205 takes effect in February 2026. Word has it that a task force will examine potential changes to the bill before then, and policymakers elsewhere will almost certainly look to SB 205 as a model for legislation in their own states. Those looking to amend or adapt SB 205 should look past the overstated claims about the bill and prioritize the needs of the voters they were elected to protect. In part because of the short time frame in which SB 205 was negotiated, labor and consumer voices were largely absent during its development. Going forward, policymakers must hold firm on SB 205's foundational protections and obligations, and the additional protections sought by consumers, workers, and the public interest groups that represent them should take center stage.

In a statement released Saturday, Consumer Reports lays out some of the improvements that need to be made:

There are several loopholes that ought to be closed and provisions that must be updated over the course of Colorado’s next legislative session. For example, the bill exempts AI technology that performs “narrow procedural task[s]” from its definition of high-risk AI. This term is undefined, and companies may argue that all manner of high-stakes decisions – screening out resumes, scoring college applicants – are “narrow procedural tasks.” The bill’s trade secret protections are overbroad. Companies should not be able to unilaterally withhold crucial information or hide evidence of discrimination by claiming that such information is a trade secret. The enforcement provisions must be strengthened.

I’ll add a couple more:

  • Pre-decision notice provisions should be expanded so that disabled workers and consumers who face potential accessibility barriers receive more detailed disclosures before a decision is made, giving them an opportunity to request an accommodation or be assessed through an alternative process.
  • Right now, SB 205 would allow companies to avoid compliance with the law if a federal standard includes “substantially similar” requirements. This is a vague standard that companies could exploit to evade the law. Companies should only be exempted from following Colorado’s law if there’s a federal law that preempts SB 205; they should not be able to simply pick and choose which laws and standards they comply with.

Colorado policymakers should work with public interest advocates to address these issues and ensure that the bill’s impact lives up to its groundbreaking potential.

CDT and Consumer Reports Speak out for Colorado's AI Bias Bill https://cdt.org/insights/cdt-and-consumer-reports-speak-out-for-colorados-ai-bias-bill/ Fri, 17 May 2024 21:07:24 +0000

Today the Center for Democracy & Technology and Consumer Reports published a statement welcoming the passage of Colorado’s Senate Bill 205, a bill that would establish basic safeguards for the use of AI in high-stakes decisions affecting consumers and workers, such as decisions about access to housing, lending, and employment. 

SB 205 would require companies to assess high-risk AI tools for the risk of discrimination, and would give consumers and workers the right to know when and how AI is being used to make consequential decisions about them. 

“This bill lays an important foundation for Colorado to build on. Right now, consumers have no idea when potentially biased or error-prone AI software is used in decisions about whether they get insurance, medical treatment, or screened out of a job. This legislation would shine some much-needed sunlight on high-risk artificial intelligence. There’s still work to do to ensure strong enforcement and to close some remaining loopholes. We look forward to Governor Polis signing SB 205, and working with the Governor and Colorado legislators in the future,” said Grace Gedye, policy analyst with Consumer Reports.

“Workers’ rights and consumer advocates have long called for legislation that would bring transparency and accountability to the shadowy world of AI-driven decisions,” said Matt Scherer, Senior Policy Counsel at the Center for Democracy & Technology. “Majority Leader Rodriguez and Representatives Rutinel and Titone took stakeholder input seriously before bringing SB 205 to the floor, and the bill on Governor Polis’s desk reflects their thoughtfulness and diligence. We look forward to the Governor signing this much-needed baseline legislation and working with civil society and other stakeholders to make sure that SB 205’s impact matches its spirit.”

Consumer Reports recently published an AI policy guide that outlines its key positions and recommendations for policymakers.

Press Release: Over Two Dozen Labor Unions, Civil Rights Groups, and Public Interest Advocates Endorse New York's BOT Act https://cdt.org/insights/press-release-over-two-dozen-labor-unions-civil-rights-groups-and-public-interest-advocates-endorse-new-yorks-bot-act/ Thu, 16 May 2024 21:38:00 +0000

(ALBANY, NY) – Today, a coalition of more than two dozen organizations announced support for New York Senate Bill 7623/Assembly Bill A.9315, the Bossware and Oppressive Technologies (BOT) Act. This bill, sponsored by Senator Brad Hoylman-Sigal and Assembly Member George Alvarez, would provide crucial protections to workers in the face of the threats posed by electronic surveillance and automated management (or bossware) systems and automated employment decision tools (AEDTs). 

Bossware systems have been shown to threaten workers' health and safety in addition to their privacy. AEDTs have repeatedly been shown to exhibit persistent bias, and often not to work at all. Moreover, workers are often unaware when these technologies are being used and rarely have the opportunity to challenge unfair or discriminatory decisions that are made using them. The BOT Act would help level the playing field by increasing transparency and protecting workers from exploitative and harmful uses of these technologies.

Politico – Are These States About to Make a Big Mistake on AI? https://cdt.org/insights/politico-are-these-states-about-to-make-a-big-mistake-on-ai/ Tue, 30 Apr 2024 14:37:00 +0000

This op-ed – authored by CDT’s Matt Scherer and Grace Gedye, policy analyst at Consumer Reports – first appeared in Politico on April 30, 2024. A portion of the text has been pasted below.

At first glance, these bills seem to lay out a solid foundation of transparency requirements and bias testing for AI-driven decision systems. Unfortunately, all of the bills contain loopholes that would make it too easy for companies to avoid accountability.

For example, many of the bills would cover only AI systems that are “specifically developed” to be a “controlling” or “substantial” factor in a high-stakes decision. Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision.

Sound policy would also address the fact that we often have no idea if a company is using AI to make key decisions about our lives, much less what personal information and other factors the program considers.

Solid regulation would require businesses to clearly and directly tell you what decision an AI program is being used to make, and what information it will employ to do it. It would also require companies to provide an explanation if their AI system decides you aren’t a good fit for a job, a college, a home loan or other important benefits. But under most of these bills, the most a company would have to do is post a vague notice in a hidden corner of their website.

Read the full op-ed in Politico.

CDT Joins UC Berkeley Labor Center and Others in Letter to CPPA Urging Strong Workers' Rights and Transparency in Upcoming Rules on Automated Decisionmaking Technologies https://cdt.org/insights/cdt-joins-uc-berkeley-labor-center-and-others-in-letter-to-cppa-urging-strong-workers-rights-and-transparency-in-upcoming-rules-on-automated-decisionmaking-technologies/ Mon, 26 Feb 2024 21:31:13 +0000

The Center for Democracy & Technology (CDT) joined 34 labor, civil rights, and civil society organizations in comments to the California Privacy Protection Agency (CPPA) regarding its forthcoming rules on Automated Decisionmaking Technologies and Risk Assessments.

The letter urges the CPPA to ensure its rules:

  • Protect workers’ rights and dignity,
  • Provide transparency regarding employers’ use of automated decisionmaking, and
  • Give workers the same agency as consumers with respect to their data.

Read the full comments.
