Elections & Democracy Archives - Center for Democracy and Technology https://cdt.org/area-of-focus/elections-democracy/ Fri, 18 Apr 2025 18:27:35 +0000

Adaptation and Innovation: The Civic Space Response to AI-Infused Elections https://cdt.org/insights/adaptation-and-innovation-the-civic-space-response-to-ai-infused-elections/ Thu, 13 Mar 2025 04:01:00 +0000 https://cdt.org/?post_type=insight&p=107864 With case studies also by: Laura Zommer and Kian Vesteinsson Introduction AI avatars delivered independent news about Venezuela’s contested election, allowing journalists to protect their identity and avoid politically motivated arrest. Voters in the United Kingdom could cast their ballots for an AI avatar to hold a seat in Parliament. A deepfake video showed United States […]

The post Adaptation and Innovation: The Civic Space Response to AI-Infused Elections appeared first on Center for Democracy and Technology.

With case studies also by: Laura Zommer and Kian Vesteinsson

Graphic for CDT report, entitled “Adaptation and Innovation: The Civic Space Response to AI-Infused Elections.” Illustration of a transparent ballot box, surrounded by a swirling “information environment” of papers, social media posts, and ballots, hovering above earth. Ominous digital thunderstorms (clouds and lightning) float around the ballot box.

Introduction

AI avatars delivered independent news about Venezuela’s contested election, allowing journalists to protect their identity and avoid politically motivated arrest. Voters in the United Kingdom could cast their ballots for an AI avatar to hold a seat in Parliament. A deepfake video showed United States President Joe Biden threatening to impose sanctions on South Africa if the incumbent African National Congress won.

These are a few of the hundreds of ways generative AI was used during elections in 2024, a year that was touted as “the year of elections” and described as the moment in which newly widespread AI tools could do lasting damage to human rights and democracy worldwide. Though technology and security experts have described deepfakes as a threat to elections since at least the mid to late 2010s, the concentrated attention in 2024 was a reaction to the AI boom of the preceding year. At the beginning of 2023, OpenAI’s ChatGPT set a record as the “fastest-growing consumer application in history,” and in September of that year, a leading parliamentary candidate in Slovakia lost after a fake audio recording smearing him was released two days before the election, prompting speculation that the deepfake had changed the election results.

Though 2024 ended with debates over the extent to which the risks AI posed to elections were overstated, in one way the consequences were clear: the technology changed the way stakeholders around the world did their work. Governments from Brazil to the Philippines passed new laws and regulations to govern the use of generative AI in elections. The European Commission published guidelines for how large companies should protect the information environment ahead of the June 2024 elections, including by labeling AI-generated content. US election administrators adopted new communication tactics that were tailored to an AI-infused information environment.

Political campaigns and candidates adopted AI tools to create advertisements and help with voter outreach. Candidates in Indonesia paid for a service that used ChatGPT to write speeches and develop campaign strategies. In India, candidates used deepfake audio and video of themselves to enable more personalized outreach to voters. Germany’s far-right AfD party ran anti-immigrant ads on Meta platforms, some of which incorporated AI-altered images.

Social media platforms and AI developers implemented some election integrity programs, despite recent cuts to trust and safety teams. Twenty-seven technology companies signed the AI Elections Accord, a one-year commitment to addressing “deceptive AI election content” through improved detection, provenance, and other efforts. Google restricted the Gemini chatbot’s responses to election-related queries, and OpenAI announced that ChatGPT would redirect users to external sources when users asked about voting ahead of certain elections. Google and Jigsaw worked with media, civil society, and government partners on public media literacy ahead of the European Union elections, including about generative AI. 

In anticipation of AI tools accelerating or increasing threats to the information environment, civic space actors changed their work, too. This report looks at their contributions to a resilient information environment during the 2024 electoral periods through three case studies: (I) fact-checking collectives in Mexico, (II) decentralization and coordination among civil society in Taiwan, and (III) AI incident tracking projects by media, academics, and civil society organizations. 

The case studies highlight a range of approaches to building resilient information environments. They show the ways artificial intelligence complicates that work, as well as how it can be used to support resilience-building efforts. The mix of approaches — from fact-checking bots on WhatsApp to cataloging hundreds of deepfakes — taps into information resilience from different angles.

The Mexico case study focuses on the development and tactics of fact-checking collectives, especially in the context of a hostile media environment. The case study also considers the role of AI-generated content in the 2024 election and how WhatsApp and AI are used in fact-checking work.

The Taiwan case study also examines a collaborative but decentralized civil society model. Unlike Mexico, however, Taiwan is targeted by a prolific stream of Chinese government-linked disinformation campaigns. The case study considers the roles of research into influence operations, fact-checking and information literacy programming, and government policies to counter misinformation. 

The third case study looks at how civil society organizations, journalists, and academics tracked the use of generative AI in elections throughout 2024, both in the US and globally. Their work was an important contribution to the current public understanding of how AI was used and offers lessons for improving research and policy in the future, including challenges in data collection and how to conduct well-balanced research on such a high-profile subject. The interviews CDT conducted for this case study also give a snapshot of expert thinking on the impact that generative AI had on elections in 2024.

Though the case studies span different political contexts and types of interventions, common themes emerged. Organizations benefited from complementary or collaborative work with peer groups. They also used AI to bolster their own work. Civic space actors contended with funding and capacity constraints, insufficient access to information from companies, difficulty detecting and verifying AI-generated content, and the politicization of media resilience work, including fact-checking.

Finally, the case studies emphasize that the issue of AI in elections is not temporary. Civic space actors have been addressing the risks and exploring the opportunities AI presents for years — long before the media and policy attention of 2024. These groups will continue to be invaluable resources and partners for public and private actors in 2025 and beyond. 

And their work will be urgently needed. The end of companies’ commitments under the AI Elections Accord and a global political environment that is increasingly hostile to work relating to elections, fact-checking, and disinformation research mark an absence of leadership on the most pressing threats to information resilience. To that end, the report concludes with recommendations for how companies and civic space actors can continue to support information resilience by fostering collaboration, developing company policies, and strengthening transparency and data access.

Read the full report.

With Outcome of CISA Election Security Review Looming, Agency Must Protect Critical Infrastructure  https://cdt.org/insights/with-outcome-of-cisa-election-security-review-looming-agency-must-protect-critical-infrastructure/ Wed, 05 Mar 2025 21:08:23 +0000 https://cdt.org/?post_type=insight&p=107688 On Friday, February 14th, acting Executive Director of the Cybersecurity and Infrastructure Security Agency (CISA) Bridget Bean issued a memo to agency staff announcing that all election security work would be paused pending an internal review in order to refocus on the agency’s core mission. The memo also stated that funding would be cut for […]

The post With Outcome of CISA Election Security Review Looming, Agency Must Protect Critical Infrastructure  appeared first on Center for Democracy and Technology.

On Friday, February 14th, acting Executive Director of the Cybersecurity and Infrastructure Security Agency (CISA) Bridget Bean issued a memo to agency staff announcing that all election security work would be paused pending an internal review, in order to refocus on the agency’s core mission. The memo also stated that funding would be cut for the Election Infrastructure Information Sharing and Analysis Center (EI-ISAC), a DHS-funded organization that provides crucial cybersecurity assistance to state and local election officials to harden the nation’s election systems against cyberattacks. 

Tomorrow, March 6th, marks CISA’s self-imposed deadline to conclude its review and send its findings to the White House. It remains unclear whether those findings will be made public, or whether they will provide any measure of transparency regarding the programs that will — and will not — continue. 

If CISA is serious about focusing on its core mission, the agency must continue its cybersecurity, physical security, and foreign threat information sharing work. Failure to do so would undermine U.S. national security, jeopardize the safety of election officials, and further diminish U.S. standing on the global stage.  

Cybersecurity 

As the bipartisan leadership of the National Association of Secretaries of State (NASS) recently explained, CISA provides “valuable” services that “many state and local election officials regularly utilize” to defend against cybersecurity threat actors, including nation-states and cybercriminal organizations. 

Protecting the cybersecurity of state and local elections infrastructure is vital to the United States’ national interest and security. DHS has designated election infrastructure, including polling places, voter registration databases, and voting machines, as a critical infrastructure subsector since 2017. U.S. election infrastructure is a prize target of foreign governments, whose attacks have escalated in scale, complexity, and brazenness. During the 2024 election, foreign adversaries targeted state and local elections offices using a variety of techniques, including probes of network defenses, distributed denial of service (DDoS) attacks, and ransomware operations. These attacks seek to polarize the electorate, denigrate the integrity of our elections, and incite political violence, including violence aimed specifically at election officials, who have experienced escalating death threats and intimidation. 

Federal efforts have been crucial in identifying, analyzing, and responding to foreign cyberattacks. CISA, for instance, alerted local election officials in Coffee County, Georgia that the county government network had been targeted by Iranian actors. Coffee County election officials acted swiftly to disconnect from the state voter registration system, preventing the attack from accessing data. CISA — and the EI-ISAC that it funds — offer a large range of free services that help counties like Coffee County, GA defend against cyber intrusion. These include support from cyber experts at the agency in conducting vulnerability scans and penetration testing, coordination on incident response, access to declassified intelligence reports, and a vast information sharing network. 

Since its inception, the EI-ISAC has grown to include over 3,700 state and local election offices, and has distributed sophisticated sensors that monitor for system intrusions to more than 1,000 election offices around the country. CISA also administers access to the .gov top-level domain (TLD) and “has made it available at no cost to election offices and other qualifying government organizations.” The .gov TLD is a crucial trust indicator that helps voters identify their elections website as an authentic government website and obtain accurate information about the time, place, and manner of voting. Authorities have identified dozens of fake election websites set up by foreign adversaries to mislead voters and prevent them from voting. 

Cybersecurity services from CISA and the EI-ISAC are irreplaceable. As Republican Secretary of the Commonwealth of Pennsylvania Al Schmidt said, CISA has “a national and global perspective when it comes to cyber security risks and all the rest that each individual state can’t do on its own.” For many underserved counties around the U.S., the cybersecurity assistance that CISA provides is often the only source of network hardening assistance available — not only for elections administrators, but for all county government offices on the network. For instance, in Washington state, 15 county governments receive “Endpoint Security and Malicious Domain Blocking and Reporting” tools from CISA that secure defenses across their county government networks. Removing free access to these and other cyber defenses would make local governments susceptible to ransomware and other attacks that could impact not only elections but emergency services, schools, and more. 

Physical Security

CISA not only protects the cybersecurity of elections offices, but their physical security as well — an essential resource, as almost 40% of election officials have reported receiving threats or intimidation, while more than half fear for their safety. CISA provides resources like physical security assessments of election facilities and coordinates federal efforts to detect, analyze, and respond to physical threats to election infrastructure as they emerge. In 2024, CISA and the EI-ISAC’s information sharing and incident response teams warned election officials about white powder envelopes sent to election offices in at least 15 states, and worked with USPS and the FBI to remove some of those envelopes from the mail stream. They shared intelligence that ballot boxes would be targeted for attack and provided guidance on securing and monitoring them; assisted with the response to fires set in ballot dropboxes; and alerted officials ahead of Election Day to plans for widespread bomb threats from foreign adversaries seeking to upend voting operations. As a result, and despite over 100 bomb threats around the country by Russian-linked actors, voting operations were minimally impacted. 

CISA’s Mandate and Capacity

Protecting the cyber and physical security of elections infrastructure aligns with the vision to “deliver a more focused provision of services for elections security activities” as laid out in Executive Director Bean’s February 14th memo. CISA’s enabling legislation directs the agency to “coordinate a national effort to secure and protect against critical infrastructure risks” and to “provide analyses, expertise, and other technical assistance to critical infrastructure owners and operators.” Because election infrastructure is one form of critical infrastructure, providing cybersecurity and physical security assistance, in addition to coordinating threat information sharing with state and local election officials, falls squarely in this mandate. 

Continuing this work will require staffing the agency with cybersecurity and physical security advisors (CSAs and PSAs), as well as the ten regional election security advisors who were reportedly fired from the agency. These staff acted as direct liaisons to election officials to conduct testing, coordinate response, and more. With over 10,000 election jurisdictions around the country, a depleted federal cybersecurity workforce will be overwhelmed with requests. This is particularly true of the physical security assistance CISA offers. According to DHS’ Office of the Inspector General, “[e]ven though CISA had almost 140 PSAs in the field in 2024, the demand for services occasionally outpaced staff capacity. In one region, the high demand caused delays delivering CISA’s assessments and other services.”  

CISA’s Decision Should Be Transparent

While the agency’s deadline to conclude its current election security review is tomorrow, it remains unclear if the outcome of that review will be made available to the public. 

If the agency permanently reduces or ends vital election security work, it should — at the very least — publicly disclose the details of this decision. This should include a clear explanation of the agency’s rationale, transparency about the scope of its personnel and programmatic cuts, and its expectations as to how state and local election officials will fill the resulting security gaps. Election officials are scrambling to understand any changes to the help they can expect from the federal government. They need this information as soon as possible, as many states have local and special elections upcoming — including in Florida, where special elections will fill two vacated U.S. House seats in just four weeks. According to Marion County Supervisor of Elections Wesley Wilcox, who will administer one of those elections, there is “nothing else like” the EI-ISAC’s situation room, which allows election officials to report cyberattacks so others can block them in real time. “When we do this special election here in four weeks, there’s a very real chance that there won’t be a situation room.” Without transparency about the scope of CISA’s decisions, election officials won’t even know what options are available to them.  

As the bipartisan leadership of NASS has said, “CISA’s prioritized services help election entities defend against these national security threats.” Cutting support for the EI-ISAC and eliminating CISA’s work to protect the cyber and physical security of election infrastructure would weaken America’s election defenses and make it easier for America’s enemies to cripple critical infrastructure, obstruct voting, mobilize violence, and undermine America’s influence on the global stage. CISA’s leadership should make clear that such work remains core to CISA’s mission and will resume upon completion of the ongoing review. 

Center for Democracy & Technology’s Submission to the United Nations Special Rapporteur for Freedom of Opinion and Expression https://cdt.org/insights/center-for-democracy-technologys-submission-to-the-united-nations-special-rapporteur-for-freedom-of-opinion-and-expression/ Wed, 15 Jan 2025 15:41:23 +0000 https://cdt.org/?post_type=insight&p=107051 The Center for Democracy & Technology (CDT) welcomes the opportunity to contribute to the upcoming thematic report on freedom of expression and elections in the digital age by the United Nations special rapporteur for freedom of opinion and expression. Protecting freedom of expression is critical to ensuring access to the ballot. New technologies can exacerbate […]

The post Center for Democracy & Technology’s Submission to the United Nations Special Rapporteur for Freedom of Opinion and Expression appeared first on Center for Democracy and Technology.

The Center for Democracy & Technology (CDT) welcomes the opportunity to contribute to the upcoming thematic report on freedom of expression and elections in the digital age by the United Nations special rapporteur for freedom of opinion and expression.

Protecting freedom of expression is critical to ensuring access to the ballot. New technologies can exacerbate risks to free expression, but overbroad regulatory and corporate interventions may do the same, particularly without adequate transparency, robust testing, and independent data access to promote oversight and accountability. 

CDT addresses the following topics in its submission:

  • Specific risks posed by government efforts to regulate AI-generated content and the need for further research into labeling;
  • Companies’ election integrity measures, including cases where insufficient or inconsistent enforcement of policies and practices poses barriers to individuals’ right to the ballot and associated expression, particularly for those belonging to historically marginalized groups; 
  • The ways in which limited transparency and independent access to data inhibit research and policymaking, including as research about the information space is increasingly politicized; and
  • Recommendations for how companies and other stakeholders can better support election integrity while respecting freedom of opinion and expression.

Read the full submission here.

Misinformation Doesn’t Stop When Polls Close: 3 Things to Watch after Election Day https://cdt.org/insights/misinformation-doesnt-stop-when-polls-close-3-things-to-watch-after-election-day/ Tue, 05 Nov 2024 17:50:57 +0000 https://cdt.org/?post_type=insight&p=106162 American elections are the safest and most secure they have ever been, but misinformation seeking to undermine the legitimacy of our democracy is being pushed into mainstream discourse on social media. In the days leading up to the election, bad actors are boosting long-standing narratives with a rising wave of new anecdotal claims of fraud […]

The post Misinformation Doesn’t Stop When Polls Close: 3 Things to Watch after Election Day appeared first on Center for Democracy and Technology.

American elections are the safest and most secure they have ever been, but misinformation seeking to undermine the legitimacy of our democracy is being pushed into mainstream discourse on social media. In the days leading up to the election, bad actors are boosting long-standing narratives with a rising wave of new anecdotal claims of fraud from unverified and often malicious sources. Four years ago, similar claims, coupled with Donald Trump’s refusal to accept the results of the 2020 election, inspired the violent insurrection at the U.S. Capitol on January 6, 2021.

Against this backdrop, the Associated Press recently found that 4 in 10 Americans believe there will be political violence in the post-election period. Indeed, the post-election period poses a heightened risk of political violence. The Department of Homeland Security identified “perceptions of voter fraud” as the top potential trigger for political violence during this election, and the Intelligence Community has warned that foreign efforts to undermine trust in the election and incite violence will continue after polls close. 

Those risks may be particularly heightened around some key dates between Election Day on November 5th and Inauguration Day on January 20th next year. As we enter the post-election period, here are three major risks to keep in mind.

1. The post-election period is at higher risk of political violence fueled by mis- and disinformation than the pre-Election Day period because of the potential for disputed presidential and down-ballot election results. Election Day and the days that immediately follow, as well as a few key dates in December, are at the greatest risk for online and offline disruptions. The most sensitive dates are:

  • November 5 and the following few weeks, when ballots are counted and changing vote tallies may appear to show different candidates in the lead. Many factors dictate how long it will take to have final results, including whether there are close margins and the number of mail-in ballots. Disputes over results will likely arise during this time. 
  • December 11, which is the deadline for appointing state electors of the Electoral College. The deadline is a requirement in this election, following the bipartisan passage of the Electoral Count Reform Act in 2022.
  • December 17, when electors vote in each state. 
  • January 6, when Congress counts the electoral votes.
  • January 20, when the new President is inaugurated.
Timeline of key events leading up to the election from September to January. Source: Timeline made by authors.

2. Expect mis- and disinformation to spike around these key dates. We can anticipate the typical shape of the harmful mis- and disinformation narratives that will likely be amplified during this period. Harmful narratives often fall into two buckets: election denial and candidate-specific. 

Election denial narratives relate to election results and election administration issues and can include: 

  • Claims that shifting vote tallies imply election fraud. Reminding people not to expect final election results on November 5 – and that vote totals, including who is leading, will change in the following days – is an important way to protect against the impact of voter fraud narratives. As election expert Rick Hasen pointed out last week, however, there has been little coverage this year of the potential “blue shift,” the tendency for election results to change after Election Day as more ballots are counted. 
  • Narratives that the election is rigged, including via voter fraud, election official misconduct, and other election administration barriers. False claims may resurface after Election Day to bolster narratives that the election was somehow unfair or illegitimate, especially in swing states or wherever there are tight races. The reality is that voting is secure and fraud is rare. Nevertheless, allegations that voting machines are changing votes – a claim that has circulated for many years – have already emerged, as have claims (originating from a Russian influence operation) that election officials are ripping up ballots. Other examples of old narratives that could be recycled after Election Day are unfounded complaints that secure elections do not have sufficient safeguards, as was claimed by proponents of the court-rejected Georgia hand-count rule; conspiracies about natural disasters (for instance, that the hurricane relief responses in North Carolina and Georgia were designed to tilt the election); claims that the election was stolen through legal channels (for instance by illegally suppressing votes during legally mandated voter roll maintenance); and allegations of noncitizen voting, including claims related to administrative errors in Arizona. New claims about problems with election administration, such as whether certain ballots should or should not have been counted, would likely follow similar patterns.
  • False and misleading narratives about the transition of power and the Electoral College. Claims that presidential elections in the U.S. are rigged by the Electoral College are commonplace. In part, this is because in several recent elections, the winner of the Electoral College did not win the popular vote. Narratives that the election could be overturned by faithless electors or that states may intentionally fail to meet statutory deadlines in order to submit a different slate of electors might also emerge. These coincide with claims that the deadlines for the transition of power could be intentionally delayed or altered, especially if the winner is of a different party than the incumbent. In addition to possible attempts to interfere with the role of state electors or election certification, mis- or disinformation about the process could further contribute to belief in electoral fraud and raise the risk of political violence.

Candidate-specific narratives, meanwhile, support or detract from a candidate:

  • Claims about plots against a candidate or narratives that otherwise paint them as a victim. So far this election cycle, popular narratives of this type include false claims about the “deep state” planning the Donald Trump assassination attempts and theories about an alleged third attempt in October. Trump has also claimed that investigations by the Department of Justice, FBI, and state attorneys general constitute a “witch hunt” despite no evidence of political motivation. During the post-election period, these sorts of narratives increase the chances that a candidate’s supporters will violently defend them in the name of fighting an alleged injustice or attack, as happened on January 6; such narratives are already being used as a mobilization tool among far-right groups.
  • Shocking allegations against a candidate. There have been numerous mis- and disinformation narratives in this category during the period ahead of Election Day. In one recent example, Tim Walz was falsely accused of sexual misconduct. The story and the video from the purported accuser were found to be part of a Russian influence operation, and the video was artificially generated. While allegations during the voting period might be intended to influence voters’ choices or turnout, claims of criminality or of acting against the interests of the American people could be used during the post-election period to argue that a candidate is legally unqualified to assume office. Such personal accusations create fodder for individuals, as well as organized groups and militias, that fashion themselves as “patriotic” for purportedly defending laws that the system fails to enforce.  

3. Offline disruptions and online mis- and disinformation feed into each other. While it is difficult to prove the impact of influence operations on electoral outcomes, the overall mis- and disinformation environment has clear consequences for the election environment. For example:

  • The information environment may fuel election violence. The Department of Homeland Security identifies “perceptions of voter fraud” as the main risk for mobilizing election violence, and warned about potential ballot box arsons – which have since occurred in the lead-up to November 5. As we saw in the lead-up to January 6, increased mis- and disinformation, especially about voter fraud, will raise the offline threat environment. Threats of violence against election workers and polling places can also cause fear among voters and disrupt normal election procedures. 
  • Misinformation can interfere with election officials’ ability to do their job. Earlier this month, the Oregon Secretary of State’s Elections Division temporarily stopped answering its phone line because it was inundated with out-of-state calls, including threats, from people who saw false claims on social media. These are not the first interference attempts that we’ve seen this year. Election officials have received violent threats, ballot drop boxes have been set aflame in three states, and white powder envelopes have been sent to election offices in at least 16 states. Bad actors may seek to use claims of election fraud to mobilize support for attacks against election offices during the vote counting and certification process. As a result, election administrators have been preparing contingency plans should violent protests emerge this year, from setting up fencing perimeters around counting facilities to stocking Narcan for election workers in case fentanyl is discovered in mail ballots. 

EU Tech Policy Brief: October 2024 https://cdt.org/insights/eu-tech-policy-brief-october-2024/ Mon, 04 Nov 2024 20:36:31 +0000 https://cdt.org/?post_type=insight&p=106148 Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief. This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact […]

Welcome back to the Centre for Democracy & Technology Europe‘s Tech Policy Brief. This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT’s perspective on the impact to digital rights. To sign up for CDT Europe’s AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels: Silvia Lorenzo Perez, Laura Lazaro Cabrera, Aimée Duprat-Macabies, David Klotsonis, and Giulia Papapietro.

👁 Security, Surveillance & Human Rights

CDT Europe Leads Coalition to Combat Spyware Abuse Across the EU 

On 1 October 2024, during the Tech and Society Summit (TSS), CDT Europe officially launched a Spyware Coordination Group composed of 16 leading civil society and journalist organisations from all over the EU focused on safeguarding democracy, transparency, and accountability in relation to spyware technologies. This initiative aims to combat the growing misuse of spyware technologies in the EU, and advocate for stronger regulations to protect fundamental rights and ensure respect for the rule of law. United in their commitment to protecting democratic institutions and civil society, members of the Coordination Group will work tirelessly to ensure that the new EU institutions take necessary measures to regulate and prevent abuse of spyware technologies in the EU.

Photograph of members from the Spyware Coordination Group at the Tech and Society Summit in Brussels.

Strengthening Global Efforts Against Commercial Spyware

The issue of spyware is not only being debated at the EU level: on 8 October 2024, the U.S. Department of State hosted its first commercial spyware-focused Human Rights Council side event. CDT Europe’s Security, Surveillance and Human Rights Program Director Silvia Lorenzo Perez spoke at the event, emphasising that modern spyware is not just a tool for law enforcement, but represents a fundamental shift that undermines our democratic values and violates the very principles upon which the European Union is built. She also commended the U.S. Government’s leadership in combating the abuse of commercial spyware through diplomatic efforts such as the U.S.-led Joint Statement, and encouraged the U.S. to intensify diplomacy towards the EU institutions to secure commitments from the European Commission, Parliament, and Council.

Push for Stronger Spyware Oversight in Slovakia and Greece

CDT Europe, alongside 11 organisational members of the Spyware Coordination Group, addressed the European Parliament with serious concerns about the procurement, use, and regulation of spyware technologies in Slovakia and Greece. In a joint letter, the coalition highlights the alarming developments in both countries, where spyware tools like Pegasus and Predator have been linked to violations of privacy and fundamental rights. The letter urges the European Parliament to take immediate action to ensure transparency, accountability, and adherence to rule of law principles, emphasising the need for robust legislative frameworks to protect privacy and freedom of expression.

Recommended read: Human Rights Watch, UK Court Accepts Case About Saudi Spyware Use

💬 Online Expression & Civic Space

CDT Europe at the Tech and Society Summit

At the Tech and Society Summit, CDT Europe’s Online Expression team played a key role in two critical discussions. First, our Secretary General Asha Allen participated in a roundtable, “Making EU laws work for people: best practices for engaging with civil society”, emphasising the vital role of civil society in identifying harms and proposing actionable policy solutions. This session created an invaluable space for exchanging lessons learned and best practices related to civil society participation in the policymaking process and the enforcement of EU laws.

In a separate high-level roundtable, CDT Europe joined discussions on crafting an effective, rights-respecting EU digital enforcement strategy. Here, participants reached a consensus on the need to address pervasive digital harms by adopting a holistic, society-centred approach, rather than relying solely on individual regulations.

Enhancing Transparency with the Digital Services Act for Stronger Platform Accountability

Our Research and Policy Officer David Klotsonis recently shared key insights with Open Government Partnership (OGP) members on the Digital Services Act (DSA) and its role in promoting accountability in the digital space. David emphasised that annual risk assessments required of Very Large Online Platforms and Search Engines are essential to proactively identifying potential harms, and central to fostering transparency and safeguarding user trust. He also pointed to the importance of Digital Services Coordinators, whose timely appointment and adequate resourcing are vital for meaningful oversight and compliance at the national level. This dialogue with OGP members reinforced the value of collaboration in driving effective, accountable digital governance. You can watch the recording of the webinar on OGP’s YouTube channel.

Workshop on Prosocial Tech Design Governance

On 8 October, the Council on Technology and Social Cohesion and Search for Common Ground hosted a workshop that gathered policymakers, academics, and civil society leaders to examine technology’s role in supporting social cohesion and human rights. Key takeaways included the need for algorithmic accountability, with the DSA serving as a framework to mitigate harmful, profit-driven designs that amplify divisive content, in particular by leveraging risk assessments under the DSA’s Article 34 to address the monetisation of such content. Participants also discussed child protection efforts and the data privacy concerns around age verification, as the EU looks to further bolster the online protection of minors in the coming mandate.

Recommended read: Daphne Keller published an opinion piece in Lawfare, The Rise of the Compliant Speech Platform.

⚖ Equity and Data

Feedback to French Authority on GDPR Guidance for AI

CDT provided feedback to the French Data Protection Authority (Commission nationale de l’informatique et des libertés, or “CNIL”) on recently released factsheets that are intended to guide application of the EU’s General Data Protection Regulation (GDPR) to AI systems and models. We reiterated the limits of relying on “legitimate interests” as a valid legal basis for using data to train AI systems, particularly when conducting web scraping to source that data. CDT similarly called for protection of data subject rights in the AI ecosystem, highlighting the current obstacles individuals face in accessing sufficient information about the processing of their data and enforcement of their rights.

General Purpose AI Models and the Code of Practice Process

As part of our ongoing involvement in the Code of Practice process for general-purpose AI (GPAI) models — set to guide providers’ compliance with the AI Act’s rules governing GPAI models — we published a brief outlining the precedent-setting potential of the Code of Practice process, as well as the importance of civil society engagement and fundamental rights advocacy in the process. Active civil society participation will be crucial to ensure a robust interpretation of the GPAI rules in the AI Act, and to promote high levels of transparency in GPAI models through risk mapping as well as the development of robust mitigations and safeguards.

Addressing AI Governance Challenges in Democratic Elections

Photograph of Asha Allen, Secretary-General of CDT Europe, speaking at POLITICO Live's "AI & Elections: Are Democracies Ready?" event.

On 14 October, our Secretary General Asha Allen spoke at POLITICO Live’s “AI & Elections: Are Democracies Ready?” event, where she shared insights on the state of AI governance and its implications for democratic processes. During the event, Asha and the other panellists discussed the relevance of AI in democratic processes, emphasising that more research is essential to fully understand how AI-generated content might impact the online space and individuals’ rights to participate in democratic debate without interference or discrimination. While the AI Act and DSA are a welcome step forward, the impact of these laws in mitigating the risks of AI-generated disinformation during elections is yet to be determined. Asha also highlighted the need for tech platforms to fulfil their due diligence obligations and to comply with the EU legislative framework. If you missed it, you can rewatch the panel on YouTube.

Recommended read: La Quadrature du Net, French Family Welfare Algorithm Challenged in Court by 15 Organisations.

📌 Hearings to confirm the incoming European Commissioners

From 4 November to 12 November, the European Parliament is holding hearings to confirm the incoming European Commissioners. CDT Europe is closely monitoring these proceedings and will publish analyses of the nominees’ responses regarding digital rights. As part of this process, nominees have submitted written responses outlining their visions, priorities, and approaches to the portfolios they are set to manage. These answers provide valuable insights into how the new Commission might address some of the most pressing issues facing the European Union today. While the written responses reflect promising commitments in some areas, there are still questions that the Parliament should raise during the hearings to ensure that the final agenda aligns with the EU’s values of privacy, democracy and fundamental rights. We have written an in-depth article outlining these questions and delving into the nominees’ commitments related to our three key programs: Security and Surveillance, Online Expression and Civic Space, and Equity and Data.

⏫ Upcoming Events

Democracy Alive Summit: On 6 November, the day after the U.S. elections, CDT Europe’s Laura Lazaro Cabrera will participate in the Democracy Alive Summit organised by the European Movement International (EMI). Laura will discuss the challenges posed by AI during elections, and what can be done to combat disinformation and manipulation. If you wish to attend, you can register by filling out this form.

Paris Peace Forum: On 12 November, CDT Europe’s Silvia Lorenzo Perez will attend spyware-focused sessions at this year’s Paris Peace Forum. These include two multistakeholder meetings: one on the Pall Mall Process, organised by the French and UK governments, and one organised by Access Now, the CyberPeace Institute, Freedom House, and the Paris Peace Forum.

Webinar on Trusted Flaggers in the DSA: On 21 November, CDT Europe is co-organising a webinar on Trusted Flaggers. By bringing together institutions, regulators, and civil society organisations, we aim to deepen participants’ understanding of the legal text, and to share insights on what the vetting process looks like in practice, what can realistically be expected, and what the potential benefits are for CSOs interested in applying. This is a closed-door event; however, if you believe that your participation would add valuable insight to the discussion, or you are interested in applying to be a Trusted Flagger, please feel free to reach out to eu@cdt.org.

Report – Hated More: Online Violence Targeting Women of Color Candidates in the 2024 US Election https://cdt.org/insights/report-hated-more-online-violence-targeting-women-of-color-candidates-in-the-2024-us-election/ Wed, 02 Oct 2024 04:01:00 +0000 https://cdt.org/?post_type=insight&p=105717 Report also authored by Müge Finkel, Director, Ford Institute for Human Security, GSPIA, University of Pittsburgh. With contributions bySteven Finkel, Daniel Wallace Professor of Political Science, University of PittsburghFirat Duruşan, PhD, Center for Computational Social Sciences, Koç UniversityErdem Yörük, Director, Center for Computational Social Sciences, Koç UniversityIsik Topçu, MA, Center for Computational Social Sciences, Koç […]

Graphic for a CDT and University of Pittsburgh report, entitled "Hated More: Online Violence Targeting Women of Color Candidates in the 2024 US Election." Black text on a light grey background.

Report also authored by Müge Finkel, Director, Ford Institute for Human Security, GSPIA, University of Pittsburgh.

With contributions by
Steven Finkel, Daniel Wallace Professor of Political Science, University of Pittsburgh
Firat Duruşan, PhD, Center for Computational Social Sciences, Koç University
Erdem Yörük, Director, Center for Computational Social Sciences, Koç University
Isik Topçu, MA, Center for Computational Social Sciences, Koç University
Melih Can Yardi, MA, Center for Computational Social Sciences, Koç University
Amanda Zaner, MID, GSPIA, University of Pittsburgh


Introduction

Women, and women of color in particular, face numerous challenges when running for political office in the U.S. These include attacks in the online spaces that, like their peers, they must use to campaign and promote their work. These attacks often aim to undermine and prevent women’s participation in politics. Previous research by CDT found that women of color Congressional candidates in the 2020 U.S. election were more likely than other candidates to be subjected to violent and sexist abuse and to mis- and disinformation on X/Twitter. These forms of abuse might contribute to the underrepresentation of women of color in politics, and may also undermine the effectiveness of the U.S. democratic system in reflecting the interests and priorities of all voters in policy-making.

In this research brief, we turn to the 2024 U.S. elections to examine the nature of offensive speech and hate speech that candidates running for Congress are subject to on the social media platform X (formerly Twitter), which remains an important forum for political candidates. More specifically, we compare the levels of offensive speech and hate speech that different groups of Congressional candidates are targeted with based on race and gender, with a particular emphasis on women of color. We also examine these factors for U.S. Vice President Kamala Harris as a woman of color and presidential candidate.

For purposes of this research, we identified all tweets posted between May 20 and August 23, 2024 that mentioned any candidate running for Congress with one or more accounts on X (a total of 1031 candidates). This resulted in a dataset of over 800,000 tweets. Additionally, we examined tweets that mentioned Harris during this period. Using several fine-tuned language models, we identified tweets that contained offensive speech or hate speech about the candidate (see the methods section in the report for more details). We define offensive speech as words or phrases that demean, threaten, insult, or ridicule a candidate. We define hate speech as a subset of offensive speech where specific reference is made to someone’s identity including race, gender, sexual orientation, or religion. Our findings show that women of color and African American women candidates in particular are subject to more offensive speech overall, and specifically to more hate speech, than other candidates.
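The two label definitions above are hierarchical — hate speech is a subset of offensive speech — and the cross-group comparison reduces to per-group rates over the labeled tweets. A minimal sketch of that aggregation step (the candidate groups, labels, and data below are hypothetical stand-ins; the report's actual classification is done by fine-tuned language models):

```python
from collections import Counter

# Hypothetical labeled tweets: (candidate_group, is_offensive, is_hate).
# Per the definitions above, a hate-speech tweet is always offensive too.
tweets = [
    ("women_of_color", True, True),
    ("women_of_color", True, False),
    ("other", False, False),
    ("other", True, False),
]

def group_rates(tweets):
    """Share of tweets per candidate group that are offensive / hate speech."""
    totals, offensive, hate = Counter(), Counter(), Counter()
    for group, is_off, is_hate in tweets:
        assert is_off or not is_hate  # hate speech is a subset of offensive speech
        totals[group] += 1
        offensive[group] += is_off
        hate[group] += is_hate
    return {g: (offensive[g] / n, hate[g] / n) for g, n in totals.items()}

print(group_rates(tweets))
# women_of_color: all tweets offensive, half hate; other: half offensive, none hate
```

Comparing the resulting rates across groups is what supports claims of the form "group X is targeted with more hate speech than group Y."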

[CONTENT WARNING – Some of the examples in this report include profanity and threats that some may find offensive or triggering.]

Read the full report.

CDT’s Future of Speech Online 2024 Event Spotlights AI, Elections & Speech https://cdt.org/insights/cdts-freedom-of-speech-online-2024-event-spotlights-ai-elections-speech/ Thu, 19 Sep 2024 15:29:18 +0000 https://cdt.org/?post_type=insight&p=105639 Since 2017, the Center for Democracy & Technology has partnered with Stand Together Trust to host experts from around the country and representing a range of perspectives for an event examining the Future of Speech Online (FOSO) – a gathering, including leaders from government, civil society, industry and academia, to examine how free expression is […]

Since 2017, the Center for Democracy & Technology has partnered with Stand Together Trust to host experts from around the country, representing a range of perspectives, for an event examining the Future of Speech Online (FOSO) – a gathering of leaders from government, civil society, industry, and academia to examine how free expression is being shaped by technology. Against the backdrop of a record number of global elections and the advent of the AI age, FOSO 2024 fittingly spotlighted “AI, Elections, & Speech.” Leading voices on a range of issues participated in a two-day event (September 16 & 17) full of discussions on how to preserve free expression and proactively protect election integrity.

Keynote speaker Renee DiResta kicked off the event, pointing out how a fragmented social media landscape and isolated information silos have created an environment conducive to spreading rumors. Echo chambers of mis- and disinformation create splintered realities that can undermine the integrity of elections. Despite these challenges, DiResta remained optimistic about society’s ability to create systems and design solutions that preserve democracy and support free speech.

“What’s At Stake?” was both an urgent question and fitting title of the first FOSO panel which featured CDT’s own Kate Ruane and Tim Harper, Factchequeado’s Laura Zommer, Dangerous Speech Project’s Cathy Buerger, and The Leadership Conference on Civil and Human Rights’ David Toomey. Harper pointed out how generative AI can create new ways of spreading existing narratives and expressed concerns about AI’s ability to hypertarget individuals with misleading information. He highlighted CDT’s new report which showed how AI chatbots could impact the right to vote and overall election integrity for voters with disabilities. Zommer observed that these trends are not unique to the U.S., but happening at a global scale. The panelists noted how efforts like counterspeech, fact checking, and reporting misinformation can help improve the quality of our information environment. 

A fireside chat focused on the major role companies have in protecting elections closed out FOSO’s first day. The conversation, moderated by CDT CEO Alexandra Reeve Givens, featured Meta’s Roy Austin and Microsoft’s Ginny Badanes. Austin and Badanes addressed some of the anticipated harms that AI-generated deepfakes can have on elections. They also noted some benefits of AI, especially as a tool to facilitate content moderation at an unprecedented scale, and pointed to watermarking and image labeling of AI-generated content as a potential way to combat deepfakes. The panelists emphasized how technology companies are talking more publicly about these issues and collaborating to combat the coordinated inauthentic behavior that continues to pollute online environments.

During the second day of FOSO, discussions around the infrastructure of the information ecosystem continued. Internet Sans Frontières’ Julie Owono, WITNESS’s Sam Gregory, and Wikimedia Foundation’s Costanza Sciubba Caniglia participated in “Infrastructure of Truth,” a panel discussion moderated by the National Press Club’s Beth Francesco. The panelists agreed that short-term solutions currently used to combat misinformation, like fact-checking, are not sustainable in their current form. Gregory called for “a mindset shift” toward building systems that address content moderation at scale. Owono observed a need to refocus from attempting to ensure every piece of content is “right” to building societal trust in large-scale content moderation systems.

The recent Supreme Court case decisions in Murthy v. Missouri and Netchoice v. Paxton were the focus of the next panel, titled “Free Speech on the Ballot.” CDT’s Becca Branum led a conversation with Knight First Amendment Institute’s Alex Abdo, Lawyers’ Committee’s David Brody, and National Coalition Against Censorship’s Lee Rowland. The conversation explored how these decisions left much unanswered about how the government should approach speech on private platforms. As CDT’s Branum noted in an earlier blog post, “there is much work to do to understand the intricacies of how the First Amendment protects social media platforms and their users.”

The concluding panel of FOSO, “Post Mortems: Researcher Access to Data and Oversight Mechanisms to Study the Election,” addressed growing restrictions on researchers’ access to platform data. Rebekah Tromble and Brandon Silverman, both of George Washington’s Institute for Data, Democracy & Politics, Atlantic Council’s Rose Jackson, and Center for Studies on Freedom of Expression’s Agustina Del Campo examined the ethics of data access and its importance to preserving election integrity. Panelists cited Meta’s decision to shutter CrowdTangle as a key example of the impact corporate decisions can have on civil society’s ability to monitor elections in the U.S. and abroad. (CDT has advocated for Meta to reinstate this open access tool.)

This year’s FOSO event provided a forum to discuss both emerging problems and potential solutions to some of democracy’s most pressing issues. Greater access, increased collaboration, and clearer policies were just some of the paths forward that arose in the conversations. As the sociotechnical landscape continues to evolve, there’s work to be done to preserve democratic values and protect civil liberties. “We are not passive observers,” DiResta asserted in her keynote address. “The power to shape our future is in our hands.”

Missed the event? Links to stream FOSO 2024 below.
Day 1 Recording
Day 2 Recording

Report — Rules of the Road: Political Advertising on Social Media in the 2024 U.S. Election https://cdt.org/insights/report-rules-of-the-road-political-advertising-on-social-media-in-the-2024-u-s-election/ Thu, 19 Sep 2024 04:01:00 +0000 https://cdt.org/?post_type=insight&p=105618 Report also authored by Laura Kurek, CDT Intern, Ph.D. student, University of Michigan School of Information. CDT interns Ebie Quinn (Harvard Law School) and Saanvi Arora (UC Berkeley) also contributed research. In 2006, just two years after Facebook was founded, some college students running in student body elections reached out to the young company with […]

CDT report, entitled "Rules of the Road: Political Advertising on Social Media in the 2024 U.S. Election." White document on a grey background.

Report also authored by Laura Kurek, CDT Intern, Ph.D. student, University of Michigan School of Information. CDT interns Ebie Quinn (Harvard Law School) and Saanvi Arora (UC Berkeley) also contributed research.

In 2006, just two years after Facebook was founded, some college students running in student body elections reached out to the young company with an idea: what if it were possible to target campaign messages to college students on Facebook at a specific university using paid ads on the platform? Just like that, the era of online political advertising was born.

For a decade, online political advertising quietly gained traction with large and small political campaigns alike. Political campaigns now invest heavily in their online presence and have a clear interest in growing their audience – and targeting constituents – using paid advertising.

The governance of social media platforms, including online political advertising, has become more contentious over time. Starting in 2016, high-impact geopolitical events tied to strategically targeted online political advertisements led Congress and intelligence agencies to investigate the tools and services that technology companies were offering political advertisers. Social media companies have since faced consistent pressure to account for their content moderation policies and practices and for how they collect and process user data, especially to target ads. Concerns about COVID-19 misinformation and the emergence of content moderation as a “culture war” talking point have made this issue even more salient.

In response, social media companies’ political advertising policies have continued to evolve. Companies have responded by restricting the types of personal characteristics that advertisers can use to target political ads, implementing residency and authenticity requirements to prevent foreign interference, and creating advertising databases containing all political ads served on their platforms to provide greater transparency into the paid content users see. Developments like the insurrection on January 6th, 2021, the onset of the Ukraine and Israel-Gaza wars, and the emergence of generative AI have further shaped platform political advertising policies. In parallel, legislators and regulatory agencies around the world have sought to regulate online political advertising, though federal legislators in the U.S. have failed to agree on an approach.

This brief seeks to demystify how social media platforms define and govern online political advertising in the United States, specifically focusing on the ways in which policies at seven different companies—Google’s YouTube, Meta, Microsoft’s LinkedIn, Reddit, Snap, TikTok, and X (formerly Twitter)—have changed in the last four years. We review each company’s policies in depth, identifying 13 different components of these policies, and highlight areas where policies have changed since 2020 and 2022. We then compare these policies and isolate major trends across the industry.

Read the full report.

Brief – Generating Confusion: Stress-Testing AI Chatbot Responses on Voting with a Disability https://cdt.org/insights/brief-generating-confusion-stress-testing-ai-chatbot-responses-on-voting-with-a-disability/ Mon, 16 Sep 2024 04:01:00 +0000 https://cdt.org/?post_type=insight&p=105472 [ PDF version ] Introduction Even as the “year of elections” draws to a close, the United States’ elections loom. From cyberattacks to mis- and disinformation spread on social media by foreign and domestic actors, digital technology has impacted the discourse, information environment, and perceived legitimacy of American elections in recent cycles. In 2024, the […]

[ PDF version ]

Introduction

Even as the “year of elections” draws to a close, the United States’ elections loom. From cyberattacks to mis- and disinformation spread on social media by foreign and domestic actors, digital technology has impacted the discourse, information environment, and perceived legitimacy of American elections in recent cycles. In 2024, the growth in popularity and availability of chatbots powered by artificial intelligence (AI) introduces a new and largely untested vector for election-related information and, as our research found, misinformation. 

Many communities are concerned that digitally available misinformation will impact the ability of community members to vote, including the disability community. However, up until this point, there has been little research done surrounding the integrity of the online information environment for voters with disabilities, and even less focus on the quality and integrity of information relating to voting with a disability that one can receive from a generative AI chatbot. 

Voters, both with and without disabilities, may use chatbots to ask about candidates or ask practical questions about the time, place, and manner of voting. An inaccurate answer to a simple question, such as how to vote absentee, could impede the user’s exercise of their right to vote. There are numerous opportunities for error, including potentially misleading information about eligibility requirements, instructions for how to register to vote or request and return one’s ballot, and the status of various deadlines – all of which may vary by state. Similarly, misleading or biased information about voting rights or election procedures, including the role of election officials and what accessibility measures to expect, could undermine voters’ confidence in the election itself. Both of these concerns – diminishing an individual’s ability to vote or likelihood of voting, and reducing perceptions of election integrity – can be amplified for voters with disabilities, particularly considering that the laws surrounding accessible voting are even more complex and varied than those regulating voting more generally.

This report seeks to understand how chatbots, given the range of ways they interact with the electoral environment, could impact the right to vote and election integrity for voters with disabilities. In doing so, we tested five chatbots on July 18th, 2024: Mixtral 8x7B v0.1, Gemini 1.5 Pro, ChatGPT-4, Claude 3 Opus, and Llama 2 70b. Across 77 prompts, we found that:

  • 61% of responses had at least one type of insufficiency. Over one third of answers included incorrect information, making it the most common problem we observed. Incorrect information ranged from relatively minor issues (such as broken web links to outside resources) to egregious misinformation (including incorrect voter registration deadlines and falsely stating that election officials are required to provide curbside voting).
  • Every model hallucinated at least once. Each one provided inaccurate information that was entirely constructed by the model, such as describing a law, a voting machine, and a disability rights organization that do not exist.
  • A quarter of responses could dissuade, impede, or prevent the user from exercising their right to vote. Every chatbot gave multiple responses to this effect, including inaccurately describing which voting methods are available in a given state, and all five did so in response to prompts about internet voting and curbside voting.
  • Two thirds of responses to questions about internet voting were insufficient, and 41% included incorrect information. Inaccuracies about internet voting ranged from providing incorrect information about assistive technology, to erroneously saying electronic ballot return is available in states where it is not (like Alabama) and, inversely, that it is not available in states where it is (like Colorado and North Carolina).  
  • Chatbots are vulnerable to bad actors. They often rebuffed queries that simulated use by bad actors, but in some cases they complied, providing information about conspiracy theories and arguments for why people with intellectual disabilities should not be allowed to vote.
  • Responses often lacked necessary nuance. Chatbots did not provide crucial caveats about when polling places would be fully accessible, and misunderstood key terms like curbside and internet voting.
  • When asked to provide authoritative information, a positive use case for chatbots, almost half of answers included incorrect information. The scope of inaccuracies included incorrect webpage names and links and a recommendation for users to seek assistance from an organization that does not exist. This is particularly concerning because using chatbots as a starting point for finding other sources of information is an important and frequently recommended use case.
  • Outright bias or discrimination were exceedingly rare, and models often used language that was expressly supportive of disability rights.

Read the full report.

Check out the full Chatbot Responses on Disability Rights and Voting Dataset as a CSV file (download).
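As a rough illustration (not the report’s actual analysis code), aggregate figures like the 61% insufficiency rate above can be computed from a response-annotation table such as the released CSV. The column names and sample rows below are hypothetical, not the dataset’s real schema:

```python
import csv
import io

# Hypothetical annotations: one row per chatbot response, with 0/1 flags
# marking whether the response was insufficient or contained incorrect
# information. Column names are illustrative only.
sample = io.StringIO(
    "response_id,insufficient,incorrect\n"
    "1,1,1\n"
    "2,1,0\n"
    "3,0,0\n"
    "4,1,1\n"
)

rows = list(csv.DictReader(sample))
total = len(rows)

# Share of responses flagged for each problem type.
insufficient_rate = sum(int(r["insufficient"]) for r in rows) / total
incorrect_rate = sum(int(r["incorrect"]) for r in rows) / total

print(f"{insufficient_rate:.0%} insufficient, {incorrect_rate:.0%} incorrect")
# → 75% insufficient, 50% incorrect
```

The same pattern scales to the full dataset by swapping the in-memory sample for `open("dataset.csv")` and adding a column per insufficiency type.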

The post Brief – Generating Confusion: Stress-Testing AI Chatbot Responses on Voting with a Disability appeared first on Center for Democracy and Technology.

Helping Election Officials Combat Misinformation in 2024: An Updated Course from CDT and CTCL
https://cdt.org/insights/helping-election-officials-combat-misinformation-in-2024-an-updated-course-from-cdt-and-ctcl/
Wed, 21 Aug 2024 15:12:29 +0000

Election officials face an increasingly demanding set of responsibilities, from protecting their systems from cybersecurity threats to managing and responding to emergencies. At the same time, election officials increasingly report facing threats and harassment, often motivated by mis- and disinformation, which may be contributing to increasing turnover rates. These challenges are compounded by limited budgets for training staff on new and emerging threats. 

Despite these headwinds, election officials have become the most trusted sources of authoritative information about elections in the country. Their voices are crucial for tamping down misinformation about voting procedures, but they must first be equipped with the necessary skills and strategies.

That’s why, this year, CDT partnered with the Center for Technology and Civic Life (CTCL) to develop “Combatting Election Misinformation,” a course designed to help election officials navigate and respond to misinformation.

The course, originally created ahead of the 2020 election, relaunched in June of 2024. Much has changed since the original training was developed in 2020: the rollout of the COVID-19 vaccine and its attendant social cleavages, the Stop the Steal movement and January 6th insurrection, and the introduction of widely available generative AI. 

Many of the goals for this course remain the same as in 2020, as election officials are key actors in upholding election integrity, providing authoritative information about the time, place, and manner of elections, and countering voter suppression misinformation. The course is intended to support election officials in those roles by teaching terms and concepts related to information operations, helping them identify and respond to mis-, dis-, and malinformation, and preparing them to respond with a defensive communications strategy.

The changes to the course reflect the changing election information environment ahead of November. Generative AI tools, like chatbots and image generators, have the potential to increase the scale and sophistication of false and misleading content online. Deepfakes are a major concern, as illustrated by the robocall that impersonated President Biden and told New Hampshire residents not to vote in January’s primary. Some AI image generators produce photorealistic images, such as Joe Biden in a hospital bed and ballot box theft in Venezuela, that could be used to support false or misleading narratives. OpenAI recently released a report detailing attempts by Iranian operatives to use ChatGPT to develop messaging about US presidential candidates for use in an online influence operation. Generative AI can also make microtargeting easier, opening the door to hyperlocal disinformation that targets particular demographic groups, such as minority language communities. While these technologies have not yet caused the massive disruptions that some predicted, the risks and examples observed so far remain serious.

Given the urgency of these concerns, CDT was pleased to partner with CTCL to train election officials across the country. In the past, we have teamed up with CTCL to contribute to courses on cybersecurity and post-election audits. We hope that this latest course will give election officials an additional set of tools to help ensure that elections remain secure, fair, and trusted.
