Student Privacy Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/privacy-data/student-privacy/

Banning Kids from Social Media Remains a Bad and Unconstitutional Idea
https://cdt.org/insights/banning-kids-from-social-media-remains-a-bad-and-unconstitutional-idea/
Wed, 05 Feb 2025

Today, the Senate Commerce Committee is set to mark up the “Kids Off Social Media” Act of 2025 (“KOSMA”). The bill would prohibit children under the age of 13 from accessing certain social media services and would make changes to the Children’s Internet Protection Act that would increase surveillance in schools and disproportionately harm low-income children and their families who rely on school-provided devices to access the internet. The 2025 bill is nearly identical to a version the Committee passed in 2024. It was a bad idea then, and it’s a bad idea now. The Senate Commerce Committee should not advance this legislation without significant changes to protect children’s constitutional rights and students’ access to education.

Today, CDT joined other digital rights, civil liberties, and civil rights organizations, including ACLU, New America’s Open Technology Institute, EFF, and Fight for the Future, in a letter (also on ACLU’s website) detailing our concerns with the ways that KOSMA would harm children. 

Most worrying among them is the ban on young people accessing social media services. Millions of young people currently use these services, including common apps like Facebook Messenger, YouTube, Pinterest, and Snap, to connect with peers, family, friends, and other trusted people. Banning access to these platforms would be an extraordinary infringement on young people’s First Amendment rights and would represent a large expansion of government authority over who can access which online services. Courts considering similar restrictions on children’s access to social media at the state level have found they likely violate the First Amendment.

Children have a right to express themselves and connect with others online, including via social media. KOSMA is an unconstitutional violation of those rights.

The bill’s expansion of the Children’s Internet Protection Act (CIPA) is also deeply concerning. KOSMA would codify invasive and unproven surveillance software in schools. Although CIPA’s monitoring requirement was never intended to authorize surveillance of children and families, research shows that schools have interpreted it to require AI-driven, persistent monitoring of students, even though the law was enacted long before this technology existed. This misinterpretation has given tech companies the opportunity to sell invasive and unproven online activity monitoring technologies to be used against kids in schools. The “Eyes on the Board” section of KOSMA would reinforce the notion that a school’s access to E-Rate funding, which provides discounts for internet services, depends on compliance with language that education agencies have misinterpreted as requiring AI-powered spyware to surveil what students are doing online during and outside of school hours.

Under the threat of lost E-Rate funding, schools have also turned to surveillance tech companies for content filtering technologies that are known to be overly restrictive, blocking students’ access to critical information even for things like schoolwork and impairing their ability to complete coursework. CDT polling revealed that almost three-quarters of teachers and students whose school uses this technology report that a student had trouble completing assignments because of it. Teachers also reported that this disproportionately impacts marginalized students, with about one-third agreeing that content associated with gender expansive students is more likely to be filtered or blocked.

Finally, as written, KOSMA would disproportionately impact students who rely on school-provided internet and devices to access online services. Reinforcing the misconception that schools must use these surveillance and filtering technologies on school-provided devices would exacerbate the digital divide by limiting these students’ ability to access certain sites outside of the classroom.

Read the full letter.

Out of Step: Students, Teachers in Stride with EdTech Threats While Parents Are Left Behind
https://cdt.org/insights/out-of-step-students-teachers-in-stride-with-edtech-threats-while-parents-are-left-behind/
Wed, 15 Jan 2025

Layers of glitch effects, in shades of blue, purple, and pink. White text: “Out of Step.” Green text: “Students, Teachers in Stride with EdTech Threats While Parents Are Left Behind.”

Since the Center for Democracy & Technology (CDT) began polling school stakeholders in 2020 about their experiences with educational data and technology (edtech) in classrooms, the sheer number of edtech products and use cases has skyrocketed. Many of the tools being proactively implemented in K–12 schools across the country and adopted by kids in their personal capacity are well-intentioned; however, some have had unintended consequences, such as privacy violations and negative effects on students from historically marginalized communities.

To continue tracking the impacts of edtech tools in the classroom and at home, CDT surveyed 1,028 parents of students in grades 6–12, 1,316 students in grades 9–12, and 1,006 teachers of grades 6–12 to understand their opinions on and experiences with student privacy, emerging technologies, parent engagement, school policies related to gender expansive students, content filtering and blocking software, student activity monitoring, and generative artificial intelligence (AI). Any subgroup n-sizes that differ from the total sample size are denoted throughout this report.

The definitions of these various edtech issues, as shown to survey respondents, are provided throughout the body of the report, and other key terms are included on page 22. This research builds on CDT’s extensive body of quantitative and qualitative research, which is referenced on page 23. For additional details about the survey findings in this report, please reference the comprehensive slide deck.

Read the full report.

Explore the slide deck on the research findings.

Brief – Unique Civil Rights Risks for Immigrant K-12 Students on the AI-Powered Campus
https://cdt.org/insights/brief-unique-civil-rights-risks-for-immigrant-k-12-students-on-the-ai-powered-campus/
Wed, 15 Jan 2025

Ongoing public discourse has sparked renewed questions about the intersection of immigration and K-12 schools. Recent statements indicate that the incoming presidential administration will focus on immigrant children in schools, including through efforts to block undocumented children from attending public school and to take immigration enforcement actions on school grounds. State leaders are taking a similar interest in the issue, with some publicly announcing plans to challenge the constitutional right to an education that Plyler v. Doe guarantees undocumented students, and others sending notices home to parents regarding their plans to “[stop] illegal immigration’s impact” on schools.

I. Introduction

Immigrant students are protected from discrimination on the basis of national origin in school under Title VI of the Civil Rights Act of 1964. National origin discrimination occurs when someone is harassed, bullied, or otherwise treated differently “stemming from prejudice or unfounded fears about their national origin (including the country or part of the world they or their family members were born in or are from, their ethnicity or perceived ethnic background, and/or the language they speak).” This brief focuses on the unique civil rights considerations for immigrant students and how schools can fulfill these legal obligations as they apply to their use of data and technology. Specifically, it:

  • Defines who immigrant students are and how they may be present in the U.S.;
  • Analyzes the unique circumstances and risks that immigrant students face in schools;
  • Identifies the ways in which data and technology could run afoul of immigrant students’ civil rights; and
  • Provides recommendations to school leaders to ensure their use of data and technology is consistent with civil rights laws and supports the success of all students.

Although this brief focuses on non-citizen immigrants because of the unique legal risks and vulnerabilities they face, it is important to note that other groups, like immigrants who become U.S. citizens, English Learners, and those who are merely perceived to have been born outside of the U.S., are also protected from national origin discrimination. 

Read the full brief.

Brief – Education Leaders’ Guide to Complying with Existing Student Privacy and Civil Rights Laws Amidst an Evolving Immigration Landscape
https://cdt.org/insights/brief-education-leaders-guide-to-complying-with-existing-student-privacy-and-civil-rights-laws-amidst-an-evolving-immigration-landscape/
Wed, 15 Jan 2025

With immigration enforcement likely to intensify this year, it is critical that school administrators comply with existing privacy and civil rights laws with respect to the data they collect and the technology that they use. CDT research suggests that some schools are currently using data and technology to play a role in immigration enforcement. For example, 17 percent of teachers report that their school has shared student information with immigration enforcement in the past school year. 

Further, despite Immigration and Customs Enforcement’s (ICE) traditional policy of refraining from enforcement actions on K-12 school campuses, school officials should recognize that this is a norm – not a prohibition – and that schools need to be prepared to address potential enforcement on campus.

This document provides background on how immigration enforcement may affect K-12 schools and offers recommendations for how schools can meet long-standing legal obligations that remain unchanged regardless of increased enforcement activity.

Read the full brief.

Report – In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools
https://cdt.org/insights/report-in-deep-trouble-surfacing-tech-powered-sexual-harassment-in-k-12-schools/
Thu, 26 Sep 2024

CDT report, entitled “In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools.” Illustration of a cell phone and social media and messaging posts, floating amongst a dark and choppy body of water.

Executive Summary

Generative artificial intelligence (AI) tools continue to capture the imagination, but the technology’s damaging potential is increasingly revealing itself. One especially problematic use of generative AI is the creation and distribution of deepfakes online, as the vast majority of deepfakes contain sexually explicit intimate depictions. In the past school year (2023-2024), the rise of generative AI has collided with a long-standing problem in schools: the sharing of non-consensual intimate imagery (NCII). K-12 schools are often the first to encounter large-scale manifestations of the risks and harms facing young people when it comes to technology, and NCII, both deepfake and authentic, is no exception. Over the past year, anecdotes of children being the perpetrators and victims of deepfake NCII have been covered by major news outlets, elevating concerns about how to curb the issue in schools. But just how widespread is NCII really? And how well equipped are schools to handle this challenge?

The Center for Democracy & Technology (CDT) conducted surveys of public high school students and public middle and high school parents and teachers from July to August 2024 to understand the prevalence of deepfakes, NCII, and related issues in K-12 schools. CDT’s research contributes to a better understanding of these issues within the U.S. educational context, as no research has yet been published that both quantifies the rising prevalence of deepfakes and NCII in K-12 schools and reflects the perspectives of teachers, parents, and students.

In short, concerns over the widespread nature of NCII, both authentic and deepfake, in public K-12 schools across the country are well-founded:

  • NCII, both authentic and deepfake, is a significant issue in K-12 public schools: Students and teachers report substantial amounts of NCII, both authentic and deepfake, depicting individuals associated with their school being shared in the past school year (2023-2024), with the primary perpetrators and victims being students.
  • Female and LGBTQ+ students are the most alert to the impact of NCII: Students and teachers report that female students are more often depicted in deepfake NCII that is shared by their classmates, and both female and LGBTQ+ students say that they have lower levels of confidence in their schools’ ability to prevent and respond to the increasing threat of deepfake NCII.
  • Schools are not doing enough to prevent students from sharing NCII: Very few teachers report that their schools have policies and procedures that proactively address the spread of authentic and deepfake NCII. Instead, schools reactively respond once there has been an incident at their school. This unfortunately leaves many students and parents in the dark and seeking answers from schools that are ill-equipped to provide them. 
  • When schools do respond, they focus heavily on imposing serious consequences on perpetrators without providing support to victims of NCII: Both students and teachers report perpetrators receiving harsh penalties, including expulsion, long-term suspension, and referrals to law enforcement. But students and teachers say that schools provide few resources for victims of NCII, like counseling or help removing damaging content from social media.
  • While stakeholders inside the school building, like students and teachers, report that NCII in all its forms is a significant issue in K-12 schools, parents find themselves out of the loop: Parents are significantly less aware of these threats or the harms that they pose. At the same time, parents agree that more education of students is needed and feel they should play a primary role in providing it. 

Although addressing NCII, both authentic and deepfake, will require a long-term, multistakeholder approach, one thing is clear – NCII has a significant effect on students, and schools need to do more now to protect them from its harms and create a learning environment that is free from sexual harassment. Efforts to do so should center on bolstering prevention measures, improving victim support, and engaging parents.

Read the full report.

Explore the slide deck.

The Intersection of Parental Rights Proposals and Edtech: Liberty and Freedom or Civil Rights Erosion
https://cdt.org/insights/the-intersection-of-parental-rights-proposals-and-edtech-liberty-and-freedom-or-civil-rights-erosion/
Fri, 16 Aug 2024

by CDT Intern Hank Elmajian

CDT research shows that parents want to be more involved in decision-making about how their school uses data and technology (edtech) to educate their child, but schools fall short in providing opportunities for parents to give meaningful input. At the same time, parents are demanding more visibility and rights to determine how school administrators interact with and administer all educational services, not just those related to edtech. Recently, this demand has culminated in the introduction, and sometimes the passage, of federal and state parental rights bills.

CDT wanted to understand whether these proposals to expand parental rights include increased opportunities for parental oversight when it comes to determining whether and how technology is used in schools. We analyzed a federal parental rights bill as well as 30 bills from the 2022-23 and 2023-24 sessions of state legislatures with a variety of partisan compositions. These bills met a range of outcomes: some were vetoed, some died in committee, some remain in committee, and some became law.

Most bills that purport to increase parental rights provide limited opportunities to expand parental oversight in ways that would mitigate the harmful effects of edtech platforms. Instead, many propose parental rights that undermine student privacy through the use of technology that tracks student demographics, particularly for gender expansive students (e.g., transgender, intersex, and non-binary students).

Limited Focus on EdTech Transparency and Student Activity Monitoring

Of the bills and laws reviewed, the majority did not specifically address ethical data and privacy practices within edtech. To the extent they were addressed, the legislative proposals reflected three general trends: 

  • Reinforce existing rights of parents to access and control student data; 
  • Increase control over monitoring on school-issued devices; and
  • Mandate timely notification of data breaches.

Reinforce existing rights of parents to access and control student data

As summarized below, a few states, as well as the federal parental rights bill, reinforced a parent’s right to opt out of surveys collecting data on sensitive, non-academic topics like political and religious affiliations, legally recognized relationships, and family income. This reiterates a parental right provided by the federal Protection of Pupil Rights Amendment (PPRA) that has been in place since 1978. In addition to emphasizing this long-standing right, states have expanded the type of information collection for which parents can opt out to include biometric information that could be “used for the purpose of electronically identifying that person with a high degree of certainty.”

In addition to reinforcing opt-out rights, many parental rights proposals reiterated a parent’s right to inspect, review, and correct their child’s education records; this long-standing right was established in 1974 by the Family Educational Rights and Privacy Act (FERPA). Although these are largely not new rights, including them in parental rights proposals could lead more parents to exercise them, making it even more important that schools follow all applicable student privacy laws.

Increase control over monitoring on school-issued devices

A few proposals recognized the harms that arise from online monitoring of school-issued devices and sought to minimize those risks. Under these bills, a school district or technology provider could not electronically monitor or access the location tracking features of a school-issued device; the audio or visual receiving, transmitting, or recording features of a school-issued device; or student interactions with a school-issued device, including keystrokes and web-browsing activity.

These bills include exceptions allowing the monitoring or access of one or more of these features in certain circumstances. For example, Ohio state legislators would require schools to notify parents whenever a banned feature of a school-issued device was monitored or accessed, and to identify which exception was triggered, such as monitoring granted through judicial order or monitoring deemed reasonably necessary to respond to an imminent threat to life or safety. Rhode Island and Minnesota, on the other hand, would require this notification only if there is an imminent threat to life or safety. The sketch below models this notification logic.
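To make the difference between these approaches concrete, here is a minimal sketch of the notification rules as summarized above. It is purely illustrative: the feature names, exception labels, and per-state rules are simplified assumptions drawn from this post, not statutory terms, and the actual bills contain more exceptions and nuance.

```python
# Illustrative sketch only: a simplified model of the parental-notification
# rules summarized above. Names are shorthand, not statutory language.

BANNED_FEATURES = {"location", "audio_visual", "keystrokes", "web_browsing"}
EXCEPTIONS = {"judicial_order", "imminent_threat"}  # simplified subset

def notification_required(state: str, feature: str, exception: str) -> bool:
    """Return True if the bill, as summarized, would require notifying parents."""
    if feature not in BANNED_FEATURES:
        return False  # feature is not restricted under these bills
    if exception not in EXCEPTIONS:
        raise ValueError("monitoring outside a listed exception is prohibited outright")
    if state == "OH":
        # Ohio: notify whenever a banned feature is monitored or accessed,
        # whichever exception applied.
        return True
    if state in {"RI", "MN"}:
        # Rhode Island and Minnesota: notify only for imminent threats.
        return exception == "imminent_threat"
    raise ValueError(f"no rule modeled for state {state!r}")

# Judicial-order monitoring triggers notice in Ohio but not in Rhode Island:
assert notification_required("OH", "keystrokes", "judicial_order")
assert not notification_required("RI", "keystrokes", "judicial_order")
```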

Mandate timely notification of data breaches

One significant proposal, included only in the federal parental rights bill, is the right of parents “to timely notice of any major cyberattack against their child’s school that may have compromised student or parent information.” This would grant parents a degree of transparency they do not currently have. It is especially relevant given that, in 2023, 954 data breaches were reported in U.S. schools and colleges, exposing 4.3 million records. That trend has grown over the past decade and shows no sign of slowing, so states should include similar provisions in their own proposals.

Parental Rights that Undermine Privacy of Gender Expansive Students

Where bills that aim to increase parental rights do address student privacy issues, and the relevant technology that collects individual-level student information, they specifically target gender expansive students (e.g., transgender, intersex, and non-binary students).[1] They do so in two main ways: 

  • Forced disclosure or “outing” policies of students; and
  • Criminalizing support of gender expansive students.

Forced Disclosure or “Outing” Policies

Forced disclosure or “outing” policies require schools to notify parents if a child asks to be addressed using a name/pronoun different from the name/pronoun in the school-maintained education record, regardless of whether the student gives their permission. These proposals primarily affect gender expansive students. This not only implicates those students’ privacy, but also the technology in which this information is maintained, which is almost certainly a web-based student information system. Some of the proposals use broad language, resulting in instances where notification would be required if a school employee merely suspects a gender/pronoun change.

This broad scope, combined with school monitoring, could result in a gender expansive student being outed to their family without their consent, even if they are just curious and exploring their gender identity. This can occur when student activity monitoring software flags queries related to gender and transitioning. For example, under a South Carolina bill, any school employee who suspects or learns that a student is questioning the gender they were assigned at birth would have to notify the student’s parents, so any flags generated by student activity monitoring that indicate a student is gender expansive would result in notification being sent home. Previous CDT research shows that 19 percent of all students whose school uses student activity monitoring report that they or someone they know has been outed. To the extent these policies result in gender expansive students being treated differently by outing them, they could implicate civil rights protections and result in a claim of disparate treatment under Title IX, which protects against discrimination on the basis of sex.

Criminalizing Support

Related to forced disclosure policies are proposals that would punish teachers and school administrators for refusing to “out” gender expansive students to their families. Some would penalize educators who do not notify parents when their child requests a name/pronoun change, and some would even allow parents to bring a cause of action against a school administrator or teacher who refuses to comply with the name and/or pronoun the parent sets for the student. These proposals would make it incredibly difficult for teachers or school administrators to support gender expansive students, as doing so would expose them to adverse employment actions or even civil litigation if they decline to disclose a requested name/pronoun change to parents.

Given the ubiquity of educational records in today’s schools, the existence of an electronic paper trail and how it could be used to enforce these bills would give rise to a very real concern for many school employees. Oftentimes this trail can be as simple as email or other electronic communications between the teacher and the student where the student is addressed by their preferred name/pronouns. As these policies would tend to create hostile learning environments for gender expansive students, they could implicate civil rights protections and result in a claim of hostile learning environment under Title IX.

Conclusion

The increased legislative attention on parental rights in K-12 schools has focused not on ethical data practices in edtech but on the privacy of gender expansive students (and the technology that holds their information). Parents want to play a greater role in informing whether and how data and technology are used in schools, so policymakers should heed those voices instead of putting forth policies that would violate certain students’ privacy and potentially their civil rights.

Bills and Laws that Increase Parental Rights

Finding: Reinforce existing rights of parents to access and control student data
1. California: AB801, 2023 Session, § 22584(d)(3)(A) (2023)
2. Colorado: HCR 23-1004, 74th Gen. Assem., § 32(2)(b)(III) (2023)
3. Hawaii: HB1715, 32nd Legislature, § (a)(7)(K) (2024); HB1715 § (a)(4) (2024)
4. Idaho: SB1102, 67th Legislature, § 33-6001(6) (2023)
5. *Illinois: Student Online Personal Protection Act, ch. 105, Ill. Comp. Stat. Ann., § 33(c)(1)(2)(3) (2021)
6. *Iowa: Ia. Legis. Serv. Ch. 91, Sec. 15 § 279.79(1) (2023)
7. Maine: LD1953, 131st Legislature, Sec. 26 § 4(A)(5)(j) (2023); LD1953, Sec. 26 § 3(A)(4) (2023)
8. Massachusetts: SB280, 193rd Gen. Ct., § 34K(a)(3) (2023)
9. Montana: SB337, 68th Legislature, Sec. 5 § 40-6-701(2)(b)(k) (2023)
10. Nebraska: LB374, 108th Legislature, Sec. 3 § 4 (2023)
11. New Jersey: A531, 221st Legislature, § 6a(6)(k) (2023); A531, § 5a(5) (2023)
12. *North Carolina: Parents’ Bill of Rights, North Carolina Laws S.L. 2023-106, § 115C-76.25(a)(10)(11) (2023); Parents’ Bill of Rights, § 114A-10(4) (2023)
13. *Oklahoma: O.S. § 25-2003(7)(q) (2023)
14. Pennsylvania: HB 319, General Assembly of Pennsylvania, § 7(2) (2023)
15. Virginia: HB1260, 2024 Session, § 22.1-1.1(6)(7) (2024); HB1260 § 22.1-1.1(5) (2024)
16. Federal: HR 5, Title II, § 202(d) (Parental Notification) (2023)

Finding: Increase control over monitoring on school-issued devices
1. *Minnesota: Minn. Stat. Ch. 69 § 13.32(14)(a) (2022)
2. Ohio: SB29, 135th Gen. Ass., § 3319.326 – § 3319.327 (2024)
3. Rhode Island: H7046, 2024 Session, § 16-114-2(a)(b)(c)(d) (2024)

Finding: Parental rights that undermine privacy of gender expansive students
1. *Idaho: Idaho Laws Ch. 314, Sec. 2 § 67-590B (2024)
2. *Indiana: Ind. Legis. Serv. P.L. 248-2023, Chap. 7.5 § 2(a)(b) (2023)
3. *Iowa: Ia. Legis. Serv. Ch. 91, Sec. 14 § 279.78(2)(3)(4) (2023)
4. *Louisiana: La. Sess. Law Serv. Act 680, § 2122 (2024)
5. Montana: SB337, 68th Legislature, Sec. 1 § 1(e) (2023)
6. New Mexico: HB296, 56th Legislature, §§ 3, 4 (2024)
7. *North Carolina: Parents’ Bill of Rights, North Carolina Laws S.L. 2023-106, § 115C-76.45(a)(5) (2023)
8. South Carolina: S274, 125th Gen. Ass., § 59-32-35(C) (2023)
9. Virginia: SB37, 2024 Session, § 22.1-273.5 (2024)
10. Wisconsin: AB510, 2023 Session, § 48.9865(3)(e) (2023)
11. Federal: HR 5, Title I, § 104(L) (2023); HR 5, Title IV, § 401

* Denotes a state law rather than a bill


[1] See the table “Bills and Laws that Increase Parental Rights” above.

Brief – Unintended Consequences: Consumer Privacy Legislation and Schools
https://cdt.org/insights/brief-unintended-consequences-consumer-privacy-legislation-and-schools/
Thu, 04 Apr 2024

[ PDF Version ]

The United States needs to enact comprehensive privacy legislation that limits the collection, use, and sharing of personal information to protect everyone, including children. Although such a bill has yet to be enacted at the federal level, state and federal legislators have proposed, and in some states enacted, legislation that limits the ways that companies can collect and use individuals’ data. Such legislation also often expands individuals’ rights to access and manage data about them held by companies. If not carefully crafted, however, privacy and child safety laws can inadvertently undermine the ability of schools and their vendors to carry out important educational functions.

Schools, and in turn the vendors they use (for services like managing student records and hosting educational content), have different data needs and uses than non-education private sector companies or non-profits. Quality data is required to support the core functions of schools, including class assignments, transportation, nutrition, and even school funding. School operations can be actively hamstrung by an ill-suited law. Policymakers can, however, create a coherent legal regime that protects everyone’s privacy and safety while ensuring seamless education operations.

Existing Data Laws for Children and Education

A complex legal regime already governs data in an education context, making it important to consider how new laws will interact with these existing frameworks. These authorities include the Family Educational Rights and Privacy Act (FERPA), the Children’s Online Privacy Protection Act (COPPA), the Individuals with Disabilities Education Act (IDEA), and a host of state student privacy laws. 

These laws provide specific protections for a wide range of student data and govern how schools and companies must handle that data. For instance, FERPA addresses schools’ handling of education records and personally identifiable information (PII) of students, affording specific rights to parents to inspect and correct student records, including information maintained by vendors and third parties acting on behalf of the school. IDEA addresses, among other things, special confidentiality concerns for students with disabilities and their families.

Federal education privacy laws like FERPA and IDEA create a floor for student privacy that can then be supplemented by additional state laws. Many states have enacted laws that impose additional obligations on education agencies, such as creating breach notification procedures and limiting the types of information that can be collected about a student. At least 128 state student privacy laws in effect today govern educational agencies and their vendors, providing an ever-widening range of additional protections to supplement federal student privacy laws.

Additionally, COPPA requires certain operators of websites and online services to obtain parental consent before collecting data about children under the age of 13. While not technically a student privacy law, COPPA can impact edtech companies. Although the Federal Trade Commission (FTC) has long been clear that COPPA does not impose obligations on schools, it limits when a school can consent on behalf of a parent, requiring companies to obtain parents’ verifiable consent for any data collection that is not exclusively for educational purposes.

While these frameworks are incomplete and should be improved, those improvements should be made intentionally with an eye to supporting students and school communities. These benefits are unlikely to result from bills that are targeted to other sectors but inadvertently impact education.

Inadvertent Detrimental Effects of General Privacy and Child Safety Laws on Education

Although drafters of privacy and child safety laws that are targeted at the private sector or non-education nonprofits often seek to exempt the education sector, educational institutions may end up being inadvertently covered. This oversight can impact schools’ ability to provide education to their communities, whether by limiting their ability to support students, limiting their ability to obtain core data required to provide critical services, or forcing schools to spend resources complying with additional conflicting or confusing frameworks. This inadvertent coverage can happen in a number of ways:

  • Bills that do not account for vendors providing services to schools, such as a February 2022 version of the Kids Online Safety Act (KOSA 2022), can require vendors to adhere to different standards for data than the school itself (for example, a right to deletion that might obligate a company that holds an education agency’s data to comply with a deletion request that the education agency itself would have the discretion to decline). Such different standards can create inconsistencies in how student data is handled and limit a school’s ability to rely on their vendors to handle data as expected in an educational context. Additionally, bills without clear treatment of vendors may also create legal complexity and inconsistency for schools, as they are ultimately responsible for student data, even if it is held by vendors, which is untenable if vendors are expected to follow different regulations than the school.
  • Bills that do not account for private schools can leave those schools with a legal framework not designed for the broader educational context. As an example, private schools may still be impacted by a bill that tries to account for education contexts by exempting any data covered by or entities subject to FERPA, because FERPA’s scope is limited to schools that accept federal funding, leaving out most private K-12 schools. 
  • Occasionally bills do not differentiate between private sector actors like companies and public sector actors like schools. The Online Privacy Act, for example, would require schools to abide by the same consumer frameworks as private companies, which can limit their ability to provide an effective education.

Legal frameworks that inadvertently cover schools or their vendors can negatively impact how schools deliver educational services. Some requirements can create legal challenges for schools, while some can more directly affect students’ educational experiences.

  • Data deletion: Many consumer data privacy laws, such as the proposed American Data Privacy Protection Act (ADPPA), give consumers the right to request or require that a “covered entity” delete any data about the consumer they hold. That requirement makes sense when a consumer wants to delete, for instance, an advertising profile about themselves. It makes much less sense when a parent wants to delete their child’s disciplinary history from their education record (FERPA already provides the parent the right to correct the record if they feel it is wrong).

    Consequently, these laws must be carefully drafted to ensure that schools are able to maintain their records as necessary to perform their role of educating students. ADPPA protects consumers by outlining data rights they have when data about them is held by “covered entities.” ADPPA, as introduced in Congress, takes care to exempt “governmental entities,” which would include schools, allowing them to maintain control of their records. However, an earlier discussion draft, which did not include this exemption, would have interfered with schools’ recordkeeping requirements. The updated version goes further than exempting schools themselves: it also exempts people and entities that manage data on behalf of governmental entities like schools. This is crucial in an education context where schools rely heavily on edtech vendors in their technology ecosystems. Without this further exception, a vendor could be required to comply with, for instance, a parent’s request to delete their child’s transcripts, thus undercutting the reliability of educational records.
  • Correction: Consumer laws sometimes give consumers the right to correct data about them. As mentioned above, FERPA protects this right as well, giving parents and students the ability to contest inaccuracies in students’ educational records. However, under FERPA, a correction request typically goes through the school, and schools are able to determine whether a correction is warranted. If a consumer law is not drafted to ensure such requests go to the school, but rather enables parents and students to go directly to vendors employed by the school, it could prevent the school from determining whether the correction is valid and, if so, ensuring that the correction is done appropriately and accurately. Although many bills require the requesting consumer to prove the record is incorrect, allowing parents to request a change directly with a vendor rather than through the school could create significant confusion, or potentially allow for students to change grades or otherwise alter their academic record without the school’s awareness or involvement.
  • Profiling: Some laws place restrictions on profiling users under a certain age, where profiling generally means using the user’s past actions or other information about the user to make decisions about how to interact with or present information to the user in the future. Some of these profiling laws protect people in certain age ranges, generally under 13. Without appropriate carve outs for schools, both public and private, these restrictions could apply to many students in K-12 schools. However, some systems used by schools generate profiles of students that schools use to inform their instructional and educational practices. For example, schools may analyze data to personalize student learning in a number of ways, including allowing for individualized project-based learning or personalizing student goals. Disallowing profiling would render these systems ineffective, essentially removing a tool from the toolbox of schools that are aiming to support their most at-risk students.

Students, Especially LGBTQ+, Disapprove of Increased Parental Access To Online Activity

Many recent state and federal online child safety laws propose varying levels of parental access to children’s online activities, assuming that more parental control will keep kids safer. Though our research indicates that parents are already implementing measures to supervise what their children do online and would like additional controls, students do not share this perspective. This is even more pronounced among LGBTQ+ students, who are more likely to experience abuse, neglect, and homelessness if their parents are unsupportive.

Approximately half of students overall report that they would be comfortable with their parents being able to see a report of all of their online activity at school – similar to what their school’s student activity monitoring system captures. This drops to just 35 percent for LGBTQ+ students, compared to 55 percent among their non-LGBTQ+ peers. 

Students express even less support for their parents being able to see a report of their online activity wherever they are – only 42 percent of students said they would be comfortable with this. Again, LGBTQ+ students report being less comfortable than their non-LGBTQ+ peers with their parents having this ability (24 percent vs. 49 percent who would be comfortable). In line with these views, 67 percent of students said they would be likely to turn off their parents’ ability to see their online activity if they could, and LGBTQ+ students would be even more likely to do so, at 74 percent.

As previously stated, parents play an active role in supervising their children’s online activity, but they agree that older students deserve more privacy and less oversight than younger children. Just over 90 percent of parents agree that it is important for them to see everything their child is looking at and doing online in grades 3-8, but that drops to 83 percent for students in grades 9-12.

Given these findings, it is imperative to think about whether state and federal online child safety laws would actually keep students “safe.” The majority of students express not feeling comfortable with increased parental access to their online activity and data, and this sentiment is even more pronounced among LGBTQ+ students. This raises questions about whether parental access would cause a chilling effect and hamper kids’ freedom of speech and expression.

Drafting Legislation that Minimizes Unintended Consequences to the Education Sector

Policymakers should think carefully about whether and how educational institutions are implicated by the privacy and safety bills they draft. If policymakers do not intend to include the education sector, they can take a number of different approaches, illustrated in the sketch following this list.

  • Exempt organizations by class or statutory framework: This approach would entail exempting organizations by class, such as schools and vendors providing services to them (which would then be governed by existing legal frameworks like FERPA and IDEA, as described above). Legislators would have to create a robust definition of schools and vendors to avoid some of the unintended consequences detailed previously. 
  • Exempt by activity: Another approach that could be used to exempt the education sector would be to exempt data by purpose or activity. This would mean exempting data that is acquired and used for a legitimate educational purpose from provisions such as the right to delete (this language might mirror the “school official exception” language in FERPA that allows schools to outsource certain functions to vendors when there is a “legitimate educational interest in the education records”). This approach could allow for schools and their vendors to engage in activities like profiling if they have a legitimate educational reason to do so. 
  • Exempt by existing legal framework: Another approach to exempting schools is to exempt any data already covered by FERPA, as in the North Carolina Consumer Privacy Act. This approach has the advantage of covering both schools themselves and any vendors when they are handling FERPA-protected data. However, as noted previously, most private schools do not receive federal funding and are therefore not governed by FERPA. In that case, private schools and their vendors would not be exempted, and legislators would have to address them specifically, likely through a direct definitional carve out, as there is no legislative framework similar to FERPA that addresses private school data.
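Purely as an illustration of how these three scoping strategies differ, they can be expressed as predicates over a simplified record. None of this is statutory language; the field names and values below are hypothetical assumptions, not terms from any actual bill.

```python
# Hypothetical sketch: the three exemption strategies above as predicates
# over a simplified description of an entity and a data use.

from dataclasses import dataclass

@dataclass
class DataUse:
    entity_type: str        # e.g., "school", "school_vendor", "company"
    purpose: str            # e.g., "legitimate_educational_purpose"
    covered_by_ferpa: bool  # whether FERPA governs this data

def exempt_by_class(u: DataUse) -> bool:
    # Exempt schools and their vendors as organizational classes; requires
    # robust definitions of "school" and "vendor" to avoid gaps.
    return u.entity_type in {"school", "school_vendor"}

def exempt_by_activity(u: DataUse) -> bool:
    # Exempt data acquired and used for a legitimate educational purpose,
    # echoing FERPA's "school official" exception.
    return u.purpose == "legitimate_educational_purpose"

def exempt_by_existing_framework(u: DataUse) -> bool:
    # Exempt FERPA-covered data; this misses most private K-12 schools,
    # which generally do not accept federal funding.
    return u.covered_by_ferpa

# A private school's vendor handling non-FERPA data is exempt under the
# first two strategies but not the third - the private-school gap noted above.
use = DataUse("school_vendor", "legitimate_educational_purpose", False)
assert exempt_by_class(use) and exempt_by_activity(use)
assert not exempt_by_existing_framework(use)
```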

Conclusion

Regardless of how legislators and policymakers choose to approach and account for schools, it is critical to the functioning of our education system that they do so. Student data can be a great tool for improving education delivery and supporting students, but also contains highly sensitive personal information about young people that is worthy of well-designed protections. Policymakers need to ensure that schools can leverage that data effectively even as they take strides to provide much needed protections to consumers and their data.

[ PDF Version ]

Report – Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust
https://cdt.org/insights/report-up-in-the-air-educators-juggling-the-potential-of-generative-ai-with-detection-discipline-and-distrust/
Wed, 27 Mar 2024

CDT report entitled “Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust.” Illustration of an “AI-generated apple” with a parachute flying through an open sky, and “AI-generated” schoolwork, book, pencil & eraser falling behind. Note: this illustration was created solely by a human.

Educators have had a very different experience with generative artificial intelligence (AI) since the 2022-23 school year came to a close. K-12 schools have now had the opportunity to take a breath and regroup to determine how to get a grip on the explosion of generative AI in the classroom – after the education sector was caught off guard when ChatGPT burst abruptly onto the scene during the previous school year.

To understand how teachers are currently interacting with and receiving support on this technology, the Center for Democracy & Technology (CDT) conducted a nationally representative survey of middle and high school teachers in November and December 2023. This research builds on previous CDT findings that highlighted how schools were failing to enact and/or share policies and procedures on generative AI and how, as a result, teachers lacked clarity and guidance, were more distrustful of students, and reported that students were getting in trouble due to this technology. 

This school year, teachers report some welcome movement towards more guidance and training around generative AI – but also areas that are cause for concern:

  • Familiarity, training, and school policymaking on generative AI in schools have increased, but the biggest risks remain largely unaddressed. Teachers report that both they and students have made increasing use of generative AI, and a majority indicate their schools now have a policy in place and provide training to teachers on generative AI. However, schools are providing teachers with little guidance on what responsible student use looks like, how to respond if they suspect a student is using generative AI in ways that are not allowed, and how to detect AI-generated work.
  • Teachers are becoming heavily reliant on school-sanctioned AI content detection tools. A majority of teachers report using school-endorsed AI content detection tools, despite research showing that these tools are ineffective. The proliferation of AI content detection tools could lead to negative consequences for students – given the tools’ known efficacy issues and teachers’ reports of low levels of school guidance on how to respond if they suspect a student has used generative AI in ways they should not.
  • Student discipline due to generative AI use has increased. Although schools are still in the process of setting generative AI policies, the technology has now been in use longer, and more teachers report students experiencing disciplinary consequences than last school year. Historically marginalized students, like students with disabilities and English learners, are at particular risk for disciplinary action.
  • Teacher distrust in their students’ academic integrity remains an issue and is more pronounced in schools that ban generative AI. A majority of teachers still report that generative AI has made them more distrustful of whether their students’ work is actually theirs, and teachers at schools that ban the technology say they are even more distrustful. This is especially concerning because teachers from schools that ban generative AI are more likely to report student(s) at their school experiencing disciplinary action.

Read the full report.

Read the slide deck on the research findings.

Brief – Late Applications: Disproportionate Effects of Generative AI-Detectors on English Learners
https://cdt.org/insights/brief-late-applications-disproportionate-effects-of-generative-ai-detectors-on-english-learners/
Mon, 18 Dec 2023

[ PDF Version ]

CDT recently released legal research on the application of civil rights laws to uses of education data and technology, including AI. As the use of generative AI increases both inside and outside the classroom, one group of students at particular risk of unequal treatment is English Learner (EL) students – those who are not yet able to communicate fluently or learn effectively in English. Research indicates that so-called AI detectors are disproportionately likely to falsely flag the writing of non-native English speakers as AI-generated, putting these students at greater risk of being disciplined for cheating in school. Schools need to be aware of this potential disparity and take steps to ensure it does not result in violations of EL students’ civil rights.

Who Are EL Students?

Nationally, English learners (ELs) are the fastest growing student population, accounting for 10 percent of the overall student population in 2019, with 81 percent of public schools serving at least one EL student. While some EL students are immigrants themselves, most are actually the U.S.-born children of immigrants. Both face unique challenges in school. For example, non-U.S. born ELs who enter the K-12 system as high schoolers are under immense pressure to graduate on time while also reaching English language proficiency; they may also have entered the U.S. without their family, meaning that they bear significant burdens such as unstable housing and the obligation to work to support themselves. 

The goal for all ELs is to reach English proficiency – once they achieve this, they are reclassified and no longer considered ELs. This reclassification process makes ELs a dynamic student group that is more difficult to track properly than other vulnerable student populations. By 12th grade, ELs make up only 4 percent of the total population of students, down from 16 percent in kindergarten. Even after reclassification, however, studies have historically suggested that EL students still struggle – “sizable proportions of the reclassified students, while able to keep pace in mainstream classrooms in the early elementary school years, later encountered difficulties in middle and high school,” with some ending up having to repeat a grade. Data out of California shows ELs lagging behind their peers academically, from test scores to grades to graduation rates. However, some advocates are optimistic that ELs, with the right support and tracking, are closing this gap.

Generative AI, EL Students, and the Risk of Disproportionate Discipline

EL students are already at higher risk for school discipline. The risk of suspension for a student with EL status is 20 percent higher than for a non-EL student.[1] Moreover, approximately three quarters of EL students are native Spanish speakers, and Hispanic students are overrepresented in alternative schools, where students are typically placed due to disciplinary issues and where they tend to have less access to support staff like counselors and social workers. CDT research also found that Hispanic students are more likely than non-minority students to use school-issued devices, and thus more likely to be subject to continuous monitoring by student activity monitoring software, which can lead to even higher rates of discipline.

The increased use of chatbots such as ChatGPT threatens to exacerbate the discipline disparity for EL students. Generative AI has become a contentious topic in the education sector. Concerns about academic dishonesty are high, with 90 percent of teachers reporting that they think their students have used generative AI to complete assignments. As CDT has previously reported, student accounts suggest that generative AI is actually primarily used for personal reasons rather than to cheat, and that certain populations, such as students with disabilities, are more likely to use the technology and more likely to have legitimate accessibility reasons for doing so. Still, disciplinary policies are cropping up across the country to penalize student use of generative AI and are sometimes accompanied by newly acquired programs that purport to detect the use of generative AI in student work. 

For EL students, this could be uniquely problematic. A recent study out of Stanford University shows that AI detectors are very likely to falsely flag the writing of non-native English speakers as AI-generated, with a significant disparity in false flags between non-native and native English speakers. The study compared essays written by U.S.-born eighth graders with essays written by non-native speakers for the Test of English as a Foreign Language (TOEFL). Detectors were “near perfect” in evaluating the essays by U.S.-born writers, but falsely flagged 61.22 percent of the TOEFL essays as AI-generated (particularly troubling because TOEFL is, by its nature, never administered to native English speakers in the first place). All seven AI detectors the study tested unanimously, but falsely, identified 18 of the 91 TOEFL essays (19 percent) as AI-generated, and a remarkable 89 of the 91 (97 percent) were flagged by at least one of the detectors. James Zou, who conducted the study, said of its results: “These numbers pose serious questions about the objectivity of AI detectors and raise the potential that foreign-born students and workers might be unfairly accused of or, worse, penalized for cheating.”

As with students with disabilities, there may be legitimate uses of generative AI that could benefit EL students in ways that make them more likely users of the technology, and thus even more likely to be disciplined under new school policies. According to some EL educators, generative AI “can potentially address some of the pressing needs of second language writers, including timely and adaptive feedback, a platform for practice writing, and a readily available and dependable writing assistant tool.” Some say that generative AI could benefit both students and teachers in the classroom by providing students with engaging and personalized language learning experiences, while allowing teachers to “help students improve their language skills in a fun and interactive way, while also exposing them to natural-sounding English conversations.”

Civil Rights Considerations

These concerns about disproportionate flagging and discipline are not just a matter of bad policy. Where students belonging to a protected class are being treated differently from others because of their protected characteristics, civil rights alarm bells sound. The Civil Rights Act of 1964 (the Act) generally prohibits state-sponsored segregation and inequality in crucial arenas of public life, including education. Title VI of the Act protects students from discrimination on the basis of, among other attributes, race, color, and national origin, and was enacted to prevent (and in some cases, mandate action to actively reverse) historical racial segregation in schools. ELs are protected from discrimination under Title VI on the basis of both race and national origin, and are entitled to receive language services and specialized instruction from their school in the “least segregated” manner possible. Under the circumstances described above, EL students arguably experience unlawful discrimination under the theories of disparate treatment, disparate impact, or hostile learning environment as a result of false flagging.

  1. Disparate impact and disparate treatment. Disparate impact occurs where a neutral policy is applied to everyone, but members of a protected class primarily experience an adverse effect. Disparate impact does not require intentional discrimination. Disparate treatment requires a showing of intent to treat a student differently (at least in part because of their protected characteristics) and can occur either where a neutral policy is selectively enforced against students belonging to a protected class, or where the policy explicitly targets that protected group. Here, an education agency’s generative AI and discipline policy might be over-enforced against EL students, given the sheer disproportionality of false flags for non-native English speakers suggested by the Stanford study (see the sketch after this list for one way such disproportionality can be quantified). Where an education agency is aware of these high error rates and consequent adverse effects for a protected group of students but nonetheless chooses to deploy the technology, it arguably meets the requirements for a disparate impact or even a disparate treatment claim.
  2. Hostile learning environment. A hostile learning environment occurs where a student — or group of students — experiences severe, pervasive, or persistent treatment that interferes with the student’s ability to participate in or benefit from services or activities provided by the school. For EL students, having their work frequently flagged for cheating by AI detectors and dealing with the accusation, investigation, and discipline that results, might create such an environment. Education agencies are tasked with the general obligation of ensuring a nondiscriminatory learning environment for all students. This obligation extends to responsibility for the conduct of third parties, such as vendors or contractors, with which the agency contracts, even if the conduct was not solely its own.
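To make the disparate impact analysis concrete, below is a minimal numerical sketch (illustrative only, not from the brief, and not a legal test) of how an education agency might quantify how unevenly a detector flags EL and non-EL students. The counts are hypothetical, loosely modeled on the rates reported in the Stanford study, and the 80 percent threshold is borrowed from the EEOC’s “four-fifths” heuristic in employment law; Title VI analysis does not turn on any such mechanical cutoff.

```python
# Illustrative sketch (not from the brief): quantifying how unevenly an
# AI detector flags two groups of students. All counts are hypothetical,
# loosely modeled on the rates reported in the Stanford study.

def flag_rate(num_flagged: int, num_essays: int) -> float:
    """Fraction of a group's essays flagged as AI-generated."""
    return num_flagged / num_essays

el_rate = flag_rate(num_flagged=56, num_essays=91)     # ~61.5%, non-native writers
non_el_rate = flag_rate(num_flagged=1, num_essays=88)  # ~1.1%, native writers

# Borrowing the EEOC "four-fifths" heuristic from employment law (not a
# Title VI standard): compare each group's rate of the favorable outcome,
# which here is NOT being flagged.
adverse_impact_ratio = (1 - el_rate) / (1 - non_el_rate)

print(f"EL students not flagged:     {1 - el_rate:.1%}")
print(f"Non-EL students not flagged: {1 - non_el_rate:.1%}")
print(f"Adverse impact ratio:        {adverse_impact_ratio:.2f} "
      f"(merits scrutiny if below 0.80)")
```

With these hypothetical counts the ratio comes out around 0.39, far below the 0.80 heuristic, which is one simple way to show that a facially neutral detector policy lands far more heavily on one group.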

Recommendations

Given the known inadequacies of AI detectors and the clear potential for disproportionate adverse effects on marginalized groups of students such as EL students, education agencies should at minimum consider taking the following steps.

Contemplate necessity of use

Assess whether this technology will actually help accomplish the stated goal and whether it should be used at all. As a starting point, the stated goal of deploying these technologies is to prevent academic dishonesty. Educators are skilled professionals who are tasked with understanding their students’ skills and challenges. More traditional mechanisms for cheating, such as purchasing essays online or having them written by a friend or family member, are often easy for an educator familiar with a student’s work and skill level to identify. Given the known error rates of AI detectors, there is nothing to suggest that these technologies could or should supplant a teacher’s professional judgment in determining whether a piece of writing is actually the student’s own work.

Provide training regarding reliability  

Ensure educators understand: (i) the success and error rates of AI detectors, including the disproportionate error rate for non-native English speakers; (ii) that AI detectors should not supplant an educator’s professional judgment; and (iii) that AI detector flags are not reliable as concrete proof of academic dishonesty. At most, if educators use AI detectors at all, they should recognize that a flag can be only one piece of a broader inquiry into potential academic dishonesty.

Provide students an appeal process to challenge flags 

To the extent that schools use AI detectors, they must put in place significant procedural protections, especially given the known error rates. Among the checks and balances that should follow a flag by an AI detector is the opportunity for implicated students to respond and advocate for themselves. Understand, however, that this process is likely to raise equity concerns as well, as some students may not be as equipped as others (depending on grade level, English proficiency, etc.) to understand the allegations, let alone refute them.

Conclusion

As schools grapple with rapidly emerging technologies, it is understandable that their response may include adopting innovative technologies of their own to combat undesired uses. However, it remains vital to stay vigilant about the potential pitfalls of these technologies and to ensure that protecting the civil rights of all students in the classroom remains a key priority.

[ PDF Version ]

From Our Fellows: A Perspective on Query Recommendation in Search Engines https://cdt.org/insights/a-perspective-on-query-recommendation-in-search-engines/ Wed, 13 Dec 2023 15:17:02 +0000

By Sucheta Soundarajan, Associate Professor, Syracuse University, and CDT Non-Resident Fellow

Disclaimer: The views expressed by CDT’s Non-Resident Fellows and any coauthors are their own and do not necessarily reflect the policy, position, or views of CDT.

Online search engines have become important tools for individuals seeking information. However, it has been known for several years that results (or the ordering of results) returned by these search engines may exhibit socially harmful forms of bias: for example, in a variation on a classic example given by Bolukbasi et al., a query for “computer science student” may produce disproportionately more search results corresponding to men than women, or may rank search results corresponding to men higher than those corresponding to women. This sort of systemic bias can stem from a number of sources, including underlying bias in the data used to generate these results. 

Modern search engines use so-called word embeddings to mathematically represent words and phrases. A word embedding is, effectively, a numerical representation of a word or phrase, learned by observing which words tend to appear in close proximity in large collections of text. Pairs of words that commonly appear close to one another in documents like web pages or articles end up near one another in the word embedding space. When a document search is performed on some query (a word or phrase), a document ranks higher if it contains words from the query or words that are close to the query words in the embedding space. For example, synonyms like “autumn” and “fall,” or related words like “brother” and “sister,” often appear in similar contexts, and so will be near each other in the embedding space. Because a greater proportion of computer science students and practitioners are male than female, male-related words (such as “him” and “his”) more frequently appear near computer science-related words (such as “computer” and “technology”) than do female-related words (such as “her” and “hers”). Those male-related words are closer to the computer science-related words in the embedding space, and so search results with male-related words will score more highly on a computer science query than will search results with female-related words. Such bias has potentially major societal implications, particularly in areas like hiring, as existing prejudices are then reinforced.
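To illustrate the mechanic, here is a minimal sketch using hypothetical two-dimensional vectors; real embeddings have hundreds of dimensions and are learned from large corpora, but the geometry of the bias is the same:

```python
# Minimal sketch of embedding-based scoring with toy 2-D vectors.
# All vectors are hypothetical; real embeddings have hundreds of
# dimensions and are learned from large text corpora.
import math

def cosine(u, v):
    """Cosine similarity: higher means closer in embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Toy embeddings in which the "computer" vector sits closer to the
# male-coded word than to the female-coded one, mirroring corpus bias.
embedding = {
    "computer": (0.9, 0.3),
    "him":      (0.8, 0.4),
    "her":      (0.3, 0.9),
}

# A document containing "him" scores higher against a "computer" query
# than one containing "her", purely because of embedding proximity.
for word in ("him", "her"):
    score = cosine(embedding["computer"], embedding[word])
    print(f"similarity(computer, {word}) = {score:.2f}")  # 0.99 vs. 0.60
```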

This problem can be addressed in different ways. One method debiases the embedding itself: specific biases (e.g., gender or racial biases) are directly and automatically addressed, for instance by shifting gender-related words so that they are equidistant in the embedding space from profession-related words. Another method re-ranks search results with respect to some fairness criterion: for example, one might require that equal proportions of the top 100 results be male-coded and female-coded.
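As a rough illustration of the first approach, the sketch below projects the gender component out of a profession word’s toy vector, in the spirit of the neutralization step described by Bolukbasi et al.; the vectors and word choices are hypothetical, not a real debiasing pipeline:

```python
# Simplified, hypothetical sketch of embedding debiasing: project the
# gender component out of a profession word so that it becomes
# equidistant from a male/female word pair. Toy 2-D vectors only.
import math

def sub(u, v):   return tuple(a - b for a, b in zip(u, v))
def scale(u, s): return tuple(a * s for a in u)
def dot(u, v):   return sum(a * b for a, b in zip(u, v))
def norm(u):     return math.sqrt(dot(u, u))

he, she = (0.8, 0.4), (0.4, 0.8)   # toy gender-word vectors, equal norm
secretary = (0.3, 0.9)             # toy profession vector, leaning "she"

# Estimate the gender direction from the word pair and normalize it.
g = scale(sub(he, she), 1.0 / norm(sub(he, she)))

# Neutralize: remove the profession word's projection onto that direction.
debiased = sub(secretary, scale(g, dot(secretary, g)))

for label, vec in (("original", secretary), ("debiased", debiased)):
    print(f"{label}: dist to 'he' = {norm(sub(vec, he)):.2f}, "
          f"dist to 'she' = {norm(sub(vec, she)):.2f}")
# original: 0.71 vs. 0.14 -- biased toward "she"
# debiased: 0.28 vs. 0.28 -- equidistant
```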

In our work, we instead consider the problem of balanced query recommendation, in which an algorithm suggests less-biased or oppositely-biased alternatives to a query. As a hypothetical example, note that the terms “secretary” and “administrative assistant” are often used interchangeably. However, because of sexist connotations, men may be unlikely to use the term “secretary” to refer to themselves; in contrast, the term “administrative assistant” may be more likely to return less gender-biased results. Our approach was originally motivated by conversations with an academic administration recruiter who recounted her experiences searching for job candidates online: when searching for individuals with a particular qualification, she noticed that the returned results were primarily white men. Deeper investigation suggested that women and non-white candidates tended to use different keywords to describe the same type of qualifications. In such cases, a recruiter searching for one term may wish to know of similar but less biased keywords. Additionally, job candidates selecting keywords for their resumes may wish to know whether their choice of keyword encodes some sort of bias.

Our work presents BalancedQR, an approach for recommending balanced query keywords. BalancedQR works on top of an existing search algorithm. It uses word embeddings to identify terms related to the original query and then measures the bias and relevance of those identified terms. Bias can be computed in whatever way is appropriate for the context: for example, if searching for candidate profiles on a hiring website, one could examine the fraction of male and female profiles returned; if searching for news articles, one could use external annotations of platform bias.
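The sketch below captures that general recipe under stated assumptions; it is not the authors’ BalancedQR implementation, and the helpers (`related_terms`, `search`, `relevance`) and thresholds are hypothetical stand-ins for whatever neighbor generation, retrieval, and scoring a real deployment would use:

```python
# Hedged sketch of the general balanced-query-recommendation recipe,
# not the authors' BalancedQR implementation. `related_terms`, `search`,
# `relevance`, and the thresholds are hypothetical stand-ins.

def bias(count_a: int, count_b: int) -> float:
    """Signed imbalance between two result pools (e.g., two subreddits).
    0.0 is perfectly balanced; +/-1.0 is entirely one-sided."""
    total = count_a + count_b
    return (count_a - count_b) / total if total else 0.0

def recommend(query, related_terms, search, relevance, max_bias=0.2):
    """Suggest related terms whose results are relevant but less one-sided."""
    suggestions = []
    for term in related_terms(query):      # e.g., embedding-space neighbors
        a, b = search(term)                # result counts from each pool
        if abs(bias(a, b)) <= max_bias and relevance(query, term) >= 0.5:
            suggestions.append((term, bias(a, b)))
    return sorted(suggestions, key=lambda pair: abs(pair[1]))

# Toy counts echoing the Reddit example discussed below: matching posts
# on /r/AskMen vs. /r/AskWomen for each term (numbers are made up).
counts = {"grief": (20, 80), "sorrow": (75, 25), "loneliness": (52, 48)}

print(recommend(
    "grief",
    related_terms=lambda q: [t for t in counts if t != q],
    search=lambda t: counts[t],
    relevance=lambda q, t: 0.8,            # stub relevance score
))  # -> [('loneliness', 0.04)]
```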

Initial tests using data from Reddit and Twitter produced interesting results: for instance, terms such as “longing” and “sorrow” were more likely to be found in posts on /r/AskMen, while /r/AskWomen posts were more likely to use “grief” and “sadness.” In this experiment, bias was measured by examining which subreddit a particular post or comment came from. A user searching for, e.g., “grief” on Reddit would disproportionately receive posts from /r/AskWomen, while similar concepts (“longing” and “sorrow”) would return more posts from /r/AskMen. In this example, BalancedQR would recommend “loneliness,” which produces results with high relevance but very little bias, as results are returned more equally across both subreddits. Similar results were seen for political bias on political subreddits when searching for the term “rioting” vs. “protests”: the former was disproportionately represented on r/Republicans, while the latter is BalancedQR’s recommendation, producing high-relevance, low-bias results.

There are a number of important use cases for BalancedQR. For instance, BalancedQR could be implemented as a browser plug-in or as part of a search engine’s recommended queries, and potentially reduce echo chambers and information segregation. In the context of hiring, it could be used to reduce gender bias at the search stage. In our future work, we look forward to conducting user studies to observe how recommendations produced by BalancedQR (or other alternatives) are used in practice. Additional work on developing automated bias metrics—particularly those that measure bias across multiple intersectional dimensions—would also be of practical significance to the implementation of BalancedQR.
