Civic Tech Research Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/equity-in-civic-tech/civic-tech-research/

Out of Step: Students, Teachers in Stride with EdTech Threats While Parents Are Left Behind
https://cdt.org/insights/out-of-step-students-teachers-in-stride-with-edtech-threats-while-parents-are-left-behind/ (Wed, 15 Jan 2025)

Layers of glitch effects, in shades of blue, purple, and pink. White text: “Out of Step.” Green text: “Students, Teachers in Stride with EdTech Threats While Parents Are Left Behind.”

Since the Center for Democracy & Technology (CDT) began polling school stakeholders in 2020 about their experiences with educational data and technology (edtech) in classrooms, the sheer number of edtech products and use cases has skyrocketed. Many of the tools being proactively implemented in K–12 schools across the country, and adopted by kids in their personal lives, are well intentioned; however, some have had unintended consequences, such as privacy violations and negative effects on students from historically marginalized communities.

To continue tracking the impacts of edtech tools in the classroom and at home, CDT surveyed 1,028 parents of students in grades 6–12, 1,316 students in grades 9–12, and 1,006 teachers of grades 6–12 to understand their opinions on and experiences with student privacy, emerging technologies, parent engagement, school policies related to gender expansive students, content filtering and blocking software, student activity monitoring, and generative artificial intelligence (AI). Any subgroup n-sizes that differ from the total sample size are denoted throughout this report.

The definitions of these various edtech issues as shown to survey respondents are also denoted throughout the body of the report, and other key terms are included on page 22. This research builds on CDT’s extensive body of quantitative and qualitative research, which is referenced on page 23. For additional details about the survey findings in this report, please reference the comprehensive slide deck.

Read the full report.

Explore the slide deck on the research findings.

Report – In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools
https://cdt.org/insights/report-in-deep-trouble-surfacing-tech-powered-sexual-harassment-in-k-12-schools/ (Thu, 26 Sep 2024)

CDT report, entitled “In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools.” Illustration of a cell phone and social media and messaging posts, floating amongst a dark and choppy body of water.

Executive Summary

Generative artificial intelligence (AI) tools continue to capture the imagination, but increasingly the technology’s damaging potential is revealing itself. One particularly problematic use of generative AI is the creation and distribution of deepfakes online, the vast majority of which contain sexually explicit intimate depictions. In the past school year (2023-2024), the rise of generative AI has collided with a long-standing problem in schools: the act of sharing non-consensual intimate imagery (NCII). K-12 schools are often the first to encounter large-scale manifestations of the risks and harms facing young people when it comes to technology, and NCII, both deepfake and authentic, is no exception. Over the past year, anecdotes of children being the perpetrators and victims of deepfake NCII have been covered by major news outlets, elevating concerns about how to curb the issue in schools. But just how widespread is NCII really? And how well equipped are schools to handle this challenge?

The Center for Democracy & Technology (CDT) conducted surveys of public high school students and public middle and high school parents and teachers from July to August 2024 to understand the prevalence of deepfakes, NCII, and related issues in K-12 schools. CDT’s research contributes to better understanding these issues within the U.S. educational context, as no publicly available research has yet both quantified the rising prevalence of deepfakes and NCII in K-12 schools and reflected the perspectives of teachers, parents, and students.

In short, concerns over the widespread nature of NCII, both authentic and deepfake, in public K-12 schools across the country are well-founded:

  • NCII, both authentic and deepfake, is a significant issue in K-12 public schools: Students and teachers report substantial amounts of NCII, both authentic and deepfake, depicting individuals associated with their school being shared in the past school year (2023-2024), with the primary perpetrators and victims being students.
  • Female and LGBTQ+ students are the most alert to the impact of NCII: Students and teachers report that female students are more often depicted in deepfake NCII that is shared by their classmates, and both female and LGBTQ+ students say that they have lower levels of confidence in their schools’ ability to prevent and respond to the increasing threat of deepfake NCII.
  • Schools are not doing enough to prevent students from sharing NCII: Very few teachers report that their schools have policies and procedures that proactively address the spread of authentic and deepfake NCII. Instead, schools reactively respond once there has been an incident at their school. This unfortunately leaves many students and parents in the dark and seeking answers from schools that are ill-equipped to provide them. 
  • When schools do respond, they focus heavily on imposing serious consequences on perpetrators without providing support to victims of NCII: Both students and teachers report perpetrators receiving harsh penalties, including expulsion, long-term suspension, and referrals to law enforcement. But students and teachers say that schools provide few resources for victims of NCII, like counseling or help removing damaging content from social media.
  • While stakeholders inside the school building, like students and teachers, report that NCII in all its forms is a significant issue in K-12 schools, parents find themselves out of the loop: Parents are significantly less aware of these threats or the harms that they pose. At the same time, parents agree that more education of students is needed and feel they should play a primary role in providing it. 

Although addressing NCII, both authentic and deepfake, will require a long-term, multistakeholder approach, one thing is clear – NCII has a significant effect on students, and schools need to do more now to protect them from its harms and create a learning environment that is free from sexual harassment. Efforts to do so should center on bolstering prevention measures, improving victim support, and engaging parents.

Read the full report.

Explore the slide deck.

Report – Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust
https://cdt.org/insights/report-up-in-the-air-educators-juggling-the-potential-of-generative-ai-with-detection-discipline-and-distrust/ (Wed, 27 Mar 2024)

CDT report entitled “Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust.” Illustration of an “AI-generated apple” with a parachute flying through an open sky, and “AI-generated” schoolwork, book, pencil & eraser falling behind. Note: this illustration was created solely by a human.

Educators have had a very different experience with generative artificial intelligence (AI) since the 2022-23 school year came to a close. K-12 schools have now had the opportunity to take a breath and regroup to determine how to get a grip on the explosion of generative AI in the classroom – after the education sector was caught off guard when ChatGPT burst abruptly onto the scene during the last school year.

To understand how teachers are currently interacting with and receiving support on this technology, the Center for Democracy & Technology (CDT) conducted a nationally representative survey of middle and high school teachers in November and December 2023. This research builds on previous CDT findings that highlighted how schools were failing to enact and/or share policies and procedures on generative AI and how, as a result, teachers lacked clarity and guidance, were more distrustful of students, and reported that students were getting in trouble due to this technology. 

This school year, teachers report some welcome movement towards more guidance and training around generative AI – but also areas that are cause for concern:

  • Familiarity, training, and school policymaking on generative AI in schools has increased, but the biggest risks remain largely unaddressed. Teachers report that both they and students have made increasing use of generative AI, and a majority indicate their schools now have a policy in place and provide training to teachers on generative AI. However, schools are providing teachers with little guidance on what responsible student use looks like, how to respond if they suspect a student is using generative AI in ways that are not allowed, and how to detect AI-generated work.
  • Teachers are becoming heavily reliant on school-sanctioned AI content detection tools. A majority of teachers report using school-endorsed AI content detection tools, despite research showing that these tools are ineffective. The proliferation of AI content detection tools could lead to negative consequences for students – given their known efficacy issues and teachers reporting low levels of school guidance on how to respond if they suspect a student has used generative AI in ways they should not.
  • Student discipline due to generative AI use has increased. Although schools are still in the process of setting generative AI policies, and even though the technology has now been in use longer, more teachers report students experiencing disciplinary consequences than in the last school year. Historically marginalized students, like students with disabilities and English learners, are at particular risk of disciplinary action.
  • Teacher distrust in their students’ academic integrity remains an issue and is more pronounced in schools that ban generative AI. A majority of teachers still report that generative AI has made them more distrustful of whether their students’ work is actually theirs, and teachers at schools that ban the technology say they are even more distrustful. This is especially concerning because teachers from schools that ban generative AI are more likely to report student(s) at their school experiencing disciplinary action.

Read the full report.

Read the slide deck on the research findings.

Brief – Late Applications: Disproportionate Effects of Generative AI-Detectors on English Learners
https://cdt.org/insights/brief-late-applications-disproportionate-effects-of-generative-ai-detectors-on-english-learners/ (Mon, 18 Dec 2023)

[ PDF Version ]

CDT recently released legal research on the application of civil rights laws to uses of education data and technology, including AI. As the use of generative AI increases both inside and outside the classroom, one group of students at particular risk of unequal treatment is those who are not yet able to communicate fluently or learn effectively in English – that is, English Learner (EL) students. Research indicates that so-called AI detectors are disproportionately likely to falsely flag the writing of non-native English speakers as AI-generated, putting them at greater risk of being disciplined for cheating in school. Schools need to be aware of this potential disparity and take steps to ensure it does not result in violating the civil rights of EL students.

Who Are EL Students?

Nationally, English learners (ELs) are the fastest growing student population, accounting for 10 percent of the overall student population in 2019, with 81 percent of public schools serving at least one EL student. While some EL students are immigrants themselves, most are actually the U.S.-born children of immigrants. Both face unique challenges in school. For example, non-U.S. born ELs who enter the K-12 system as high schoolers are under immense pressure to graduate on time while also reaching English language proficiency; they may also have entered the U.S. without their family, meaning that they bear significant burdens such as unstable housing and the obligation to work to support themselves. 

The goal for all ELs is to reach English proficiency – once they achieve this, they are reclassified and no longer considered ELs. This reclassification process makes ELs a dynamic student group that is more difficult to track properly than other vulnerable student populations. By 12th grade, ELs make up only 4 percent of the total population of students, down from 16 percent in kindergarten. Even after reclassification, however, studies have historically suggested that EL students still struggle – “sizable proportions of the reclassified students, while able to keep pace in mainstream classrooms in the early elementary school years, later encountered difficulties in middle and high school,” with some ending up having to repeat a grade. Data out of California shows ELs lagging behind their peers academically, from test scores to grades to graduation rates. However, some advocates are optimistic that ELs, with the right support and tracking, are closing this gap.

Generative AI, EL Students, and the Risk of Disproportionate Discipline

EL students already are at higher risk for school discipline. The risk of suspension for a student with EL status is 20 percent higher than for a non-EL student.[1] Moreover, approximately three quarters of EL students are native Spanish speakers, and Hispanic students are overrepresented in alternative schools, where students are typically placed due to disciplinary issues and where they tend to have less access to support staff like counselors and social workers. CDT research also found that Hispanic students are more likely than non-minority students to use school-issued devices, and thus more likely to be subject to continuous monitoring by student activity monitoring software, which can lead to even higher rates of discipline.

The increased use of chatbots such as ChatGPT threatens to exacerbate the discipline disparity for EL students. Generative AI has become a contentious topic in the education sector. Concerns about academic dishonesty are high, with 90 percent of teachers reporting that they think their students have used generative AI to complete assignments. As CDT has previously reported, student accounts suggest that generative AI is actually primarily used for personal reasons rather than to cheat, and that certain populations, such as students with disabilities, are more likely to use the technology and more likely to have legitimate accessibility reasons for doing so. Still, disciplinary policies are cropping up across the country to penalize student use of generative AI and are sometimes accompanied by newly acquired programs that purport to detect the use of generative AI in student work. 

For EL students, this could be uniquely problematic. A recent study out of Stanford University shows that AI detectors are very likely to falsely flag the writing of non-native English speakers as AI-generated, and that there is a significant disparity in false flags for non-native English speakers versus native speakers. The study compared essays written for the Test of English as a Foreign Language (TOEFL) with essays written by U.S.-born eighth graders. Detectors were “near perfect” in evaluating essays written by U.S.-born writers, but falsely flagged 61.22 percent of TOEFL essays written by non-native English speakers as AI-generated (particularly troubling as this is a test that would, by its nature, never be administered to native English speakers in the first place). All seven AI detectors that the study tested unanimously but falsely identified 18 of the 91 TOEFL student essays (19 percent) as AI-generated, and a remarkable 89 of the 91 TOEFL essays (97 percent) were flagged by at least one of the detectors. James Zou, who conducted the study, said of its results: “These numbers pose serious questions about the objectivity of AI detectors and raise the potential that foreign-born students and workers might be unfairly accused of or, worse, penalized for cheating.”
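As a quick arithmetic check on the figures above (a sketch only; the essay counts are the ones reported for the Stanford study, and the percentages in the text are the computed rates truncated to whole percents):

```python
# Counts reported for the seven AI detectors run on 91 TOEFL essays
# written by non-native English speakers (per the cited Stanford study).
total_essays = 91
flagged_by_all_seven = 18     # unanimously, but falsely, flagged
flagged_by_at_least_one = 89  # flagged by at least one detector

unanimous_rate = flagged_by_all_seven / total_essays
any_detector_rate = flagged_by_at_least_one / total_essays

print(f"Unanimously flagged: {unanimous_rate:.1%}")        # prints 19.8%
print(f"Flagged by at least one: {any_detector_rate:.1%}")  # prints 97.8%
```

These rates (19.8 and 97.8 percent) are what the brief rounds down to "19 percent" and "97 percent."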

As with students with disabilities, there might be legitimate uses of generative AI that could benefit EL students in ways that make them more likely users, and thus even more likely to be disciplined under new school policies. According to some EL educators, generative AI “can potentially address some of the pressing needs of second language writers, including timely and adaptive feedback, a platform for practice writing, and a readily available and dependable writing assistant tool.” Some say that generative AI could benefit both students and teachers in the classroom, by providing students with engaging and personalized language learning experiences, while allowing teachers to “help students improve their language skills in a fun and interactive way, while also exposing them to natural-sounding English conversations.”

Civil Rights Considerations

These concerns about disproportionate flagging and discipline are not just a matter of bad policy. Where students belonging to a protected class are being treated differently from others because of their protected characteristics, civil rights alarm bells sound. The Civil Rights Act of 1964 (the Act) generally prohibits state-sponsored segregation and inequality in crucial arenas of public life, including education. Title VI of the Act protects students from discrimination on the basis of, among other attributes, race, color, and national origin, and was enacted to prevent (and in some cases, mandate action to actively reverse) historical racial segregation in schools. ELs are protected from discrimination under Title VI on the basis of both race and national origin, and are entitled to receive language services and specialized instruction from their school in the “least segregated” manner possible. Under the circumstances described above, EL students arguably experience unlawful discrimination under the theories of disparate treatment, disparate impact, or hostile learning environment as a result of false flagging.

  1. Disparate impact and disparate treatment. Disparate impact occurs where a neutral policy is applied to everyone, but primarily members of a protected class experience an adverse effect. Disparate impact does not require intentional discrimination. Disparate treatment requires a showing of intent to treat a student differently (at least in part because of their protected characteristics) and can occur either where a neutral policy is selectively enforced against students belonging to a protected class, or where the policy explicitly targets that protected group. Here, an education agency’s generative AI and discipline policy might be over-enforced against EL students, due to the sheer disproportionality of false flags for non-native English speakers suggested by the Stanford study. Where an education agency is aware of these high error rates and consequent adverse effects for a protected group of students but nonetheless chooses to deploy the technology, it arguably meets requirements for a disparate impact or even a disparate treatment claim. 
  2. Hostile learning environment. A hostile learning environment occurs where a student — or group of students — experiences severe, pervasive, or persistent treatment that interferes with the student’s ability to participate in or benefit from services or activities provided by the school. For EL students, having their work frequently flagged for cheating by AI detectors and dealing with the accusation, investigation, and discipline that results, might create such an environment. Education agencies are tasked with the general obligation of ensuring a nondiscriminatory learning environment for all students. This obligation extends to responsibility for the conduct of third parties, such as vendors or contractors, with which the agency contracts, even if the conduct was not solely its own.

Recommendations

Given the known inadequacies of AI detectors and the clear potential for disproportionate adverse effects on marginalized groups of students such as ELs, education agencies should at minimum consider taking the following steps.

Contemplate necessity of use

Assess whether the use of this technology will be helpful in accomplishing the stated goal and should be used at all. As a starting point, the goal of deploying these technologies is to prevent academic dishonesty. Educators are skilled professionals who are tasked with understanding their students’ skills and challenges. More traditional mechanisms for cheating, such as purchasing essays online or having them written by a friend or family member, are often easy to identify for an educator familiar with that student’s work and skill level. Given the known error rates of AI detectors, there is nothing to suggest that these technologies could or should be used to supplant a teacher’s professional judgment in determining whether a piece of writing was actually the student’s own work. 

Provide training regarding reliability  

Ensure educators understand: (i) the success and error rates of AI detectors, and the disproportionate error rate for non-native English speakers; (ii) that AI detectors should not supplant an educator’s professional judgment; and (iii) that AI detector flags are not reliable as concrete proof of academic dishonesty. At most, if educators use AI detectors at all, they should recognize that flags can be only one piece of a broader inquiry into potential academic dishonesty.

Provide students an appeal process to challenge flags 

To the extent that schools use AI detectors, they must put in place significant procedural protections, especially given the known error rates. Among the checks and balances that should be in place following a flag by an AI detector is the opportunity for implicated students to respond and advocate for themselves. Understand, however, that there are likely to be equity concerns with this process as well, as some students may not be as equipped as others (depending on grade level, English proficiency, etc.) to understand the allegations or refute them.

Conclusion

As schools grapple with rapidly emerging technologies, it is understandable that the response may include adopting innovative technologies of their own to combat undesired uses. However, it remains vital to stay vigilant of the potential pitfalls of these technologies and ensure that the protection of civil rights for all students in the classroom is a key priority.

[ PDF Version ]

Report – Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI
https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-age-of-ai/ (Wed, 20 Sep 2023)

Graphic for CDT report, entitled “Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI.” A browser with a warning symbol.

This report is also authored by Hugh Grant-Chapman, Independent Consultant

In schools across the country, the use of educational data and technology (edtech) remains nearly ubiquitous. In addition to supporting instruction, schools have used edtech to respond to the painfully present safety threats that they face on a daily basis — from gun violence to the youth mental health crisis. However, long-standing technologies such as content filtering and blocking and student activity monitoring pose well-documented privacy and equity risks to students. Nonetheless, schools continue to deploy these technologies on a mass scale. And with generative artificial intelligence (AI) being rapidly integrated into the education space, many new risks are being introduced to students.

The Center for Democracy & Technology (CDT) conducted surveys of high school students and middle and high school parents and teachers from July to August 2023 to understand how edtech used by schools is tangibly affecting those it claims to serve. The research focuses on student privacy concerns and schools’ capacity to address them; emerging uses of AI-driven technology such as predictive analytics; and deep dives into content filtering and blocking, student activity monitoring, and generative AI, encompassing both well-established and emerging technology. These surveys build on CDT’s previous research, which revealed that student activity monitoring is adversely affecting all students, especially historically marginalized and under-resourced students.

Whether old or new, technologies deployed across schools have negative impacts on students, and schools are out of step in addressing rising concerns:

  • Schools are not adequately engaging and supporting students, parents, and teachers in addressing concerns about school data and technology practices: Students, parents, and teachers report a lack of guidance, information, and training on privacy, student activity monitoring, content filtering and blocking, and generative AI. They want more support from their schools and to be involved in decisions about whether and how these technologies are used.
  • Content blocking and filtering is stifling student learning and growth: Students and teachers agree that this technology is a barrier to learning, often making it hard to complete school assignments and access useful information.
  • Student activity monitoring continues to harm many of the students it claims to help: Disciplinary actions, outing of students, and initiating of law enforcement contact are still regular outcomes of the use of this technology, even though it is procured by schools to help keep students safe.
  • Schools have provided little guidance about generative AI, leaving students, parents, and teachers in the dark: Students, parents, and teachers report a collective state of confusion about policies and procedures related to responsible generative AI use in the classroom. Meanwhile, students are getting in trouble for the use of this technology.

Even more disheartening is that in all of these areas, at-risk communities of students are still experiencing disproportionate negative impacts of these old and new technologies:

  • Schools are filtering and blocking LGBTQ+ and race-related content, with Title I and licensed special education teachers more likely to report such practices: Although filtering and blocking technology was originally intended to primarily target explicit adult content, more school administrators are using it to restrict access to other content they think is inappropriate, including LGBTQ+ and race-related content. Title I and licensed special education teachers are more likely to report this occurrence. In key respects, this finding parallels the broader trend in education of removing books and curricular content on these subjects.
  • Student activity monitoring is disproportionately harming students with disabilities and LGBTQ+ students: Students with individualized education programs (IEPs) and/or 504 plans as well as licensed special education teachers report higher rates of discipline arising from student activity monitoring. LGBTQ+ students are also still being disciplined more than their peers and outed without their consent.
  • Title I and licensed special education teachers report higher rates of students receiving disciplinary actions for using or being accused of using generative AI: Despite having little guidance from schools on generative AI use, Title I teachers, licensed special education teachers, and parents of students with IEPs and/or 504 plans report higher rates of their student(s) getting in trouble as compared to peers.

Previous CDT research and this year’s findings continue to document the risks and harms of edtech on all students but especially on vulnerable communities. As uses of edtech, particularly AI-driven technology, continue to expand, education leaders across the country should focus not only on privacy concerns but also on identifying and preventing discrimination. Luckily, they already have the tools to do so with well-established civil rights laws that apply to discriminatory uses of technology.

Read the full report here.

Read the summary brief here.

Explore the research slide deck here.

Read the press release here.

The post Report – Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI  appeared first on Center for Democracy and Technology.

Report – Late Applications: Protecting Students’ Civil Rights in the Digital Age https://cdt.org/insights/report-late-applications-protecting-students-civil-rights-in-the-digital-age/ Wed, 20 Sep 2023 04:01:00 +0000

Graphic for CDT report, entitled “Late Applications: Protecting Students’ Civil Rights in the Digital Age.”

This report is also authored by Sydney Brinker, former CDT Intern

Education data and technology continue to expand their role in students’, teachers’, and parents’ lives. While issues of school safety, student mental health, and achievement gaps remain at the forefront of education, emerging technologies such as predictive analytics, monitoring software, and facial recognition are becoming more popular. As these technologies expand, so have questions about how they might be used responsibly and without inflicting negative consequences on students, especially historically marginalized students.

The education sector has been responsible for protecting the civil rights of students for decades. Existing civil rights laws provide an important foundation to ensure that data and technology practices in schools achieve their intended function without inadvertently having discriminatory effects against students on the basis of race, sex, or disability.

Analysis of data disaggregated by a number of student demographics is crucial to understanding trends regarding protected classes of students, and it illustrates why an ongoing focus on students’ civil rights is necessary. The analysis in this report, however, focuses on the real-time use of technology and data to make decisions about individual students, rather than on the use of data to identify overall trends.

Examining the current uses of education data and technology under various civil rights concepts, this report offers guidance to help policymakers and education leaders understand how to better center civil rights in the digital age with respect to their practices and policies, especially regarding nondiscrimination and technology procurement. This guidance includes recommendations for school leaders to ensure that education data and technology uses do not run afoul of civil rights laws and that all students are positioned to be successful in school and beyond:

  • Audit existing nondiscrimination policies, practices, and notices.
  • Update or create new policies to address data and technology use.
  • Revise or implement procurement policy for education technologies.
  • Consolidate and make readily available all required nondiscrimination notices.
  • Post the consolidated policy in district buildings and on school websites.
  • Designate specific personnel to be responsible for ensuring compliance with nondiscrimination laws regarding education data and technology.
  • Conduct analysis and publicly report information on nondiscrimination policies and practices for data and technology on an ongoing basis.

Read the full report here.

Read the press release here.

Report – Beyond the Screen: Parents’ Experiences with Student Activity Monitoring in K-12 Schools https://cdt.org/insights/report-beyond-the-screen-parents-experiences-with-student-activity-monitoring-in-k-12-schools/ Mon, 31 Jul 2023 04:01:00 +0000

CDT Research report, entitled “Beyond the Screen: Parents’ Experiences with Student Activity Monitoring in K-12 Schools.” Illustration of a laptop, browser windows and social media posts being monitored – and the hands of adults reaching in to understand not just what’s being flagged, but the system itself and its impacts.

This report is also authored by Hugh Grant-Chapman, Independent Consultant

The role of technology in K-12 education continues to grow, and schools across the U.S. are turning to monitoring technologies to track students’ online activity. Yet, as student activity monitoring has become commonplace, students and parents report concerns about irresponsible uses of these tools even as they recognize their potential benefits.

Over the past two years, CDT has investigated the rise in popularity of student activity monitoring technology, and the benefits and risks it poses to students’ well-being. Survey research conducted last summer revealed that 9 out of 10 secondary school teachers report that their schools use student activity monitoring technology and that these tools are used for disciplinary applications more often than for student safety (Laird et al., 2022). In addition, 44 percent of teachers report that a student in their schools was contacted by law enforcement because of student activity monitoring, and 29 percent of LGBTQ+ students report that they or someone they know were involuntarily “outed” due to this technology (Laird et al., 2022). These trends indicate that student activity monitoring may be negatively impacting the well-being and safety of a large proportion of students. Further, Black, Hispanic, and LGBTQ+ students report experiencing disproportionate harm compared to other students (Laird et al., 2022). 

To examine these impacts in greater depth, CDT recently conducted twenty interviews with parents whose children have experienced short- and long-term consequences based on the use of student activity monitoring technology. This new research sheds light on the first-hand experiences of students and their families who were impacted by student activity monitoring. The stories of these parents paint a more complete picture of the effects of student activity monitoring on students, the ways schools respond to the information collected, and the changes parents want to see if these systems continue to be implemented.

CDT’s interviews with parents identified six main findings:

  • The most common type of activity flagged by student activity monitoring software was the viewing of inappropriate content.
  • Monitoring has a chilling effect on students’ speech and use of the internet, which can also impact their learning.
  • The actions that follow the monitoring and reporting of student activity can have significant emotional impacts on students.
  • Monitoring can undermine relationships between students and adults including their teachers and school administrators.
  • Student activity monitoring alerts were not always kept private, resulting in stigmatizing students.
  • Monitoring can catalyze negative student behavior and lead to direct threats to students’ safety and future well-being. 

Based on their experiences, parents prioritized four key areas of change for how student activity monitoring should be conducted:

  • More transparency about the student activity monitoring decision-making process.
  • A narrower scope of student activity monitoring use.
  • More careful, nuanced responses to alerts generated from monitoring systems.
  • A more active role for parents themselves in responding to alerts.

Read the full report here.

Read the summary brief here.

Report – Hidden Harms: The Misleading Promise of Monitoring Students Online https://cdt.org/insights/report-hidden-harms-the-misleading-promise-of-monitoring-students-online/ Wed, 03 Aug 2022 04:00:00 +0000

CDT report, entitled “Hidden Harms: The Misleading Promise of Monitoring Students Online.” Text in white and pink, on a dark blue gradient background. Subtle images of a computer with student activity monitoring software on the screen, and monitoring reports underneath it, are on the bottom.

The pressure on schools to keep students safe, especially to protect them physically and support their mental health, has never been greater. The mental health crisis, which has been exacerbated by the COVID-19 pandemic, and concerns about the increasing number of school shootings have led to questions about the role of technology in meeting these goals. From monitoring students’ public social media posts to tracking what they do in real-time on their devices, technology aimed at keeping students safe is growing in popularity. However, the harms that such technology inflicts are increasingly coming to light. 

CDT conducted survey research among high school students and middle and high school parents and teachers to better understand the promise of technologies aimed at keeping students safe and the risks that they pose, as reported by those most directly interacting with such tools. In particular, the research focused on student activity monitoring, the nearly ubiquitous practice of schools using technology to monitor students’ activities online, especially on devices provided by the school. CDT built on its previous research, which showed that this monitoring is conducted primarily to comply with perceived legal requirements and to keep students safe. While stakeholders are optimistic that student activity monitoring will keep students safe, in practice it creates significant efficacy and equity gaps: 

  • Monitoring is used for discipline more often than for student safety: Despite assurances and hopes that student activity monitoring will keep students safe, and despite parent and student concerns, teachers report that it is more frequently used for disciplinary purposes.
  • Teachers bear considerable responsibility but lack training for student activity monitoring: Teachers are generally tasked with responding to alerts generated by student activity monitoring, despite only a small percentage having received training on how to do so privately and securely. 
  • Monitoring is often not limited to school hours despite parent and student concerns: Students and parents are the most comfortable with monitoring being limited to when school is in session, but monitoring frequently occurs outside of that time frame. 
  • Stakeholders demonstrate large knowledge gaps in how monitoring software functions: There are significant gaps between what teachers report is communicated about student activity monitoring, often via a form provided along with a school-issued device, and what parents and students retain and report about it. 

Additionally, certain groups of students, especially those who are already more at risk than their peers, disproportionately experience the hidden harms of student activity monitoring: 

  • Students are at risk of increased interactions with law enforcement: Schools are sending student data collected from monitoring software to law enforcement officials, who use it to contact students. 
  • LGBTQ+ students are disproportionately targeted for action: The use of student activity monitoring software is resulting in the nonconsensual disclosure of students’ sexual orientation and gender identity (i.e., “outing”), and more LGBTQ+ students than their peers report being disciplined or contacted by law enforcement over concerns that they committed a crime.
  • Students’ mental health could suffer: While students report they are being referred to school counselors, social workers, and other adults for mental health support, they are also experiencing detrimental effects from being monitored online. These effects include avoiding expressing their thoughts and feelings online, as well as not accessing important resources that could help them. 
  • Students from low-income families, Black students, and Hispanic students are at greater risk of harm: Previous CDT research showed that certain groups of students, including students from low-income families, Black students, and Hispanic students, rely more heavily on school-issued devices. Therefore, they are subject to more surveillance and the aforementioned harms, including interacting with law enforcement, being disciplined, and being outed, than those using personal devices. 

Given that the implementation of student activity monitoring falls short of its promises, this research suggests that education leaders should consider alternative strategies to keep students safe that do not simultaneously put students’ safety and well-being in jeopardy.

See below for our complete report, summary brief, and in-depth research slide deck. For more information, see our letter calling for action from the U.S. Department of Education’s Office for Civil Rights — jointly signed by multiple civil society groups — as well as our related press release and recent blog post discussing findings from our parent and student focus groups.

Read the full report here.

Read the summary brief here.

Read the research slide deck here.

Report – Online and Observed: Student Privacy Implications of School-Issued Devices and Student Activity Monitoring Software https://cdt.org/insights/report-online-and-observed-student-privacy-implications-of-school-issued-devices-and-student-activity-monitoring-software/ Tue, 21 Sep 2021 04:01:13 +0000

CDT's report, entitled "Online and Observed: Student Privacy Implications of School-Issued Devices and Student Activity Monitoring Software." White background with black text and blue artifacts. Three laptops, lined from left to right, have a variety of pop-ups and open windows on their screens, as well as purple-colored alerts to demonstrate the monitoring and flagging of student activity.

Many school districts across the nation expanded efforts to provide devices like laptops and tablets to students during the global pandemic in an effort to close the homework gap and address inequities in technology access. Part of this shift included the introduction of student activity monitoring software and other digital tools aimed in part at facilitating remote classroom management and driving student engagement. However, these tools can also be used in ways that are unduly intrusive. In this report, we examine whether students who receive school-issued devices are subject to more monitoring than their peers who have their own devices. We also examine local education agencies’ motivations in implementing monitoring and how they communicate about it with parents and students.

Building on recent CDT guidance on how schools could address privacy gaps in the implementation of remote education technology, this report presents findings based on virtual semi-structured interviews with nine individuals from five local education agencies (LEAs), including district level administrators and information technology (IT) directors.

This research uncovered seven main findings:

  1. Students using school-issued devices are monitored to a greater extent than their peers using personal devices;
  2. LEAs with wealthier student populations reported that their students are more likely to have access to personal devices, which are subject to less monitoring than school-issued devices;
  3. LEAs feel compelled to monitor student activity to satisfy perceived legal requirements and protect student safety;
  4. Most prevalent community concerns were focused on appropriate use of student activity monitoring data for disciplinary purposes;
  5. LEAs communicate privacy expectations to students and families, but are unsure about how much detail about student activity monitoring to include in those messages;
  6. LEAs are holding device and student activity monitoring software vendors accountable on privacy and security through data sharing and privacy agreements; and
  7. LEAs are looking for ways to improve the privacy and security protections for devices and data shared with student activity monitoring vendors.

For more information on our research, see our recent blog post discussing the findings, our press release for the report, and our related survey research and recommendations.


Research Report: With Increased EdTech Comes Increased Responsibility https://cdt.org/insights/research-report-with-increased-edtech-comes-increased-responsibility/ Wed, 31 Mar 2021 04:01:00 +0000

CDT's Research Report – With Increased EdTech Comes Increased Responsibility

A year has passed since our education system was disrupted, with schools being forced to transition overnight to remote learning due to the global pandemic. Since then, students and their families have seen the best—and the worst—that education technology and data have to offer. Technology and data have enabled important educational services like instruction delivery in students’ homes, relationships with caring adults in their lives, and mental health services during a time of crisis. On the other hand, too many students have not been connected with their schools due to inequitable access, and some have even been harmed by “Zoombombings” that inflicted traumatic experiences and cybersecurity attacks that shut down their schools. 

Last year the Center for Democracy and Technology (CDT) commissioned research on the views of those who have the most at stake: parents, teachers, and students. Our latest report updates those findings among parents and teachers with new polling data that shows changes from last spring and summer to February 2021. Our research shows that the need and demand for data and technology continues to grow, but attention to privacy, security, and responsible data use is not keeping pace. While schools are making progress, and support for online learning among teachers and parents remains strong, important gaps in student privacy remain. To address these deficiencies, education leaders and practitioners should take the following actions:

  1. Continue to establish and update privacy-forward policies
  2. Better equip teachers to use technology responsibly
  3. Address the latest privacy and security risks that pose the greatest threats to students
  4. Engage parents in privacy protection
  5. Embed privacy protection in efforts to close the homework gap

Read the full report here.

Read the research slides here.

Read more from Hugh on why this research is so important.
