AI in Education Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/equity-in-civic-tech/ai-in-education/

Looking Back at AI Guidance Across State Education Agencies and Looking Forward
https://cdt.org/insights/looking-back-at-ai-guidance-across-state-education-agencies-and-looking-forward/ (Tue, 15 Apr 2025)

This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and local AI governance efforts.

Artificial intelligence (AI) has shaken up the education sector, particularly since the public release of ChatGPT and other generative AI tools. School administrators, teachers, students, and parents have grappled with whether and how to use AI, amid fears ranging from diminished student academic integrity to more sinister concerns like the rising prevalence of deepfake non-consensual intimate imagery (NCII).

In response to AI taking classrooms by storm, the education agencies of over half of states (Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Indiana, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Utah, Virginia, Washington, West Virginia, Wisconsin, Wyoming) and Puerto Rico have released guidance for districts and schools on the responsible use of AI in public education. These pieces of guidance vary in the types of AI systems they cover, with some solely focusing on generative AI and others encompassing AI more broadly. Analysis of current state education agencies’ (SEAs’) guidance reveals four primary trends:

  1. There is alignment on the potential benefits of AI in education.
  2. Education agencies acknowledge the base risks of AI use in schools.
  3. Across the board, states emphasize the need for human oversight and investment in AI literacy/education.
  4. As a whole, SEA guidance is missing critical topics related to AI, such as how to meaningfully engage communities on the issue and how to approach deepfakes.

Below, we detail these trends; highlight what SEAs can do to advance responsible, rights-respecting use of AI in education in light of these trends; and explore a few particularly promising examples of SEA AI guidance.

Trends in SEAs’ AI Guidance

  1. Alignment on the potential benefits of AI in education

Guidance out of SEAs consistently recognizes the following four benefits of using and teaching AI in the classroom: 

  • Personalized learning: At least 17 SEAs cite personalized learning for students as a benefit of AI in education. Colorado’s AI roadmap, for instance, states that AI can support students by “tailor[ing] educational content to match each student’s learning pace and style and helping students learn more efficiently by offering individualized resources and strategies that align with their learning goals, styles, and needs.” Another example is Arizona’s generative AI guidance document, which highlights three different methods of personalized learning opportunities for students: interactive learning, AI coaching, and writing enhancement.
  • Expediting workflow and streamlining administrative processes: Roughly 13 SEAs mention AI’s potential benefit of speeding up or even automating tasks, such as writing emails or creating presentations. Washington mentions “streamlin[ing] operational and administrative functions” as an opportunity for AI use in education, and similarly, Oklahoma states that educators can use AI to “increase efficiency and productivity” through means like automating administrative tasks, thus freeing up time to focus on teaching.
  • Preparing students for the future workforce: Around 11 states discuss teaching AI and AI literacy to students now as essential in equipping them for future career opportunities, often predicting that AI tools will revolutionize the workforce. Indiana’s AI in education guidance states that “the ability to use and understand AI effectively is critical to a future where students will enroll in higher education, enlist in the military, or seek employment in the workforce.” Similarly, Delaware’s generative AI in education guidance explains that “students who learn how AI works are better prepared for future careers in a wide range of industries,” due to developing the skills of computational thinking, analyzing data critically, and evaluating the effectiveness of solutions.
  • Making education more accessible to underrepresented groups: At least 11 of the AI in education guidance documents tout AI as making education more accessible, especially for student populations like those with disabilities and English learners. For example, California’s Department of Education and Minnesota’s Department of Education both note that AI can improve access for marginalized populations through functions such as language translation assistance and generating audio descriptions for students with disabilities. In addition to these communities of students, North Dakota’s Department of Public Instruction also mentions that AI tools can make education more accessible for students in rural areas and students from economically disadvantaged backgrounds.
  2. Acknowledgement of the base risks of AI use in schools

The majority of SEA guidance documents enumerate commonly recognized risks of AI in education, namely:

  • Privacy harms: Roughly 20 states explicitly mention privacy harms as a risk or concern related to implementation of AI in education, especially as it pertains to personally identifiable information. For example, Hawaii’s AI in education guidance geared towards students urges them to be vigilant about protecting their privacy by avoiding sharing sensitive personal information with AI tools, such as their address and phone number. Another example is Mississippi’s Department of Education, which highlights that AI can “increase data privacy and security risks depending on the [vendor’s] privacy and data sharing policies.”
  • Inaccuracy of AI-generated outputs: At least 16 SEAs express concerns about AI tools’ ability to produce accurate information, often citing the common generative AI risk of hallucination. North Dakota’s Department of Public Instruction encourages high schoolers to learn about the limitations of AI and to have a “healthy skepticism” of tools due, in part, to the risk of inaccuracies in information. Along the same lines, Wyoming’s AI in education guidance affirms that students are always responsible for checking the accuracy of AI-generated content, and that school staff and students should critically evaluate all AI outputs. 
  • Reduction of students’ critical thinking skills: Around 10 SEAs discuss the risk of students becoming overreliant on AI tools, thus diminishing their necessary critical thinking skills. Puerto Rico’s Department of Education cites the risk of students and staff becoming dependent on AI tools, which can reduce skills such as critical thinking, creativity, independent decision-making, and quality of teaching. Another example is Arizona’s generative AI guidance, stating that overreliance on AI is a risk for both students and teachers – technology cannot replace the deep knowledge teachers have of their students, nor can it “improve student learning if it is used as a crutch.”
  • Perpetuation of bias: At least 22 states cite perpetuating bias as a risk of AI tools in the classroom. One of the ethical considerations that Louisiana puts forth is “avoiding potential biases in algorithms and data” when possible and placing safeguards during AI implementation to address bias. Virginia’s AI guidelines also affirm that the use of AI in education should do no harm, including “ensuring that algorithms are not based on inherent biases that lead to discriminatory outcomes.”
  • Unreliability of AI content detection tools: Many states also express skepticism about the use of AI content detection tools by educators to combat plagiarism, in part due to their unproven efficacy and risk of erroneously flagging non-native English speakers. For example, West Virginia’s Department of Education recommends that teachers do not use AI content detectors “due to concerns about their reliability,” and North Carolina’s generative AI guidance notes that AI detection tools “often create false positives, penalizing non-native speakers and creative writing styles.”
  3. Emphasis on the need for human oversight and investment in education

Across the board, SEAs also stress the importance of taking a human-centric approach to AI use in the classroom – emphasizing that AI is just a tool and users are still responsible for the decisions they make or work they submit. For example, the Georgia Department of Education’s AI guidance asserts that human oversight is critical and that “final decision-making should always involve human judgment.” Similarly, the Kentucky Department of Education emphasizes how vital having a human in the loop is, especially when AI makes decisions that could have significant consequences for individuals or society.

To equip school stakeholders with the skills necessary to be responsible users of AI, many SEA guidance documents also highlight the need for AI literacy and professional development and training for teachers. Colorado’s AI roadmap frequently mentions the need for both teachers and students to be given AI literacy education so that students are prepared to enter the future “AI-driven world.” The Oregon Department of Education’s AI guidance continually mentions the need for educators to be trained to address the equity impacts of generative AI, including training on topics like combating plagiarism and spotting inaccuracies in AI outputs.

  4. Exclusion of critical topics, such as meaningful community engagement and deepfakes

Creating mechanisms for robust community engagement allows districts and schools to make more informed decisions about AI procurement to ensure systems and their implementations directly respond to the needs and concerns of those the tools impact most. Some pieces of guidance mention including parents in conversations about AI adoption and implementation, but only in a one-way exchange (e.g., the school provides parents resources/information on how AI will be used safely in the classroom). North Carolina, West Virginia, Utah, Georgia, Connecticut, and Louisiana are the only states that talk about more meaningful engagement, like obtaining parental consent for students using AI tools at school, or including parents and other external stakeholders in the policymaking and decision-making processes. For example, Connecticut’s AI guidance states that parents and community members may have questions about AI use in their children’s school, so, “Leaders may consider forming an advisory around the use of technology generally and AI tools specifically to encourage a culture of learning and transparency, as well as to tap the expertise that community experts may offer.”

One of the most pernicious uses of AI that has become a large issue in schools across the country is the creation of deepfakes and deepfake NCII. CDT research has shown that in the 2023-2024 school year, around 40 percent of students said that they knew about a deepfake depicting someone associated with their school, and 15 percent of students reported that they knew about AI-generated deepfake NCII that depicted individuals associated with their school. The harms from using AI for bullying or harassment, including the creation of deepfakes and deepfake NCII, are mentioned in only roughly four of the guidance documents – those from Utah, Washington, West Virginia, and Connecticut. Utah’s AI in education guidance expresses that schools should prohibit students from “using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation,” and in the same vein, Washington’s Office of Superintendent of Public Instruction explicitly mentions that users should never utilize AI to “create misleading or inappropriate content, take someone’s likeness without permission, or harm humans or the community at large.”

What SEAs Can Do to Advance Responsible AI Use in Education

Analysis of the strengths and weaknesses of current SEAs’ AI guidance documents reveals the following priorities for effective guidance:

  1. Improve the form of the guidance itself
  • Tailor guidance for specific audiences: School administrators, teachers, students, and parents each have unique roles in ensuring AI is implemented and used responsibly, thus making it necessary for guidance to clearly define the benefits, risks, risk mitigation strategies, and available resources specific to each audience. Mississippi’s guidance serves as a helpful example of segmenting recommendations for specific groups of school stakeholders (e.g., students, teachers, and school administrators).
  • Ensure guidance is accessible: SEAs should ensure that guidance documents are written in plain language so that they are more accessible generally, but also specifically for individuals with disabilities. In addition, guidance released online should be in compliance with the Web Content Accessibility Guidelines as required by Title II of the Americans with Disabilities Act.
  • Publish guidance publicly: Making guidance publicly available for all school stakeholders is key in building accountability mechanisms, strengthening community education on AI, and building trust. It can also allow other states, districts, and schools to learn from other approaches to AI policymaking, thus strengthening efforts to ensure responsible AI use in classrooms across the country.
  2. Provide additional clarity on commonly mentioned topics
  • Promote transparency and disclosure of AI use and risk management practices: Students, parents, and other community members are often unaware of the ways that AI is being used in their districts and schools. To strengthen trust and build accountability mechanisms, SEAs should encourage public sharing about the AI tools being used, including the purposes for their use and whether they process student data. On the same front, guidance should also include audience-specific best practices to ensure students’ privacy, security, and civil rights are protected.
  • Include best practices for human oversight: The majority of current SEA guidance recognizes the importance of having a “human in the loop” when it comes to AI, but few get specific on what that means in practice. Guidance should include clear, audience-specific examples to showcase how individuals can employ the most effective human oversight strategies.
  • Be specific about what should be included in AI literacy/training programs: SEAs recognize the importance of AI literacy and training for school administrators, teachers, and students, but few pieces of guidance include what topics should be covered to best equip school stakeholders with the skills needed to be responsible AI users. Guidance can identify priority areas for these AI literacy/training programs, such as training teachers on how to respond when a student is accused of plagiarism or how students can verify the output of generative AI tools.
  3. Address important topics that are missing entirely
  • Incorporate community engagement throughout the AI lifecycle: Beyond school staff, students, parents, and other community members hold vital expertise, including concerns and past experiences, that should be considered during the AI policymaking and decision-making process.
  • Articulate the risks of deepfake NCII: As previously mentioned, this topic was missing from most SEA guidance. This should be included, with a particular focus on encouraging implementation of policies that address the largest gaps: investing in prevention and supporting victims. 

Promising Examples of SEA AI Guidance

Current AI guidance from SEAs contains strengths and weaknesses, but three states stand out in particular for their detail and unique approaches:

North Carolina Department of Public Instruction

North Carolina’s generative AI guidance stands out for five key reasons:

  • Prioritizes community engagement: The guidance discusses the importance of community engagement when districts and schools are creating generative AI guidelines. It points out that having community expertise from groups like parents establishes a firm foundation for responsible generative AI implementation.
  • Encourages comprehensive AI literacy: The state encourages LEAs to develop a comprehensive AI literacy program for staff to build a “common understanding and common language,” laying the groundwork for responsible use of generative AI in the classroom.
  • Provides actionable examples for school stakeholders: The guidance gives clear examples for concepts, such as how teachers can redesign assignments to combat cheating and a step-by-step academic integrity guide for students.
  • Highlights the benefit of built-for-purpose AI models: It explains that built-for-education tools, or built-for-purpose generative AI models, may be better options for districts or schools concerned with privacy.
  • Encourages transparency and accountability from generative AI vendors: The guidance provides questions for districts or schools to ask vendors when exploring various generative AI tools. One example of a question included to assess “evidence of impact” is, “Are there any examples, metrics, and/or case studies of positive impact in similar settings?”

Kentucky Department of Education

Three details of Kentucky’s AI guidance make it a strong example to highlight: 

  • Positions the SEA as a centralized resource for AI: It is one of the only pieces of guidance that positions the SEA as a resource and thought partner to districts that are creating their own AI policies. As part of the Kentucky Department of Education’s mission, the guidance states that the Department is committed to encouraging districts and schools by providing guidance and support, and to engaging them by fostering environments of knowledge-sharing.
  • Provides actionable steps for teachers to ensure responsible AI use: Similar to North Carolina, it provides guiding questions for teachers when considering implementing AI in the classroom. One sample question that teachers can ask is, “Am I feeding any sensitive or personal information/data to an AI that it can use or share with unauthorized people in the future?”
  • Prioritizes transparency: The guidance prioritizes transparency by encouraging districts and schools to provide understandable information to parents, teachers, and students on how an AI tool being used is making decisions or storing their data, and what avenues are available to hold systems accountable if errors arise.

Alabama State Department of Education

Alabama’s AI policy template stands out for four primary aspects:

  • Promotes consistent AI policies: Alabama takes a unique approach by creating a customizable AI policy template for LEAs to use and adapt. This allows for conceptual consistency in AI policymaking, while also leaving room for LEAs to include additional details necessary to govern AI use in their unique contexts.
  • Recognizes the importance of the procurement process: The policy template prioritizes the AI procurement process by including strong language about what details should be included in vendor contracts. It identifies two key statements that LEAs should require contractors to certify in writing: that “the AI model has been pre-trained and no data is being used to train a model to be used in the development of a new product,” and that “they have used a human-in-the-loop strategy during development, have taken steps to minimize bias as much as possible in the data selection process and algorithm development, and the results have met the expected outcomes.”
  • Provides detailed risk management practices: It gets very specific about risk management practices that LEAs should adhere to. A first key detail included in the template is that the LEA will conduct compliance audits of data used in AI systems, and that if changes need to be made to a system, the contractor will be required to submit a corrective action plan. Another strong detail included is that the LEA must establish performance metrics to evaluate the AI system procured to ensure that the system works as intended. Finally, there is language included that, as part of their risk management framework, the LEA should comply with the National Institute of Standards and Technology’s AI Risk Management Framework (RMF), conduct annual audits to ensure they are in compliance with the RMF, identify risks and share them with vendors to create a remediation plan, and maintain a risk register for all AI systems.
  • Calls out the unique risks of facial recognition technology in schools: Alabama recognizes the specific risks of cameras with AI systems (or facial recognition technologies) on campuses and in classrooms, explicitly stating that LEAs need to be in compliance with federal and state laws.

Conclusion

In the past few years, seemingly endless resources and information have become available to education leaders, aiming to help guide AI implementation and use. Although more information can be useful in navigating this emerging technology, it has created an overwhelming environment, making it difficult to determine what is best practice and implying that AI integration is inevitable.

As SEAs continue to develop and implement AI guidance in 2025, it is critical to first be clear that AI may not be the best solution to the problem that an education agency or school is attempting to solve, and second, to affirm what “responsible” use of AI in education means – creating a governance framework that allows AI tools to enhance children’s educational experiences while protecting their privacy and civil rights at the same time.

U.S. Department of Education’s AI Toolkit and Nondiscrimination Resources Provides Lasting Guidance for Educators on AI and Civil Rights
https://cdt.org/insights/u-s-department-of-educations-ai-toolkit-and-nondiscrimination-resources-provides-lasting-guidance-for-educators-on-ai-and-civil-rights/ (Fri, 24 Jan 2025)

In October 2024, the U.S. Department of Education (ED) released its Toolkit for Safe, Ethical, and Equitable AI Integration, pursuant to its obligation under President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order mandated that ED create resources, policies, and guidance to address safe, responsible, and nondiscriminatory uses of AI in education. Drilling further into their mandate, ED’s Office for Civil Rights (OCR) then released another set of guidance titled Avoiding the Discriminatory Use of Artificial Intelligence to further address the intersection of AI and civil rights in schools. Together, these resources provide much-needed guidance that CDT has strongly advocated for over the course of several years, particularly around: 1) reinforcing the intersection of AI and civil rights; 2) addressing deepfake nonconsensual intimate imagery (NCII); and 3) rebuilding trust in student work in the wake of widespread availability of generative AI. 

AI and Civil Rights

Although federal and state civil rights laws have been in existence for decades, school leaders have not had clarity on how they apply to edtech. The toolkit discusses civil rights and algorithmic bias, with a description of applicable civil rights laws that impact a school’s use and implementation of AI, while the OCR’s guidance provides illustrative examples of such uses. These acknowledgements of the existing legal obligations that school leaders must fulfill echo CDT research and analysis published last year, discussing specific edtech and AI use cases within the existing legal frameworks of the Civil Rights Act (which includes Title IV and Title IX), Section 504 of the Rehabilitation Act, and the Americans with Disabilities Act — the combination of which is aimed at preventing discrimination on the basis of race, sex, and disability, among other characteristics. The resources specifically acknowledge the risks of bias and discrimination for use cases including student activity monitoring software, content filtering and blocking software (content moderation), facial/movement recognition software that relies on biometric information, generative AI detectors, and remote proctoring software.

Deepfake NCII

ED has previously found that certain online conduct can be actionable under the rule, which would include how non-consensual intimate imagery (NCII) creates a hostile learning environment on the basis of sex. In the toolkit, ED cited the recommendations to school leaders in CDT’s report, In Deep Trouble, which focuses on the issue of NCII (both authentic and deepfake) in K-12 schools. These recommendations include: 1) instituting a trauma-informed approach to reports of deepfake NCII; 2) ensuring privacy and confidentiality for the parties involved; and 3) creating a mechanism for supporting victims after the fact (e.g., counseling, resources about having content removed from online platforms, and resources on how to report the conduct to law enforcement). OCR’s nondiscrimination guidance also offered a hypothetical on deepfake NCII to provide an example of what might be an insufficient response from schools. Because this is a growing issue in schools that shows no signs of slowing down, ED’s guidance on this topic is particularly meaningful.

Academic Integrity and Generative AI 

In addressing the equity risks posed by generative AI, ED calls out the widening gap in trust between teachers and students due to the outsized fear of gen-AI facilitated academic dishonesty and states that educational leaders should seek evidence to ensure that protected groups of students are not disproportionately impacted. The toolkit also refers to CDT’s polling research on educator experiences with generative AI in the classroom, which touched on the use of AI detector tools, student discipline for suspected generative AI use, and the distrust these uses have caused between teachers and students. In a similar vein, the OCR resource highlights the potential discriminatory impact of generative AI detectors on English Learners, spelling this out as a potentially actionable scenario under Title VI. 

Conclusion 

For years, civil society has called upon ED to provide policies, guidance, and enforcement around schools’ approach to edtech and AI – especially as it relates to civil rights and marginalized students. The toolkit was a helpful first step, and now, paired with ED’s resource on avoiding discrimination, provides much-needed clarity on the application of civil rights laws to a variety of AI uses in schools. These resources also make clear that ED should take enforcement actions for such uses, consistent with its commitment to doing so through the Department of Justice’s pledge for enforcement of civil rights, fair competition, consumer protection, and equal opportunity laws in automated systems.

As we navigate a recent change in Administration, we hope to see these issues remain a priority for the Department in the years ahead. CDT remains committed to advocating for the privacy and civil rights of students in whether and how AI is used in schools. These resources, coupled with tailored guidance (e.g., model policies, best practices on appropriate prevention and response to deepfake NCII) and active enforcement efforts, will provide states and families with a model to help ensure that all students have the opportunity to learn and grow in an environment free from discrimination and harassment.

Press Release: CDT Research Uncovers Widespread Use of Questionable Technologies in K-12 Schools Despite Parent Concern and Lack of Awareness
https://cdt.org/insights/press-release-cdt-research-uncovers-widespread-use-of-questionable-technologies-in-k-12-schools-despite-parent-concern-and-lack-of-awareness/ (Wed, 15 Jan 2025)

New survey reveals approximately one in four teachers say their school uses drones to patrol their campus – and the same number say their school has experienced a large-scale data breach in the past school year (2023-24)

(WASHINGTON) – Today, the Center for Democracy & Technology (CDT) published new survey research showing that the explosive growth of artificial intelligence (AI), along with many other technologies, in schools has happened despite parents’ concerns about, and lack of awareness of, these tools being used in educational settings.

“Since CDT began this quantitative research in 2020, we have seen a rapid expansion of educational data and technology use, including AI, in K-12 schools. However, this has happened without meaningful engagement with the families that they serve,” said CDT President and CEO Alexandra Reeve Givens. “Decisions to implement edtech tools in the classroom should be made transparently and in consultation with those they impact the most – students and their families.”

In the 2023-24 school year, CDT found that:

  • AI and other technologies are being used for student safety and academic purposes, despite high concern among parents and a lack of awareness about the level of use:
    • Approximately one quarter of teachers report that their school uses drones to patrol school campuses. Nearly half of parents are concerned about this practice, with Black and Hispanic parents expressing heightened concerns compared to white parents.
    • Fifty-five percent of teachers say that their school uses student data to predict whether individual students are at risk of poor academic outcomes. About half of parents are concerned about this approach, with Black and Hispanic parents again expressing heightened concerns compared to white parents.
    • Seventy percent of high school students report that they have used generative AI, whereas only 46 percent of parents of high schoolers say that their child has used the technology.
    • Eighty-eight percent of teachers say that their school uses student activity monitoring software to track what students are doing online, but only 45 percent of parents know about the use of this technology by their child’s school.
  • Finally, teachers reveal ongoing issues caused by a lack of strong privacy and security practices:
    • Nearly one in four teachers report their school has experienced a large-scale data breach in the past school year, and 13 percent of teachers say that they or another teacher have been doxxed.
    • One third of teachers report their school does not have a policy in place regarding student privacy for gender expansive students, or they are not sure whether their school has one.

“Introducing new technologies into K-12 schools also introduces new threat vectors and risks of irresponsible use. This underscores the importance of schools needing to clearly communicate with families and educate them about how edtech tools are being used in their child’s school – including its potential harms,” says Elizabeth Laird, Director of the Equity in Civic Technology Project at CDT. “Just because technology is rapidly evolving doesn’t mean that community engagement should be left behind. Schools can do both.”

CDT’s research is based on nationally representative surveys of 6th-12th grade public school teachers and parents, and 9th-12th grade students. The full text of the research report can be accessed at: https://cdt.org/insights/out-of-step-students-teachers-in-stride-with-edtech-threats-while-parents-are-left-behind/

###

The Center for Democracy & Technology (CDT) is the leading nonpartisan, nonprofit organization fighting to advance civil rights and civil liberties in the digital age. We shape technology policy, governance, and design with a focus on equity and democratic values. Established in 1994, CDT has been a trusted advocate for digital rights since the earliest days of the internet. The organization is headquartered in Washington, D.C., and has a Europe Office in Brussels, Belgium.

Brief – Unique Civil Rights Risks for Immigrant K-12 Students on the AI-Powered Campus
https://cdt.org/insights/brief-unique-civil-rights-risks-for-immigrant-k-12-students-on-the-ai-powered-campus/ (Wed, 15 Jan 2025)

Ongoing public discourse has sparked renewed questions about the intersection of immigration and K-12 schools. Recent statements indicate that there will be a focus on immigrant children in schools by the incoming presidential administration, including efforts to block undocumented children from attending public school and take immigration enforcement actions on school grounds. State leaders are taking similar interest in the issue, with some publicly announcing plans to challenge Plyler v. Doe’s constitutional right to an education for undocumented students and notices sent home to parents regarding their plans to “[stop] illegal immigration’s impact” on schools.

I. Introduction

Immigrant students are protected from discrimination on the basis of national origin in school under Title VI of the Civil Rights Act of 1964. National origin discrimination occurs when someone is harassed, bullied, or otherwise treated differently “stemming from prejudice or unfounded fears about their national origin (including the country or part of the world they or their family members were born in or are from, their ethnicity or perceived ethnic background, and/or the language they speak).” This brief focuses on the unique civil rights considerations for immigrant students and how schools can fulfill these legal obligations when it applies to their use of data and technology. Specifically, it: 

  • Defines who immigrant students are and how they may be present in the U.S.;
  • Analyzes the unique circumstances and risks that immigrant students face in schools;
  • Identifies the ways in which data and technology could run afoul of immigrant students’ civil rights; and
  • Provides recommendations to school leaders to ensure their use of data and technology is consistent with civil rights laws and supports the success of all students.

Although this brief focuses on non-citizen immigrants because of the unique legal risks and vulnerabilities they face, it is important to note that other groups, like immigrants who become U.S. citizens, English Learners, and those who are merely perceived to have been born outside of the U.S., are also protected from national origin discrimination. 

Read the full brief.

Brief – Education Leaders’ Guide to Complying with Existing Student Privacy and Civil Rights Laws Amidst an Evolving Immigration Landscape
https://cdt.org/insights/brief-education-leaders-guide-to-complying-with-existing-student-privacy-and-civil-rights-laws-amidst-an-evolving-immigration-landscape/ (Wed, 15 Jan 2025)

With immigration enforcement likely to intensify this year, it is critical that school administrators comply with existing privacy and civil rights laws with respect to the data they collect and the technology that they use. CDT research suggests that some schools are currently using data and technology to play a role in immigration enforcement. For example, 17 percent of teachers report that their school has shared student information with immigration enforcement in the past school year. 

Further, despite Immigration and Customs Enforcement’s (ICE) traditional policy of refraining from enforcement actions on K-12 school campuses, school officials should recognize that this is a norm – not a prohibition – and that schools need to be prepared to address potential enforcement on campus.

This document provides background on how immigration enforcement may affect K-12 schools and offers recommendations for how schools can meet long-standing legal obligations that remain unchanged regardless of increased enforcement activity.

Read the full brief.

Students’ Use of Generative AI: The Threat of Hallucinations
https://cdt.org/insights/students-use-of-generative-ai-the-threat-of-hallucinations/ (Mon, 18 Dec 2023)

[ PDF version ]

Generative AI systems trained on large amounts of existing data use machine learning to produce new content (e.g., text or images) in response to user prompts. In education, generative AI is most often talked about in the context of academic integrity, with teachers expressing fears of cheating in the classroom.

However, our polling of teachers, parents, and students shows that 45 percent of students who say that they have used generative AI report using it for personal reasons, while only 23 percent of students report using it for school. Of those who have used the technology for personal reasons, many of the uses are high stakes – 29 percent have used it for dealing with anxiety or mental health issues, 22 percent have used it for dealing with issues with friends, and 16 percent have used it for dealing with family issues. As a result, even in the context of personal use, generative AI systems that produce incorrect information can have significant harmful consequences. 

What Are Hallucinations And Why Do They Happen?

By virtue of their style of writing and the way they impart information, generative AI systems can appear to be trustworthy and authoritative sources of information. However, these systems often produce text that is factually incorrect. These factual errors are referred to as “hallucinations.” Hallucinations are a consequence of both the design and operating structure of generative AI systems. 

From a design standpoint, generative AI systems are built with the intention of mimicking human-produced text. To accomplish this, they are generally trained on enormous datasets of text from which the system learns about the structure of sentences and paragraphs, and then produces text that seems meaningful to human readers by repeatedly predicting the next most sensible word. This process is not designed to create content that is true or correct, only content that is sensical.

Structurally, most generative AI systems operate “offline,” meaning they are not actively pulling data from the internet to respond to prompts. So they are restricted to the data contained in their training datasets. This makes generative AI systems particularly unreliable when it comes to current events that do not appear in their training datasets.
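
To make the idea of “repeatedly predicting the next most sensible word” concrete, the minimal sketch below (our own illustration, not drawn from any state guidance or vendor documentation; it assumes the open-source Hugging Face transformers and PyTorch libraries and uses the small GPT-2 model purely as an example) generates text one token at a time by always appending the most probable next token. Nothing in the loop checks whether the resulting text is true.

    # Minimal sketch of next-token text generation (illustrative only).
    # Assumes: pip install torch transformers
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Repeatedly pick the single most probable next token and append it.
    # The model is optimized to produce plausible-sounding text, not to
    # verify facts, which is how fluent but incorrect "hallucinations" arise.
    for _ in range(10):
        with torch.no_grad():
            logits = model(input_ids).logits
        next_token_id = logits[0, -1].argmax()
        input_ids = torch.cat([input_ids, next_token_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Production chatbots layer sampling strategies, much larger models, and safety filters on top of this basic loop, but the core mechanism of predicting one token at a time is the same.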

The Potential Detrimental Impacts of Hallucinations on Students

The reality of generative AI hallucinations, paired with high levels of student personal use for important issues, raises serious concerns about access to accurate information in times of crisis. For example, students could be asking ChatGPT (or another generative AI tool) questions about how to deal with an ongoing mental health issue, which could potentially be a life or death situation. Because most generative AI systems likely to be used by students are trained on information gleaned from the internet, they may replicate common misunderstandings of sensitive issues like mental health challenges, gender roles, and sexual orientation.

In addition to traditional hallucinations, which are simply incorrect information, generative AI can also have significant emotional impacts on students who utilize the tool for personal reasons by replicating societal biases against marginalized populations, including on the basis of race, gender, or sexual orientation. Students, especially during the vital developmental stages of K-12 education, may internalize these biases, whether against themselves or others.

Hallucinations are also of significant concern when students use generative AI platforms for academic use. The possibility for students to receive inaccurate information can run directly counter to schools’ goal of imparting reliable, quality information to students. Students who do not understand these tools’ potential for hallucinations may use the tools in ineffective ways and miss beneficial uses. Without understanding generative AI’s shortcomings and limitations, students may not be able to effectively leverage its potential as a tool to supplement their learning and critical thinking skills.

How Should Schools Approach the Issue of Hallucination?

To combat the potentially devastating consequences of generative AI hallucinations in both the personal and academic contexts, schools must:

  • Understand the limitations of generative AI and ensure that teachers are adequately trained: Though the potential benefits of these tools to enhance learning can be exciting, it is imperative for school officials to be thoroughly steeped in the technology’s shortcomings and to impart that knowledge to educators. Teachers play a critical role in ensuring that generative AI is used in responsible, appropriate ways in the classroom. But to do so, they need access to resources and training.
  • Continue to invest in counselors and other mental health supports: Schools should be wary of pushing students towards using generative AI as a resource on a topic as sensitive as their mental health. Ongoing mental health issues require human empathy and expertise, so schools should not be acquiring generative AI tools to replace or even to triage care that would otherwise be provided by a human. If schools are going to procure a tool to supplement the counselors and mental health supports already in place, they should reference our guidance on responsible procurement principles, since even as a supplemental tool, generative AI systems can cause harm if not tested and governed appropriately.
  • Provide education for students on what generative AI is, how it works, and why hallucinations occur: To combat the unchecked public hype around generative AI, schools should equip students with basic knowledge of the technology, its capabilities and limitations, and how it can go wrong in both academic and personal uses.
  • Provide education for students on media literacy and research skills: The release of ChatGPT last November underscored the need for students to understand how to be responsible, effective consumers of knowledge via new technological tools. Student use of generative AI is increasingly inevitable in the same way as their use of the internet, so it is vital that schools provide students training and resources on how to assess the accuracy and reliability of information gleaned through ChatGPT and other generative AI platforms.
  • Ensure that teachers and students understand when generative AI is appropriate to use: Generative AI is not meant to replace traditional teaching and learning by any means – it is not a replacement for knowledge and not an effective therapist or sounding board for personal issues. However, it can be used, for example, as an assistive tool to help improve writing or used as a novel tool for research when beginning to explore a new topic. Schools should provide guidance and training to both teachers and students on how to make effective use of generative AI.

[ PDF version ]

The Shortcomings of Generative AI Detection: How Schools Should Approach Declining Teacher Trust In Students
https://cdt.org/insights/the-shortcomings-of-generative-ai-detection-how-schools-should-approach-declining-teacher-trust-in-students/ (Mon, 18 Dec 2023)

[ PDF version ]

Generative AI – systems that use machine learning to produce new content (e.g., text or images) in response to user prompts – has infiltrated the education system and fundamentally shifted the relationships between teachers and their students.

Across the country, educators have expressed high levels of anxiety about students using generative AI tools, like ChatGPT, to cheat on assignments, exams, and essays in addition to fears of students losing critical thinking skills. One professor even described it as having “infected [the education system] like a deathwatch beetle, hollowing out sound structures from the inside until the imminent collapse.” In response to these fears, school districts, like New York City and Los Angeles, quickly imposed bans on its use by both educators and students. Schools have turned to tools like generative AI detectors to attempt to restore educator control and trust; however, detection efforts have fallen short in both their implementation and efficacy.

CDT Research Affirms Declining Trust…

One significant finding from our polling research of teachers, parents, and students is that teacher perception of widespread generative AI use for cheating appears to be largely unfounded. Forty percent of teachers who say that their students have used generative AI for school think their students have used it to write and submit a paper. But only 19 percent of students who report having used generative AI say they have used it to write and submit a paper – a finding that is supported by other survey research.

Despite the reality that a large majority of students are not using generative AI for nefarious academic purposes, teachers have still become more mistrustful of students’ work – perhaps due to the widespread, fear-stoking coverage of cheating instances. Sixty-two percent of teachers agreed with the statement that “[g]enerative AI has made me more distrustful of whether my students’ work is actually theirs.” And this mistrust is translating into certain groups of students being disciplined at disproportionate rates for using, or being accused of using, generative AI – Title I and licensed special education teachers report higher rates of disciplinary actions for generative AI use among their students.

These high levels of mistrust among teachers and subsequent disciplinary action have led to frustration among students and parents about erroneous accusations of cheating, which can cause an even further rift between teachers and students. This erosion of trust is potentially damaging to school communities where strong relationships between educators and their students are imperative in providing a safe, quality learning environment.

…And Insufficient Detection Tools And Training

Tools designed to detect when generative AI was used to produce content are the only technological solutions currently available to help teachers attempt to combat generative AI-based cheating; however, they fall short of solving existing trust issues. To begin, school policies on content detection tool use are spotty – only 17 percent of teachers say that their school provides a content detection tool as part of its larger technology platform, and 26 percent say their school recommends their use, but leaves it up to the educator to choose one and implement it. Without strong guidance on the use and implementation of content detection tools, teachers appear uneasy about utilizing them as a defense mechanism for cheating. Only 38 percent of teachers report using a generative AI content detection tool regularly, and just 18 percent of teachers strongly agree that these tools “are an accurate and effective way to determine whether a student is using AI-generated content.” Teachers’ lack of confidence is well-founded as, at least at this point, these tools are not consistently effective at differentiating between AI-generated and human-written text.

Beyond using tools for detection, teachers’ confidence in their own effectiveness at detecting generative AI-created writing is low – 22 percent say they are very effective and 43 percent say they are somewhat effective. This is particularly concerning given that most teachers have not received guidance on how to detect cheating. Only 23 percent of teachers who have received training on their schools’ policies and procedures regarding generative AI have gotten guidance on how to detect student use of ChatGPT (or another generative AI tool) in submitted school assignments.

How Should Schools Approach Declining Teacher Trust?

Given our research and what we know about generative AI content detection tools, they are not the answer, at least for now. These tools suffer from accuracy issues, and may disproportionately flag non-native speakers. Instead, schools need to:

  • Offer teacher training on how to assess student work in light of generative AI: To help teachers feel like they have more control over academic integrity in the classroom, schools must properly equip them to deal with the new reality of generative AI. This means providing them with training on the limitations of detectors and how to respond if they reasonably suspect that a student is cheating.
  • Craft and implement clear policies about which uses are allowed and prohibited: Our polling from this past summer shows that schools are failing to provide guidance on what is defined as “improper use” of generative AI, with 37 percent of teachers reporting that their school has no policy or they are not sure if there is a policy in place on generative AI. It is imperative for both teachers and students to know this, so that everyone is on the same page about responsible generative AI use. 
  • Encourage teachers to modify assignments to minimize the effectiveness of generative AI: Understanding what generative AI systems are not good at can help teachers design assignments where using generative AI will not be helpful to students. For instance, generative AI systems are often ineffective at providing accurate sources for their claims. Requiring students to provide citations for any claims they make will likely require students to go far beyond a generated response. 

[ PDF version ]

Brief – Late Applications: Disproportionate Effects of Generative AI-Detectors on English Learners
https://cdt.org/insights/brief-late-applications-disproportionate-effects-of-generative-ai-detectors-on-english-learners/ (Mon, 18 Dec 2023)

[ PDF Version ]

CDT recently released legal research on the application of civil rights laws to uses of education data and technology, including AI. As the use of generative AI increases both inside and outside the classroom, one group of students at particular risk of unequal treatment is English Learner (EL) students – those who are not yet able to communicate fluently or learn effectively in English. Research indicates that so-called AI detectors are disproportionately likely to falsely flag the writing of non-native English speakers as AI-generated, putting them at greater risk for being disciplined for cheating in school. Schools need to be aware of this potential disparity and take steps to ensure it does not result in violating the civil rights of EL students.

Who Are EL Students?

Nationally, English learners (ELs) are the fastest growing student population, accounting for 10 percent of the overall student population in 2019, with 81 percent of public schools serving at least one EL student. While some EL students are immigrants themselves, most are actually the U.S.-born children of immigrants. Both face unique challenges in school. For example, non-U.S. born ELs who enter the K-12 system as high schoolers are under immense pressure to graduate on time while also reaching English language proficiency; they may also have entered the U.S. without their family, meaning that they bear significant burdens such as unstable housing and the obligation to work to support themselves. 

The goal for all ELs is to reach English proficiency; once they achieve this, they are reclassified and no longer considered ELs. This reclassification process makes ELs a dynamic student group that is more difficult to track properly than other vulnerable student populations. By 12th grade, ELs make up only 4 percent of the total population of students, down from 16 percent in kindergarten. Even after reclassification, however, studies have historically suggested that EL students still struggle – “sizable proportions of the reclassified students, while able to keep pace in mainstream classrooms in the early elementary school years, later encountered difficulties in middle and high school,” with some ending up having to repeat a grade. Data out of California shows ELs lagging behind their peers academically, from test scores to grades to graduation rates. However, some advocates are optimistic that ELs, with the right support and tracking, are closing this gap.

Generative AI, EL Students, and the Risk of Disproportionate Discipline

EL students are already at higher risk for school discipline. The risk of suspension for a student with EL status is 20 percent higher than for a non-EL student.[1] Moreover, approximately three quarters of EL students are native Spanish speakers, and Hispanic students are overrepresented in alternative schools, where students are typically placed due to disciplinary issues and where they tend to have less access to support staff like counselors and social workers. CDT research also found that Hispanic students are more likely than non-minority students to use school-issued devices, and thus more likely to be subject to continuous monitoring by student activity monitoring software, which can lead to even higher rates of discipline.

The increased use of chatbots such as ChatGPT threatens to exacerbate the discipline disparity for EL students. Generative AI has become a contentious topic in the education sector. Concerns about academic dishonesty are high, with 90 percent of teachers reporting that they think their students have used generative AI to complete assignments. As CDT has previously reported, student accounts suggest that generative AI is actually primarily used for personal reasons rather than to cheat, and that certain populations, such as students with disabilities, are more likely to use the technology and more likely to have legitimate accessibility reasons for doing so. Still, disciplinary policies are cropping up across the country to penalize student use of generative AI and are sometimes accompanied by newly acquired programs that purport to detect the use of generative AI in student work. 

For EL students, this could be uniquely problematic. A recent study out of Stanford University shows that AI detectors are very likely to falsely flag the writing of non-native English speakers as AI-generated, and that there is a significant disparity in false flags between non-native and native English speakers. The study compared essays written by U.S.-born eighth graders with Test of English as a Foreign Language (TOEFL) essays written by non-native English speakers. Detectors were “near perfect” in evaluating essays written by U.S.-born writers, but falsely flagged 61.22 percent of the TOEFL essays as AI-generated (particularly troubling given that the TOEFL is, by its nature, never administered to native English speakers in the first place). All seven AI detectors that the study tested unanimously but falsely identified 18 of the 91 TOEFL student essays (19 percent) as AI-generated, and a remarkable 89 of the 91 TOEFL essays (97 percent) were flagged by at least one of the detectors. James Zou, who conducted the study, said of its results: “These numbers pose serious questions about the objectivity of AI detectors and raise the potential that foreign-born students and workers might be unfairly accused of or, worse, penalized for cheating.” 
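
To make the kind of disparity the study describes concrete, the sketch below shows how a district or researcher might compare a detector’s false-positive rates across writer groups before relying on it. It is illustrative only: the group labels, sample sizes, and flag counts are invented for demonstration and are not data from the Stanford study or from any real detector.

```python
# Illustrative only: hypothetical audit of an AI detector's false-positive
# rates across writer groups. All labels and numbers below are made up for
# demonstration; they are NOT from the Stanford study or any real detector.

from collections import defaultdict

def false_positive_rates(samples):
    """samples: list of (group, flagged) pairs. Every essay in the audit is
    known to be human-written, so any flag counts as a false positive."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in samples:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical detector output over 200 human-written essays.
audit_sample = (
    [("native_english", False)] * 95 + [("native_english", True)] * 5 +
    [("non_native_english", False)] * 40 + [("non_native_english", True)] * 60
)

rates = false_positive_rates(audit_sample)
disparity = rates["non_native_english"] / rates["native_english"]
print(rates)      # {'native_english': 0.05, 'non_native_english': 0.6}
print(disparity)  # how many times more often non-native writers are falsely flagged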

As with students with disabilities, there are legitimate uses of generative AI that could benefit EL students in ways that might make them more likely users, and thus even more likely to be disciplined under new school policies. According to some EL educators, generative AI “can potentially address some of the pressing needs of second language writers, including timely and adaptive feedback, a platform for practice writing, and a readily available and dependable writing assistant tool.” Some say that generative AI could benefit both students and teachers in the classroom, by providing students with engaging and personalized language learning experiences, while allowing teachers to “help students improve their language skills in a fun and interactive way, while also exposing them to natural-sounding English conversations.”

Civil Rights Considerations

These concerns about disproportionate flagging and discipline are not just a matter of bad policy. Where students belonging to a protected class are being treated differently from others because of their protected characteristics, civil rights alarm bells sound. The Civil Rights Act of 1964 (the Act) generally prohibits state-sponsored segregation and inequality in crucial arenas of public life, including education. Title VI of the Act protects students from discrimination on the basis of, among other attributes, race, color, and national origin, and was enacted to prevent (and in some cases, mandate action to actively reverse) historical racial segregation in schools. ELs are protected from discrimination under Title VI on the basis of both race and national origin, and are entitled to receive language services and specialized instruction from their school in the “least segregated” manner possible. Under the circumstances described above, EL students arguably experience unlawful discrimination under the theories of disparate treatment, disparate impact, or hostile learning environment as a result of false flagging.

  1. Disparate impact and disparate treatment. Disparate impact occurs where a neutral policy is applied to everyone, but primarily members of a protected class experience an adverse effect. Disparate impact does not require intentional discrimination. Disparate treatment requires a showing of intent to treat a student differently (at least in part because of their protected characteristics) and can occur either where a neutral policy is selectively enforced against students belonging to a protected class, or where the policy explicitly targets that protected group. Here, an education agency’s generative AI and discipline policy might be over-enforced against EL students, due to the sheer disproportionality of false flags for non-native English speakers suggested by the Stanford study. Where an education agency is aware of these high error rates and consequent adverse effects for a protected group of students but nonetheless chooses to deploy the technology, it arguably meets requirements for a disparate impact or even a disparate treatment claim. 
  2. Hostile learning environment. A hostile learning environment occurs where a student — or group of students — experiences severe, pervasive, or persistent treatment that interferes with the student’s ability to participate in or benefit from services or activities provided by the school. For EL students, having their work frequently flagged for cheating by AI detectors and dealing with the accusation, investigation, and discipline that results, might create such an environment. Education agencies are tasked with the general obligation of ensuring a nondiscriminatory learning environment for all students. This obligation extends to responsibility for the conduct of third parties, such as vendors or contractors, with which the agency contracts, even if the conduct was not solely its own.

Recommendations

Given the known inadequacies of AI detectors and the clear potential for disproportionate adverse effects on marginalized groups of students such as EL learners, education agencies should at minimum consider taking the following steps.

Contemplate necessity of use

Assess whether this technology will actually help accomplish the stated goal and whether it should be used at all. As a starting point, the goal of deploying these technologies is to prevent academic dishonesty. Educators are skilled professionals who are tasked with understanding their students’ skills and challenges. More traditional mechanisms for cheating, such as purchasing essays online or having them written by a friend or family member, are often easy for an educator familiar with a student’s work and skill level to identify. Given the known error rates of AI detectors, there is nothing to suggest that these technologies could or should be used to supplant a teacher’s professional judgment in determining whether a piece of writing was actually the student’s own work.

Provide training regarding reliability  

Ensure educators understand: (i) the success and error rates of AI detectors, and the disproportionate error rate for non-native English speakers; (ii) that AI detectors should not supplant an educator’s professional judgment; and (iii) that AI detector flags are not reliable as concrete proof of academic dishonesty. At most, if educators use AI detectors at all, they should recognize that a flag can only be one piece of a broader inquiry into potential academic dishonesty.

Provide students an appeal process to challenge flags 

To the extent that schools use AI detectors, they must put in place significant procedural protections, especially given the known error rates. Among the checks and balances that should be in place following a flag by an AI detector is the opportunity for implicated students to respond and advocate for themselves. Understand, however, that there are likely to be equity concerns with this process as well, as some students may not be as equipped as others (depending on grade level, English proficiency, etc.) to understand the allegations or to refute them.

Conclusion

As schools grapple with rapidly emerging technologies, it is understandable that the response may include adopting innovative technologies of their own to combat undesired uses. However, it remains vital to stay vigilant about the potential pitfalls of these technologies and to ensure that protecting the civil rights of all students in the classroom remains a key priority.

[ PDF Version ]

What Was Once Science Fiction Is Now Reality: Orwellian Uses of Safety Tech in K-12 Schools https://cdt.org/insights/what-was-once-science-fiction-is-now-reality-orwellian-uses-of-safety-tech-in-k-12-schools/ Tue, 12 Dec 2023 19:01:37 +0000

Prior to releasing our survey research of students, parents, and teachers this past September, our team spent months working with an independent research firm to brainstorm questions that would thoughtfully reveal how schools are currently implementing various uses – including more extreme uses – of educational data and technology (edtech) aimed at keeping students safe. Anecdotally, we had heard of school districts utilizing predictive analytics, remote proctoring, facial recognition, law enforcement data sharing, weapon detection systems, and student location tracking – which all carry serious, documented risks. But we wanted to know: How common are these edtech tools, really?

To cover all our bases on potential “extreme” use cases, we originally included a question about armed drones that could surveil school grounds for safety reasons and even respond to safety threats; however, we quickly decided to cut it since the scenario seemed far too outlandish. Fast forward to August, when our surveys were in the field: that assumption was proven wrong.

Philadelphia, a large urban school district, announced that it would be rolling out district-owned drones “to patrol violence-prone areas without the need for police on the ground.” In some cases, the drones would be piloted by students, but drone footage would be monitored (presumably) by a school safety official. This seemingly dystopian use of technology, paired with the results of our survey research, made one thing very clear: Invasive school safety tools are actively being implemented by school districts across the country, even ones we previously thought “too outlandish.”

Experimental, Potentially Harmful Safety Tools Are Being Used Regardless of Student, Parent Concerns

Uses of school safety technology largely driven by artificial intelligence (AI) are expanding in schools to respond to mass shootings, the youth mental health crisis, and other ever-present safety threats to staff and students. Even though our survey research shows there is not yet widespread adoption of some of the more invasive tools, schools still report sizable use of such tools to prevent safety issues, respond to safety issues, or involve law enforcement.

What is more alarming is that students and parents report high levels of concern about most of these more extreme use cases, but schools are still deploying them regardless. For example, 36 percent of teachers reported that student data is being analyzed to predict who would be more likely to commit a crime, act of violence, or act of self-harm, even though 69 percent of students and parents reported being extremely or somewhat concerned about that particular use. This shows a deep disconnect between schools, parents, and students in their priorities when it comes to edtech procurement decisions.

Teacher Q: Listed below are potential ways that data or technology could be used in schools. Which of the following is your school or school district doing today?

Student & Parent Q: Listed below are potential ways that data or technology could be used in schools. How concerned would you say you are with each if they were used at your school?
| Use of data or technology | Teachers: in use at their school | Students: concerned | Parents: concerned |
| --- | --- | --- | --- |
| Prevent safety issues | | | |
| Student data are being analyzed to predict which individual students would be more likely to commit a crime, commit an act of violence, commit an act of self-harm, etc. | 36% | 69% | 69% |
| Monitoring what students post publicly on their personal social media accounts | 37% | 71% | 68% |
| Using cameras with facial recognition technology to check who should be allowed to enter a school building or someone who should not be there | 33% | 55% | 58% |
| Respond to safety issues | | | |
| Tracking students’ physical location through their phones, school-provided devices like laptops, or digital “hall passes” when they leave the classroom | 36% | 74% | 71% |
| Gunshot detection system on school property | 27% | 45% | 55% |
| Using cameras that use artificial intelligence to notice unusual or irregular physical movements, which could identify an emergency or critical event at the school | 31% | 58% | 60% |
| Involve law enforcement | | | |
| Student data such as grades, attendance, and discipline information are being shared with law enforcement | 38% | 65% | 66% |

Table showing the high rates of concern among students and parents about specific uses of school safety technology (e.g. a gunshot detection system), and the sizeable rates of their use in schools as reported by teachers.

Why Should Education Leaders And Policymakers Be Concerned About These Uses?

As previous CDT research has affirmed, the use of technology in the name of student safety presents significant risks such as:

  • Lack of efficacy and accuracy: Many tools used to prevent or respond to safety issues lack evidence that they actually live up to their stated intent – to keep students safe. They can be subject to technical limitations, difficult to audit, and produce false positives, which could lead to students experiencing excessive, unsubstantiated disciplinary action or interaction with law enforcement. Additionally, some safety tools are not designed specifically for the school context, making them potentially unequipped to handle the nuances and highly sensitive nature of student data.
  • Chilling effects: Having various invasive safety technology tools as a regular part of a student’s learning environment can actually cause students to feel less safe in the classroom. Excessive monitoring and surveillance can chill speech, associations, movement, and access to vital resources, posing serious risks to students’ privacy, free expression, and ability to learn. 
  • Disproportionate impact: Safety tools driven largely by AI, like student activity monitoring and predictive analytics, are proven to cause disproportionate negative privacy and equity harms to protected classes of students on the basis of race, sex, and disability status. Moreover, algorithmic risk assessment and facial recognition tools are often trained on biased data that lack social and contextual nuance. This could cause students of color to be overly identified in especially high-stakes safety situations, thus subjecting them to increased scrutiny and unwarranted encounters with law enforcement.
  • Cost and resource management: Schools often lack resources, expertise, and personnel to effectively monitor and measure the impact of school safety tools, leading them to overly rely on the data gathered and decisions made by a machine. Again, this is particularly concerning given the lack of evidence that these tools are actually effective.
  • Governance mechanisms: Often school safety technology is procured out of fear of imminent threats to students and staff, without clear policies and procedures on how these systems and student data will be used and governed. There is also no uniform practice of parent, student, and broader community engagement in the procurement process of these tools, raising concerns about transparency and centering concrete student needs.
  • Cybersecurity: Related to the governance issues above, the more tools schools have collecting sensitive student data, the greater the risk of cyberattacks and data breaches, especially as many schools lack sufficient IT/privacy personnel.

Recommendations

With increasing safety threats to schools, expanded use of these tools is likely. However, to responsibly utilize high-stakes school safety technology in ways that best serve students’ safety and wellbeing needs, schools must approach procurement and implementation with students’ privacy, civil rights, and civil liberties top of mind. Looking ahead, schools should:

  • Expand their “safety” definition: On top of increasingly devastating safety threats like school shootings, students face countless other safety and wellbeing challenges that, if not acknowledged prior to procurement of new technology, will only be exacerbated. For example, students of color have faced negative impacts of overbroad policing, surveillance, and discipline in schools. School officials must take into account not only imminent safety issues, but also systemic safety concerns, especially those facing historically marginalized and under-resourced students.
  • Assess capacity and feasibility: A misconception is that acquiring new safety technology may offset the workload of existing school safety staff; however, that often is not the case given the time and care needed to ensure these tools are being used responsibly. Responding to and preventing safety threats during school hours requires expertise and proper personnel, which school districts can already lack given the many other functions they are required to perform. Schools also face increased pressure to monitor and respond to threats that happen outside of school hours, such as students’ social media posts. Before adding a new tool onto their already existing school safety infrastructure, school officials should ask questions like:
    • Given our existing resources and staffing, can we ensure (a) that this new tool would effectively and positively impact students’ safety; (b) that we have the capacity and capability to use it effectively; and (c) that we can mitigate any risks of discrimination or other harms that it presents?
    • Would acquiring this new piece of technology detract from our mission of providing kids a quality education?
  • Practice data minimization: Particularly when dealing with minors, the best data practices are to collect only strictly necessary information; to limit how the data can be used, who can access it, and how long it is retained; and to determine whether specific features of the acquired tool can be disabled (a minimal, hypothetical sketch of how such a policy might be encoded appears after this list).
  • Be transparent about data management: Vendors of school safety technology often lack transparency around how data is stored, how long it is retained, and the security measures they have in place. It is vital for schools to obtain this information and communicate it clearly to students and parents. Parents and their children have a right to know and understand how their data is being used by the school and these third-party companies.
  • Probe vendors about effectiveness and auditing during procurement: School officials must ask vendors to provide evidence demonstrating their product’s effectiveness, particularly in a school setting. In the same vein, vendors should also be able to provide information about auditing their product for disproportionate impact on protected classes of students. If vendors cannot answer some of these basic questions about their product, schools should rethink purchasing:
    • What evidence can you provide that the tool is effective at fulfilling the purposes for which we are purchasing it?
    • How are safety and impact being measured, if at all?
    • Do you have data on how your product performs across varying student demographics? If so, may we have access to it?
  • Bring all necessary school staff to the decision-making table: Procuring new safety technology is a complex process that requires a breadth of knowledge and expertise to ensure that tools can be used responsibly and for the benefit of students. On top of traditional school administrators in charge of procurement and contracts, schools should ensure that their chief information officers (CIOs), chief privacy officers (CPOs), and civil rights coordinators are included in the conversation to bring their unique perspectives and recommendations.
  • Engage parents, students, and other community members in the procurement process: Before making a decision to purchase from a vendor, it is imperative for school officials to understand the thoughts, concerns, and needs of who the technology is ultimately serving – students. Schools should ask questions like:
    • Do students and parents perceive these tools as modes of keeping them (or their child) safe? If not, how would this impact their learning experiences? 
    • Do parents want to be informed about our vision for this procurement decision? If so, what details would they like to know?
  • Create clear governance policies and procedures prior to procurement: Before engaging in purchasing conversations, it is imperative that schools have a plan in place for what data they can securely collect, how it will be stored, how long it will be stored, deletion procedures, who has access to it, and the process for responding to safety threats that may be flagged by these systems. Humans must be involved in monitoring and analyzing the data these systems generate, and in making decisions based on it, since safety tools left on their own may make erroneous decisions that significantly impact students’ wellbeing and educational outcomes; human involvement thus creates a mechanism for accountability. This includes determining when designated school personnel should alter or override a determination or decision made by a safety tool. Humans, particularly those trained to monitor safety systems in real time, may have knowledge or context clues that a safety tool might not be able to account for. Additionally, due to the potential impact that these tools have, schools must also have an accessible redress process if students and their families feel a decision was wrong or unfair.
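
As noted in the data-minimization item above, one way to make such a policy enforceable is to encode it where district technical staff can check records against it. The sketch below is a minimal, hypothetical illustration: the field names, roles, and retention periods are invented and do not reflect any real school system, vendor product, or legal requirement.

```python
# Illustrative only: a hypothetical data-minimization policy check.
# Field names, roles, and retention periods are invented for demonstration.

from datetime import date, timedelta
from dataclasses import dataclass

POLICY = {
    # field: (allowed_roles, retention_days)
    "attendance":      ({"registrar", "principal"}, 365),
    "safety_alert":    ({"counselor"}, 30),
    "device_location": (set(), 0),   # not collected at all under this policy
}

@dataclass
class Record:
    field: str
    collected_on: date

def is_collection_allowed(field: str) -> bool:
    """A field may be collected only if the policy lists it with a nonzero retention."""
    _, retention_days = POLICY.get(field, (set(), 0))
    return retention_days > 0

def is_access_allowed(field: str, role: str) -> bool:
    """Only roles the policy names for a field may access it."""
    roles, _ = POLICY.get(field, (set(), 0))
    return role in roles

def is_expired(record: Record, today: date) -> bool:
    """Records older than the field's retention period should be deleted."""
    _, retention_days = POLICY.get(record.field, (set(), 0))
    return today > record.collected_on + timedelta(days=retention_days)

# Example: location tracking is never collected under this hypothetical policy,
# counselors may view safety alerts, and alerts older than 30 days are expired.
print(is_collection_allowed("device_location"))                                 # False
print(is_access_allowed("safety_alert", "counselor"))                           # True
print(is_expired(Record("safety_alert", date(2023, 1, 1)), date(2023, 3, 1)))   # True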

Conclusion

Though the idea of being at the cutting edge is alluring, CDT research has shown that schools are already failing to properly implement seemingly more straightforward edtech tools, like content blocking and filtering (which has been around since the early 2000s), in ways that protect students’ privacy, equity, and civil rights. Piling on these more complex, invasive modes of school safety technology without adhering to the recommendations made above will only further exacerbate these harms, especially for protected classes of students who are already disproportionately harmed by edtech tools.

Innovation, privacy, and equity considerations can and should go together when deciding to adopt new school safety technology tools, especially for more extreme, high-stakes safety uses. Classrooms should be a space for children to learn and grow without having to worry about personal safety, threats to their wellbeing, or invasive data practices. The promises of technological advancement can be realized alongside thoughtful procurement by school administrators, which includes elevating the voices of students, parents, and teachers.

Report – Late Applications: Protecting Students’ Civil Rights in the Digital Age https://cdt.org/insights/report-late-applications-protecting-students-civil-rights-in-the-digital-age/ Wed, 20 Sep 2023 04:01:00 +0000

Graphic for CDT report, entitled “Late Applications: Protecting Students’ Civil Rights in the Digital Age.”

This report is also authored by Sydney Brinker, former CDT Intern

Education data and technology continue to expand their role in students’, teachers’, and parents’ lives. While issues of school safety, student mental health, and achievement gaps remain at the forefront of education, emerging technologies such as predictive analytics, monitoring software, and facial recognition are becoming more popular. As these technologies expand, so do questions about how they might be used responsibly and without inflicting negative consequences on students, especially historically marginalized students.

The education sector has been responsible for protecting the civil rights of students for decades. Existing civil rights laws provide an important foundation to ensure that data and technology practices in schools achieve their intended function without inadvertently having discriminatory effects against students on the basis of race, sex, or disability.

Analysis of data that is disaggregated by a number of student demographics is crucial to understanding trends regarding protected classes of students and illustrates why an ongoing focus on student civil rights is necessary; however, the analysis contained in this report focuses on the use of technology and data in real time to make decisions about individual students, rather than the use of data to identify overall trends.

Examining the current uses of education data and technology under various civil rights concepts, this report offers guidance to help policymakers and education leaders understand how to better center civil rights in the digital age with respect to their practices and policies, especially regarding nondiscrimination and technology procurement. This guidance includes recommendations for school leaders to ensure that education data and technology uses do not run afoul of civil rights laws and that all students are positioned to be successful in school and beyond:

  • Audit existing nondiscrimination policies, practices, and notices.
  • Update or create new policies to address data and technology use.
  • Revise or implement procurement policy for education technologies.
  • Consolidate and make readily available all required nondiscrimination notices.
  • Post the consolidated policy in district buildings and on school websites.
  • Designate specific personnel to be responsible for ensuring compliance with nondiscrimination laws regarding education data and technology.
  • Conduct analysis and publicly report information on nondiscrimination policies and practices for data and technology on an ongoing basis.

Read the full report here.

Read the press release here.
