Health Privacy Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/privacy-data/health-privacy/

Mind Matters: Mental Privacy in the Era of Brain Tracking Technologies (August 14, 2024)
https://cdt.org/insights/mind-matters-mental-privacy-in-the-era-of-brain-tracking-technologies/

By CDT Intern Ebie Quinn

In an age where personal data is regularly collected and tracked online, it can feel like our brain is the last truly private place. While people are generally aware that their clicks, likes, and scrolls are recorded and stored, many take solace in the idea that their thoughts remain private. Yet, our “neural” or “mental” privacy is being threatened by the introduction of commercial products that measure and track our brain signals. These brain-based technologies record electrical activity and motor function in the brain, which companies might use to try to discern information about our emotions, preferences, and mental health.

Despite the novelty of these products, technology measuring brain activity is not new. Brain-based technologies have primarily been deployed in health care and research settings, where they are used to diagnose, treat, and monitor patients with brain-related diseases. Progress has been made in treating patients with paralysis or other mobility-limiting diseases through the use of Brain Computer Interfaces (BCIs), an “invasive” version of brain tracking technology. Additionally, a range of brain-based technologies are being developed to address mental health disorders, including depression and anxiety, through the use of neurofeedback and similar treatments. 

Commercially, brain tracking technologies are growing. In 2022, a man with ALS successfully used a computer independently after receiving a BCI that converts his neural activity into cursor movement. Elon Musk’s company Neuralink is also pursuing this technology, and in January 2024, the company’s first patient received the implant.

Generally, however, commercial brain tracking technologies are still largely non-invasive, and appear in the form of wearable headbands and headphones. Estimates show that the burgeoning neurotech industry is growing at an annual rate of 12% and is expected to reach $21 billion by 2026. Companies like Muse and Brainbit have developed headbands that collect brain activity to improve meditation and sleep. Further, NeurOptimal developed EEG sensors and ear clips designed to assist users with their golf game through “brain training,” and Emotiv developed EEG headphones that claim to monitor attention in the workplace. Just one year ago, Apple patented a design for AirPods that measure and collect brain signals from the wearer, an indication that these technologies are becoming increasingly mainstream. 

Commercial brain tracking presents new privacy risks. While brain data in the medical setting is protected by the Health Insurance Portability and Accountability Act (HIPAA), these protections do not apply in a commercial context, which is generally governed by the Federal Trade Commission (FTC) at the federal level. As a result, consumer brain data is vulnerable to being collected, stored, and sold with little oversight. Specifically, the collection of this data could lead to harms arising from practices such as unwanted disclosure of sensitive health information and diagnoses, increased surveillance and productivity monitoring in the workplace, and targeted advertising.

Neural data collected in a commercial setting may be used to make inferences about an individual’s health without their knowledge or understanding. Brain data may reveal if a person has epilepsy, anxiety, depression, or other mental disorders, as certain patterns of brain activity, called “biomarkers” or “neuromarkers,” can be linked to specific mental health conditions. Furthermore, the link between these brain activity patterns and health conditions might be used to make predictions about future outcomes, including the development of mental health disorders and diseases, as well as learning styles and alcohol and drug use. While this form of data collection might provide significant benefits in a medical context by helping guide treatment approaches, when deployed in a commercial setting, individuals risk the unwanted use and disclosure of sensitive health information.
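To make the inference risk concrete, here is a minimal sketch, in Python, of how readings from a consumer EEG headset could be reduced to frequency-band features and mapped to a guessed mental state. The sampling rate, band definitions, thresholds, and labels are illustrative assumptions and do not reflect any particular device’s or vendor’s actual method.

```python
# Illustrative sketch only: a toy pipeline showing how raw EEG samples could be
# reduced to frequency-band "features" and mapped to a guessed mental state.
# The thresholds and labels are invented for illustration and do not reflect
# any real device, vendor, or clinical standard.

import numpy as np

SAMPLE_RATE_HZ = 256  # assumed sampling rate of a hypothetical consumer headset

# Conventional EEG frequency bands (approximate ranges, in Hz).
BANDS = {
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),
    "beta": (12.0, 30.0),
}

def band_powers(samples):
    """Compute average spectral power in each band for one channel of EEG."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    power = np.abs(np.fft.rfft(samples)) ** 2
    return {
        name: float(power[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

def guess_state(powers):
    """Toy rule: compare beta (often linked to focus) against alpha (relaxation)."""
    ratio = powers["beta"] / max(powers["alpha"], 1e-9)
    if ratio > 1.5:
        return "focused"   # the kind of inference an employer might want
    if powers["theta"] > powers["beta"]:
        return "drowsy"    # the kind of inference an insurer or advertiser might want
    return "relaxed"

if __name__ == "__main__":
    # Simulated one-second recording standing in for real neural data.
    rng = np.random.default_rng(0)
    fake_eeg = rng.normal(size=SAMPLE_RATE_HZ)
    features = band_powers(fake_eeg)
    print(features, "->", guess_state(features))
```

Even a crude rule like this illustrates how quickly raw neural signals, once collected, can be turned into sensitive inferences about attention, mood, or health.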

Additionally, brain-based technologies serve as the next frontier in workplace surveillance, an area full of risks as CDT has previously explained. Tech company Emotiv has promoted its EEG headphones as a solution to wandering attention at work, boasting that the product reads employees’ cognitive states in real time and provides this data to both employee and employer to boost workplace productivity. The use of brain data to assess employee productivity poses significant risks, including potential discrimination and the erosion of employee trust.

Finally, the increased collection of consumer neural data will supercharge targeted advertising. Advertisers already recognize the utility of using insights from neural data in designing and marketing products, a strategy known as “neuromarketing,” where a person’s brain activity is leveraged to inform marketing decisions. However, widespread use of brain-tracking technologies would enable advertisers to target individuals based on, for instance, unique responses to stimuli by combining data about what appears on your screen with data about your brain activity at that same moment. Such targeting would lead to the sale and commodification of brain data, a problematic extension of targeted advertising, which results in harms to consumers like manipulation, discrimination, and invasion of privacy.  
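As a simplified sketch of the pairing described above, the example below joins timestamped records of on-screen content with the brain-signal reading taken closest to the same moment. The field names, example values, and “engagement” scores are invented for illustration; they do not describe any real ad platform or headset.

```python
# Toy sketch of "neuromarketing"-style pairing: joining what was on screen with
# the brain-signal reading closest in time to it. Field names and the engagement
# score are invented and do not describe any real ad platform or device.

from bisect import bisect_left

# (timestamp_seconds, ad_or_content_shown)
screen_events = [(10.0, "sneaker ad"), (15.0, "news article"), (20.0, "travel ad")]

# (timestamp_seconds, engagement score derived from a headset), assumed pre-sorted
neural_readings = [(9.8, 0.2), (15.1, 0.9), (19.7, 0.4), (21.0, 0.6)]

def nearest_reading(timestamps, t):
    """Index of the neural reading whose timestamp is closest to t."""
    i = bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return min(candidates, key=lambda j: abs(timestamps[j] - t))

timestamps = [t for t, _ in neural_readings]
for t, content in screen_events:
    _, score = neural_readings[nearest_reading(timestamps, t)]
    print(f"{content!r} at t={t}s -> engagement {score}")
```

The privacy concern described above arises precisely because this join links a person’s involuntary physiological response to the specific content they were shown.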

Recognizing the potential risks associated with brain tracking data, policymakers have begun to respond. In April 2024, Colorado became the first state to pass legislation explicitly protecting neural data by expanding the scope of the Colorado Privacy Act to include “biological data,” which includes data generated by the brain, in the definition of “sensitive data.” California and Minnesota have each introduced similar legislation, marking a positive step in the recognition of the emergent privacy concerns around consumer neurotechnologies. The most recent version of the American Privacy Rights Act in the House of Representatives also included “neural data” in its definition of “sensitive data.”

These policies, though well-intentioned, don’t go far enough. Neural privacy advocate Nita Farahany points out that the Colorado law applies only when the biological data is collected for identification purposes. However, many companies developing consumer neurotechnologies are not aiming to identify individuals but instead to make inferences about their mental state, mood, and productivity, or perhaps to train artificial intelligence systems. This language therefore might make the law inapplicable to a wide swath of the commercial activity that it was intended to reach. 

Moving forward, it is important to further understand the risks presented by brain tracking technology and to respond to those risks accordingly. Without adequate protections, we risk ceding essential brain privacy and autonomy.

Update from Our CEO: CDT Leads Breakthrough Work on Gender Justice (June 28, 2024)
https://cdt.org/insights/update-from-our-ceo-cdt-leads-breakthrough-work-on-gender-justice/

This week marked the second anniversary of the Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization, the case that overturned Roe v. Wade and eliminated federal constitutional protections for abortion.

That ruling put a bright spotlight on the importance of digital privacy in defending gender justice.

This week, CDT published a series of resources assessing the landscape for health privacy two years after Dobbs. In that time, CDT has helped inform new state laws and federal regulatory protections, while leading our Dobbs Task Force, which brought together leading tech companies, reproductive rights groups, and health and privacy experts from around the country.

Already, we’ve received feedback from reproductive care providers that our Shield Law Guide will help them make informed decisions about their obligations in various states.

But that’s not all. 

On Tuesday, CDT, in partnership with the Cyber Civil Rights Initiative (CCRI) and the National Network to End Domestic Violence, announced a new multistakeholder working group tackling one of the thorniest challenges online today: the creation and distribution of Non-Consensual Intimate Imagery.

Our announcement comes in response to last month’s White House Call to Action to Combat Image-Based Sexual Abuse, which invited companies and other stakeholders to work together to address the profound harms of intimate image abuse. 

When sexual images are shared without consent — or even threatened to be shared — the impacts can be profound: financially, psychologically, and emotionally. This new working group brings together representatives of tech companies, trust and safety practitioners, experts in gender-based violence, organizations representing impacted communities, and digital rights experts to develop new solutions to this profoundly difficult challenge.

These efforts are a prime example of CDT’s unique role in making change by leveraging our wide-ranging expertise, our deep commitment to civil rights and civil liberties, and our ability to gather stakeholders together for productive, important conversations. 

Dobbs – A Two Year Retrospective (June 24, 2024)
https://cdt.org/insights/dobbs-a-two-year-retrospective/

Also authored by CDT Intern Ebie Quinn

I. Introduction

Two years ago, the Supreme Court decided Dobbs v. Jackson Women’s Health Organization, reversing prior court precedent under Roe v. Wade and its progeny that guaranteed constitutional protections for abortions. As a result of Dobbs, individual states were given the ability to decide how, and how much, to burden abortion practices and reproductive health care in general. States have responded to Dobbs in a variety of ways. While some states have expanded and codified access to abortion, others have severely restricted or prohibited abortion entirely. 

States that have criminalized or otherwise restricted abortions will seek to enforce those laws. One common way to prove alleged violations is by collecting data about the content of people’s text messages, their purchase history, or their visits to certain doctors. For instance, in 2022, law enforcement in Nebraska attempted to use private online communications to prosecute a mother for assisting her daughter in connection with an alleged abortion. Police involved in this investigation sent a warrant to Facebook and retrieved the contents of private messages between the mother and daughter for use in a criminal prosecution. Moreover, given the growing prevalence of medication abortion — and the ability to receive reproductive health services via telemedicine — enforcement of anti-abortion laws may increasingly rely on digital and electronic information.

The looming threat of criminal penalties has also undermined trust between patients and health care providers. Patients who worry that the information they share with doctors and hospitals may be used against them will be much less likely to be truthful and candid with their providers, which can result in lower quality and less beneficial health care. 

Medical privacy has been dramatically reshaped in the two years since Dobbs. This post describes these changes at both the state and federal level. Moving forward, CDT believes it is essential to ensure that patients have a full expectation of privacy when it comes to health care data – as well as the broad range of seemingly unrelated data that can be used to deduce health care activities – and that companies, which can be compelled to share private information in lawsuits and investigations, minimize the collection, storage, and sharing of sensitive health data in order to enhance users’ trust and privacy.

II. State Activity 

In the wake of Dobbs, several states have taken measures to protect sensitive health data. Some states have enacted privacy laws, either comprehensive or health-specific. Through legislative action and governor-issued executive orders, some states have also enacted “shield laws” which restrict the sharing of data related to reproductive health care in various forms, such as in response to an out-of-state investigation. By implementing these shield laws, states aim to protect the data of patients and providers within their jurisdiction, regardless of whether or not the patient is a state resident. CDT’s June 2024 Issue Brief, Two Years After Dobbs: An Analysis of State Laws to Protect Reproductive Healthcare Information from Interstate Investigations and Prosecutions, describes these laws in depth. 

Health Privacy Laws

In the two years following the Dobbs decision, states including Washington, Connecticut, and California enacted data privacy protections that either include or specifically address sensitive health data — as well as other forms of sensitive data that may be used to determine health status and activities.

Washington: My Health, My Data Act 

In 2023, Washington State did what many other states and the federal government have not: it passed a comprehensive health privacy bill, which went into full effect on March 31, 2024. Under the My Health, My Data Act, Washington residents have more agency and control over how their health data will be collected, used, and shared by companies. While this bill is not perfect, it’s an important model for lawmakers seeking to enact meaningful privacy protections. 

The legislation responds to countless instances in which data about a person’s health, including reproductive health data, has been collected, used, or shared in harmful ways. Key provisions of the bill stop companies from collecting or sharing consumer health data when that data is not necessary to provide a product or service that a customer has requested. These are strong limitations and are similar to those found in other legislative proposals, like the federal American Data Privacy and Protection Act (ADPPA) introduced in Congress in 2022, and the American Privacy Rights Act (APRA) introduced in Congress in 2024. Washington’s My Health, My Data Act also has robust mechanisms for people to access and delete their health data. 

Connecticut: Online Privacy Act 

Connecticut enacted Public Act 23-56, the Online Privacy Act, in order to strengthen protections for sensitive health data. This act introduces safeguards on the collection and storage of sensitive health data by businesses in the state. It requires that businesses allow consumers to view their personal data and have the option to delete it. The act explicitly includes information about reproductive health and gender-affirming care in its definition of “Consumer Health Data.” Importantly, the Online Privacy Act includes a prohibition on geofencing to track consumers and gather/send consumer data within 1,750 feet of reproductive health facilities. Both Nevada and New York have also implemented similar geofencing prohibitions. 
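For a sense of what a geofence around a facility means in practice, here is a minimal sketch of the underlying distance test: checking whether a reported device location falls within 1,750 feet of a given point. The coordinates below are hypothetical, and real geofencing systems are considerably more complex, but the core check looks roughly like this.

```python
# Minimal sketch of the kind of geofence check the Connecticut prohibition targets:
# deciding whether a reported device location falls within 1,750 feet of a facility.
# Coordinates and the overall flow are hypothetical stand-ins.

import math

FEET_PER_METER = 3.28084
GEOFENCE_RADIUS_FEET = 1750.0  # threshold named in Connecticut's Public Act 23-56

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in feet."""
    r_meters = 6_371_000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_meters * math.asin(math.sqrt(a)) * FEET_PER_METER

def inside_geofence(device_lat, device_lon, facility_lat, facility_lon):
    """True if the device location falls within the prohibited radius."""
    return haversine_feet(device_lat, device_lon, facility_lat, facility_lon) <= GEOFENCE_RADIUS_FEET

if __name__ == "__main__":
    # Hypothetical facility and device coordinates, for illustration only.
    clinic = (41.7658, -72.6734)
    nearby_device = (41.7670, -72.6720)
    print(inside_geofence(*nearby_device, *clinic))
```

The Connecticut provision described above targets exactly this kind of test when it is used to trigger data collection or targeted messaging near reproductive health facilities.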

California: California Consumer Privacy Act

The California Consumer Privacy Act (CCPA) sets baseline privacy protections for Californians, including the right to know what personal information a business collects, the right to delete that information (with some exceptions), and the right to opt out of the sale or disclosure of that information. The CCPA applies to for-profit companies in California that meet a threshold size of revenue, income, or number of customers. While it does not apply to health information captured by HIPAA (i.e., information held by HIPAA-covered entities such as medical providers and insurers), it does capture sensitive medical data that falls outside HIPAA’s scope (e.g., health information gathered by your smartphone or health tracking app). These types of comprehensive regulations enhance consumers’ ability to exercise control over their own data and mitigate potential data privacy risks post-Roe.

Shield Laws

Shield laws, at their core, aim to protect people’s health privacy by prohibiting entities who hold or can access people’s healthcare information from sharing such information in an investigation or prosecution under the anti-abortion laws of another jurisdiction. States have taken multiple approaches towards this goal, varying in application and scope: 

Government Officials 

The majority of state shield laws apply to state government officials, such as law enforcement and state courts. These laws prevent officials from assisting other states in abortion-related investigations or prosecutions made possible through Dobbs. This means that state judges and law enforcement are prohibited from issuing or executing subpoenas and other legal process on behalf of an out-of-state investigator, or aiding extraditions that further criminal abortion prosecutions and civil litigation. These restrictions help protect health care providers and recipients from state investigations or lawsuits that may seek to target them even when the care was lawful in the state where it was provided. 

Communication Service Providers 

States like Washington and California have sought to protect private messages and other digital information by enacting broader laws that apply to electronic communications service providers. These laws prohibit companies like Meta and Google from providing consumer communications and other data, such as web browsing information, for use in out-of-state abortion investigations or prosecutions. Under this form of shield law, personal messages sent to a physician or friend are protected from being shared in an anti-choice law enforcement action.

Medical Providers

Some states apply their shield laws to medical professionals, organizations, and electronic health networks. These laws restrict medical providers or other entities that hold medical data from sharing that data for use in out-of-state investigations. These laws take a similar approach to the U.S. Department of Health & Human Services’ recent update to the HIPAA Privacy Rule, which prohibits HIPAA-covered entities from disclosing protected health information to investigators when the healthcare is lawful under the circumstances in which it was provided. This new HIPAA Rule is discussed in more detail below.  

Out-of-State Care 

In the wake of the Dobbs decision, an increasing number of patients are turning to remotely-prescribed abortion medication. In response, some states have drafted shield laws to protect individuals within their jurisdiction who provide reproductive health care services to people who may be located outside their state. These laws recognize that telemedicine providers, and even in-person providers issuing a prescription for abortion medication like mifepristone, may not know the geographic location where their patient takes the series of pills for a self-managed abortion. These laws protect such providers and prohibit them from sharing patients’ information to support an out-of-state lawsuit or investigation. It seems likely that these laws in particular will be challenged by anti-abortion officials, with some challenges potentially reaching the Supreme Court.

Gender-Affirming Care

Finally, some states have protected information relating to gender-affirming care in their shield laws, in addition to information about reproductive health care. These provisions similarly serve to protect doctors and patients from out-of-state investigations by prohibiting disclosure of sensitive patient data related to gender-affirming care.    

The below chart provides a summary of the 19 shield laws currently in effect; CDT has also prepared a detailed analysis of these shield laws in each of the states in which they’ve been enacted.

State-by-State Survey*

State | Restricts communication service providers | Restricts judges’ actions (e.g., issuing a search warrant) | Restricts state officials’ actions | Restricts medical professionals, health info exchanges, & e-health networks | Protects gender-affirming care
California | ✔ | ✔ | ✔ | ✔ | ✔
Colorado | ✖ | ✔ | ✔ | ✖ | ✔
Conn. | ✖ | ✔ | ✔ | ✖ | ✔
Delaware | ✖ | ✔ | ✖ | ✔ | ✖
Hawaii | ✖ | ✔ | ✔ | ✔ | ✖
Illinois | ✖ | ✔ | ✖ | ✖ | ✔
Maine | ✖ | ✔ | ✔ | ✔ | ✔
Maryland | ✖ | ✔ | ✔ | ✔ | ✔*
Mass. | ✖ | ✔ | ✔ | ✖ | ✔
Michigan | ✖ | ✖ | ✔ | ✖ | ✖
Minn. | ✖ | ✔ | ✔ | ✖ | ✔
NJ | ✖ | ✖ | ✔ | ✔ | ✔
NM | ✖ | ✔ | ✔ | ✔ | ✔
NY | ✔ | ✔ | ✔ | ✔ | ✔
Nevada | ✖ | ✖ | ✔ | ✖ | ✖
Oregon | ✖ | ✔ | ✔ | ✖ | ✔
RI | ✔* | ✔* | ✔ | ✖ | ✔*
Vermont | ✖ | ✔ | ✔ | ✖ | ✔
Wash. | ✔ | ✔ | ✔ | ✖ | ✔

✔* = legislation that has passed the state legislature and awaits the governor’s signature

III. Federal Protections

In addition to the state action prompted by Dobbs, the decision prompted renewed calls for comprehensive and health-specific privacy protections at the federal level. In the absence of legislative movement, executive agencies used their existing authority to protect the data of those seeking and providing reproductive health care. 

Key actors included the U.S. Department of Health and Human Services Office of Civil Rights (HHS OCR) and the Federal Trade Commission (FTC), which issued guidance and brought new enforcement actions against entities that failed to protect people’s private health information.

HHS Office of Civil Rights

Earlier this year, the Department of Health and Human Services’ Office of Civil Rights took a crucial step in protecting sensitive reproductive health data with its new HIPAA Privacy Rule to Support Reproductive Health Care Privacy, which updated the Privacy Rule it issued under the Health Insurance Portability and Accountability Act (HIPAA). Specifically, the rule prohibits covered entities from complying with requests and legal process (like subpoenas, court orders, and warrants) involving reproductive health care for use in an investigation or prosecution if the care was lawful in the state where it was provided. Covered entities may only comply with requests for data related to reproductive health care if the request is accompanied by a signed attestation stating that the data will not be used to investigate or prosecute anyone under abortion-related laws.

In issuing this rule, HHS OCR sought to respond to increasing mistrust in the medical system post-Dobbs, as patients fear their medical data might be shared without their knowledge, and even used against them in court. The HHS OCR rule helps to create an ecosystem in which patients can safely seek out reproductive health care and share information with their doctor confident that their data will be kept private.

This rule also seeks to empower providers of reproductive health care. Previously, the HIPAA Privacy Rule operated on a permissions basis, in which providers had discretion when responding to requests by law enforcement for patient data related to reproductive care. That discretionary model, however, placed the burden of discerning and fulfilling their obligations on doctors and providers, who often felt pressured to comply with requests from law enforcement. In shifting the model from permissions to prohibitions, HHS OCR simplifies the decision-making process for health care providers, empowering them to protect their patients.

In addition to the new final HIPAA rule, HHS OCR has taken additional actions to keep health data private. In December 2022, HHS OCR released a Bulletin highlighting the important privacy obligations under HIPAA that health providers (like doctors’ offices and hospitals) must follow when using apps and websites. OCR’s bulletin is designed to address an ongoing problem where data shared by patients with their health providers is also being inappropriately shared with advertisers. There are ample news accounts of health providers’ services, like patient portals, containing tracking technologies, such as cookies or “beacons,” that can collect and share people’s health information with unrelated third parties to be used for purposes such as targeted advertising. This bulletin has been subject to legal challenges and in June of 2024, a federal judge in Texas found portions of the guidance unlawful. At the time of writing, it is unclear how OCR plans to proceed in the wake of this ruling.
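To illustrate the mechanism the OCR bulletin targets, the sketch below shows the general shape of the request a third-party tracking pixel or “beacon” fires when a page loads, carrying the page URL and title along with a persistent identifier. The tracker domain, parameter names, and page path are hypothetical stand-ins, not any specific vendor’s actual tracker.

```python
# Illustrative sketch of how a tracking "beacon" on a provider's website can leak
# health-related context to a third party. The tracker domain, parameter names,
# and page path below are hypothetical stand-ins, not any real vendor's API.

from urllib.parse import urlencode

def build_beacon_url(page_path, page_title, visitor_id):
    """Assemble the kind of request a third-party pixel fires when a page loads."""
    params = {
        "id": visitor_id,   # persistent identifier set by a cookie
        "dl": f"https://portal.example-hospital.com{page_path}",  # page the patient viewed
        "dt": page_title,   # page title, often descriptive of the service
    }
    return "https://tracker.example-adtech.com/collect?" + urlencode(params)

if __name__ == "__main__":
    # On a patient portal, the URL and title alone can reveal the nature of a visit.
    print(build_beacon_url(
        page_path="/appointments/obgyn/schedule",
        page_title="Schedule an OB-GYN appointment",
        visitor_id="abc123",
    ))
```

Because the page path and title alone can describe the service a patient sought, this kind of request can amount to a disclosure of health information to an unrelated third party.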

HHS OCR has also partnered with the U.S. Food and Drug Administration and the Federal Trade Commission to release a Mobile Health App Interactive Tool. This interactive tool is designed to assist mobile health app developers in identifying which federal laws and regulations may apply to their apps. Checking this tool early in the development of consumer-facing products, well before any digital health app is released, can ensure apps are in compliance with applicable privacy laws.

Federal Trade Commission

The Federal Trade Commission (FTC) is also using existing authority to address privacy concerns after the Dobbs decision. The FTC has used its authority in several ways to protect health data, including through rulemaking and through its enforcement actions against particular companies.

Rulemaking

On May 30, 2024, the FTC published the final version of the Health Breach Notification Rule (HBNR), which sets forth the protocol to follow in the case of a breach of health data. Since the FTC enacted the initial HBNR in 2009, the number of health tracking apps has dramatically increased and Dobbs has created new health privacy risks, making it critical for the FTC to clarify that the HBNR applies to this novel form of data collection. The final rule requires entities that manage personal health records (but are not subject to HIPAA) to notify the FTC, the consumer, and in some cases the media following a breach of personally identifiable health data. The update of the rule clarifies its applicability to health apps, and strengthens the notification mechanisms in this space.

Enforcement 

Additionally, the FTC has initiated a range of enforcement actions, including against GoodRx, Easy Healthcare, and Kochava. This increase in consumer protection enforcement sends a message similar to that of the updated HBNR: if companies plan to handle data related to reproductive health in a post-Dobbs world, they must proceed with caution. 

  • GoodRx: In February 2023, the FTC brought an enforcement action against GoodRx, an online prescription company, for disclosing customer health data to Google, Facebook, and other third-party advertisers. This data was shared without the consent of consumers, and GoodRx did not provide notice of the disclosure. The enforcement action bars GoodRx from sharing this data and requires GoodRx to pay a $1.5 million penalty.
  • Easy Healthcare: In May 2023, the FTC brought a similar enforcement action against Easy Healthcare Corporation in connection with its fertility tracking app, Premom. The company collected sensitive health data related to menstruation and ovulation and improperly disclosed the information to third-party advertisers in violation of its stated privacy policies and the HBNR. This enforcement action resulted in a $100,000 civil penalty.
  • Kochava: In February 2024, the FTC undertook an enforcement action against Kochava, a geolocation data broker, which the FTC claims sold geolocation data for millions of individuals that might connect them to sensitive health locations. Particular locations of concern include “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” This litigation is ongoing, and has significant privacy implications broadly as well as for the protection of data in a post-Dobbs ecosystem.

Legislative Proposals 

There have been some federal legislative efforts to increase protections for private health information in the wake of the Dobbs decision. For example, bills like the federal My Body My Data Act of 2023 (MBMDA) seek to restrain overreaching data collecting practices, limit how long companies can retain personal reproductive or sexual health information, and give people clear ways to access and delete their health data. Its core provisions would prevent companies and covered entities from collecting, retaining, or using reproductive or personal health information without express written consent unless it is “strictly necessary to provide a product or service that the individual to whom such information relates has requested from such regulated entity.” 

In addition to more targeted legislative proposals, a comprehensive federal privacy bill that includes robust limitations around corporate data practices, especially around sensitive data categories like health data, would go a long way towards keeping all Americans’ data private. The American Data Privacy & Protection Act (2022) and the American Privacy Rights Act (2024) each marked significant bipartisan efforts to establish long-overdue federal privacy protections, but their progress has been stalled by opposition. Meaningful collection, use, retention, and sharing limits will result in less data overall being collected and retained, including health data. Less data and stronger limitations on data usage will inevitably result in fewer privacy harms.

IV. Moving Forward

Law enforcement and civil litigants have new powers under state anti-abortion laws, and they will seek data to help them build their cases. While the new state and federal protections discussed above represent positive steps, much more work is necessary to preserve individuals’ rights to access health care and maintain their privacy. This section details some additional actions that governments and companies should take to ensure individuals have agency and choice over their own health care and enjoy powerful privacy protections.

Government actions

First, states should continue to look to one another and iterate on privacy protective approaches like those taken in California, Washington, and Maryland. Moreover, state legislators should look to CDT’s Field Guide to Blocking Statutes: Limiting Interstate Abortion Investigations. This guide illustrates how states can most effectively create or improve shield laws to avoid complicity in enforcement of out-of-state abortion bans and create an environment where patients feel protected when accessing lawful care. It notes that shield law statutes, if adopted, must be carefully tailored to the specific laws of the states in which they are passed.

In addition to legislative efforts at the state level, additional steps can be taken at the federal level. Given the long odds of federal legislative solutions, it remains incumbent on executive agencies like the FTC and HHS to continue to issue guidance to companies and healthcare providers and rigorously enforce existing laws to protect people’s privacy rights. 

For example, the FTC should continue to use its authorities to ensure consumers’ health data is kept private and not shared in ways that are unknown and unwanted – including if it moves forward with a rulemaking on commercial data practices in the coming year. Moreover, while the HHS update to the HIPAA Privacy Rule is a positive step forward in the fight for securing essential reproductive rights, to best capitalize on its protections it is essential for HHS OCR to educate health care providers and insurers about its requirements. As an initial action item, HHS OCR stressed that providers must update their patient privacy notices, but that must be understood merely as a first step. Education and information sharing will play an outsized role in increasing awareness of the shift from a model in which doctors had to decide whether to share data, to one in which reproductive data sharing is generally prohibited. Doctors and providers must be aware of this change so that they can most effectively protect their patients.

Additionally, as the new HHS rule is implemented, it will be necessary to clarify its interaction with state laws such as Washington’s My Health My Data Act and state shield laws. The state shield laws are more specific and nuanced than the federal rule, which allows them to protect more data than is currently captured in the HHS rule. The HHS update of HIPAA therefore sets a floor for reproductive health data privacy on which the states can build by adopting more comprehensive protections in the health data sphere. That said, this remains a potentially confusing area which may make compliance more difficult. It will be helpful for HHS to issue further guidance on the overlapping state and federal compliance requirements in order to best maximize individual protections. 

Finally, the Biden Administration should take action to restrict federal surveillance and investigative resources from being used for abortion investigations. As it stands, the federal government provides assistance to state and local law enforcement to support surveillance and complex investigations. This federal support could be used by hostile states to monitor reproductive health activities, through the federal provision of digital forensic services, assistance in acquiring private communications and data, and storage of sensitive information. While the Administration has taken a strong position supporting reproductive rights, it should ensure that federal funds and resources aren’t directed toward investigations of individuals seeking reproductive care.

Company actions

In the post-Roe era, companies should play an active role in protecting their customers’ and users’ private information. The best way to prevent data that companies collect from being used against abortion-seekers and -providers is simply not to have that data in the first place. 

There are a host of steps the private sector should take to protect that information, captured in CDT’s 2023 report, Data After Dobbs: Best Practices for Protecting Reproductive Health Data. In particular, companies should be proactive in minimizing the risk that their data will be used to jail someone for seeking or providing an abortion by limiting the amount of data they collect and store. Moreover, this effort cannot be limited to obvious health data, such as biometrics, health conditions, and health tests. It should extend to all the data companies collect, retain, and share, because information that may not appear to be health-related can nonetheless reveal a person’s reproductive health conditions and choices when used in certain ways or combined with other data points. Such data includes location data, browsing history, and search queries.

By being diligent and knowing what data is needed for products and services, and by declining to collect additional data, companies can help reduce the likelihood they will have to respond to law enforcement or civil litigants’ requests for data in abortion-related cases. In instances where companies must collect personal data, companies can and should retain that data only for as long as necessary to perform the task that the data was originally collected for. And then it should be deleted. For more, see CDT’s Data After Dobbs report.
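As a simple sketch of that retention practice, the example below deletes records once they are older than the retention period tied to the purpose for which they were collected. The schema, purposes, and retention periods are illustrative assumptions, not recommended values.

```python
# Minimal sketch of purpose-bound retention: data is kept only as long as the task
# it was collected for requires, then deleted. The schema and retention periods
# here are illustrative assumptions, not recommended values.

import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods, keyed by the purpose the data was collected for.
RETENTION = {
    "order_fulfillment": timedelta(days=30),
    "fraud_review": timedelta(days=90),
}

def purge_expired(conn, now=None):
    """Delete rows whose age exceeds the retention period for their purpose."""
    now = now or datetime.now(timezone.utc)
    deleted = 0
    for purpose, keep_for in RETENTION.items():
        cutoff = (now - keep_for).isoformat()
        cur = conn.execute(
            "DELETE FROM user_data WHERE purpose = ? AND collected_at < ?",
            (purpose, cutoff),
        )
        deleted += cur.rowcount
    conn.commit()
    return deleted

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE user_data (purpose TEXT, collected_at TEXT, payload TEXT)")
    old = (datetime.now(timezone.utc) - timedelta(days=200)).isoformat()
    conn.execute("INSERT INTO user_data VALUES ('fraud_review', ?, 'example')", (old,))
    print(purge_expired(conn), "expired records deleted")
```

Tying retention to purpose in this way means that, when a legal demand later arrives, older data simply no longer exists to be produced.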

V. Conclusion

The Supreme Court’s decision in Dobbs ended the fundamental right to an abortion and has introduced serious threats to data privacy. Without Roe as a safeguard, law enforcement’s desire to access private health data will be supercharged. Implementing strong privacy protections is paramount. While legislation on the federal level remains improbable, executive agency action and state legislation protecting reproductive rights have served as important preliminary steps in the Dobbs era. But ultimately, companies must pick up the slack and adopt commercial practices that prioritize the privacy and fundamental rights of their customers.

Report – Two Years After Dobbs: An Analysis of State Laws to Protect Reproductive Healthcare Information from Interstate Investigations and Prosecutions (June 24, 2024)
https://cdt.org/insights/report-two-years-after-dobbs-an-analysis-of-state-laws-to-protect-reproductive-healthcare-information-from-interstate-investigations-and-prosecutions/


Also authored by CDT Interns Irene Kim and Divya Vatsa

Following the Supreme Court’s June 2022 decision in Dobbs to overturn Roe v. Wade, some states have banned or restricted abortion access, while others have moved to protect against criminal prosecutions stemming from such bans. 

This intense split has created questions about how patients and providers located in one state will be impacted by the laws of another, especially when law enforcement seeks to compel disclosure of sensitive electronic information, such as private online messages, related to abortion care. Over the past two years, numerous state legislatures have enacted legislation, and state governors have issued executive orders (hereinafter collectively referred to as “shield laws”) to protect providers and recipients of reproductive health services from out-of-state investigations. In many cases, these laws also shield information about gender-affirming care in the wake of growing anti-trans state bills across the country. 

The breadth of these shield laws varies state by state. Most state shield laws bar state government officials — including law enforcement — and state courts from assisting out-of-state investigations and prosecutions of protected healthcare activities. For example, a state judge could be prohibited from domesticating an out-of-state subpoena seeking location data showing that an individual visited an abortion clinic, or local police could be prohibited from aiding extradition of a doctor to a state where they’ve been criminally charged with performing unlawful abortions. Other state shield laws go further and bar private companies — such as providers of communication services and companies involved in the delivery of health care — from disclosing protected health data, even when they receive warrants, court orders, and subpoenas demanding such disclosure.

This document examines the state measures that have been implemented regarding reproductive health care information, reviewing all 22 states that the Guttmacher Institute (a leading reproductive health research organization) currently lists as providing at least some protections for abortion. This work builds on a Field Guide that the Tech Accountability and Competition Project at Yale Law School developed for CDT to help state legislators draft shield laws.

Read the full report.

CDT Comments to NTIA Task Force on Kids Online Health & Safety Urge Protection of Rights (November 17, 2023)
https://cdt.org/insights/cdt-comments-to-ntia-task-force-on-kids-online-health-safety-urge-protection-for-rights/

The Center for Democracy & Technology (CDT) submitted these comments in response to the National Telecommunications and Information Administration’s (NTIA) request for comments regarding efforts to protect youth mental health, safety, and privacy online.

Protecting children online is a critical priority. But that goal must be pursued in a manner that does not cause more harm than it brings benefits. Some proposals, while well-intentioned, may jeopardize the safety and well-being of the youth they are intended to protect and undermine their rights and those of adults.

These comments discuss four ways in which initiatives to protect children can undermine the safety, well-being, and rights of all users, adults and children alike, and impede services offered by educational institutions:

  • Approaches to children’s safety should not legitimize or facilitate content-based restrictions. Many approaches to protecting young children rest on the premise that certain types of content are harmful to children. Studies are divided regarding the impacts of online content on children and what the right solutions to protect children should be. What is certain is that baking this principle into law and imposing a legal obligation on online service providers to filter out potentially harmful content under the threat of liability will create incentives for online services to err on the side of caution and over-filter content, undermining the right to free expression and minors’ access to information. In past instances, for example, filtering technology has led to lawful content related to LGBTQ+ identity being over-removed, impeding all users’ ability to seek important information.
  • Expanding surveillance of young people by parents and through the use of school monitoring systems will undermine young people’s rights, particularly teens’ right to privacy. Teenagers, especially older teens aged 15 and 16, have a reasonable need for privacy and private channels through which to access and exchange information. However, mandating the availability of parental surveillance mechanisms and student activity monitoring systems undermines young people’s ability to seek information securely and leads to decreased teen independence, which researchers say correlates with negative mental health consequences. Additionally, surveys of educators and students conducted by CDT found that overbroad use of student activity monitoring systems led to increased encounters with law enforcement for young people of color and inadvertent outings of LGBTQ+ teens.
  • Explicit or implicit age verification or assurance requirements to determine which users are children can undermine the privacy of minors and adults by mandating more data collection and potentially violating the right to speak and access information anonymously. Approaches to estimate and/or verify the ages of all users to identify child users will require further data collection and processing for children and adults alike and eliminate the ability for all users to seek information anonymously. Further, age estimation and identity verification systems can have discriminatory effects. For example, facial analysis methods to estimate age may perform worse on faces with different morphologies due to cognitive or physical disabilities, trans and non-binary faces, and non-white faces.
  • Efforts to protect children should account for the unique needs of educational institutions. Schools and other educational institutions, including vendors of education technology, should not be treated the same as commercial actors so as not to inadvertently undermine educational services. Subjecting education providers and local education agencies to certain privacy provisions such as data deletion rights and data minimization protections may inadvertently undermine education service delivery by, for example, enabling parents to delete their children’s grades or attendance records. 

NTIA should recommend and advance proactive approaches to protecting young people online that avoid these pitfalls. These include:

  • Establishing comprehensive federal privacy protections to protect children as well as adults;
  • Promoting the development and deployment of user tools that empower young people online and help them shape their online experiences;
  • Investing in more research to better understand the harms different groups of minors face online and the causes of those harms; and
  • Developing dynamic and age-appropriate education and digital literacy initiatives to equip young users with the knowledge and responsible use practices to help them navigate the digital ecosystem.

Read the full comments here.

With contributions from Nick Doty, Maddy Dwyer, Kate Ruane, and Elizabeth Whatcott.

EU Tech Policy Brief: September 2023 (September 20, 2023)
https://cdt.org/insights/eu-tech-policy-brief-september-2023/

Also authored by CDT Europe’s Vânia Reis and Rachele Ceraulo

This is the September 2023 issue of the Centre for Democracy & Technology Europe’s monthly Tech Policy Brief. It highlights some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and gives CDT’s perspective on them. Our aim is to help shape policies that advance our rights in a digital world. Please do not hesitate to contact our team in Brussels: Iverna McGowan, Asha Allen, and Ophélie Stockhem.

EU Leaders Should Prohibit Biometric Mass Surveillance in EU AI Act 

Negotiations between the European institutions on the Artificial Intelligence (AI) Act are in full swing, but eagerness to move swiftly might be hampered by some contentious issues, including whether the legislation will prohibit remote biometric surveillance. In light of the ongoing institutional negotiations, CDT Europe’s new blog post outlines which uses of biometric technologies ought to be outlawed or carefully regulated in the upcoming EU AI Rulebook.

The EU is leading the world on regulation of AI, and it must not miss this opportunity to set high standards for human rights and an ethical approach to AI regulation. We call on the EU institutions to prohibit mass surveillance through indiscriminate and arbitrary uses of biometric technologies, given the unacceptable risks to human rights. We also advocate for non-mass surveillance uses of biometric data to be heavily regulated, and permitted only on a case-by-case basis under a robust regulatory regime that ensures transparency, proportionality, oversight, and redress.

How Can the DSA Promote Responsible, Rights-Respecting Business Conduct? 

On 25 August, the Digital Services Act (DSA) came into force for providers of “very large online platforms” and “very large search engines” such as Facebook and Google Search. This means that these companies must now fully comply with the law, including the additional obligations set for the largest online platforms. These platforms’ obligations include performing annual risk assessments on the potential harms and societal impacts of their products and services, mitigating those risks, and being subject to independent audits — thereby ensuring greater transparency and accountability.

To mark this important milestone, CDT Europe and the United Nations’ Human Rights B-Tech Project published a new blog post analysing how the DSA’s provisions on risk assessments, transparency, and stakeholder engagement compare with the UN’s Guiding Principles on Business and Human Rights (UNGPs), the gold standard for rights-respecting corporate responsibility.

As we lay out in the blog post, the European Commission should align the DSA with the UNGPs by more clearly describing what constitutes a “systemic risk”, and create guidance as to how companies are expected to comply with their human rights risk assessment obligations. There must be robust and comprehensive stakeholder engagement, as a way to support the monitoring of implementation and enforcement, and hold platforms accountable to their due diligence and transparency obligations. Finally, companies should be transparent and accurate in making information public, so that progress on content moderation can be tracked over time.

Exporters of Dual-Use Items Must Clarify Their Intended Use 

The global transfer and sale of digital surveillance technologies brings significant human rights risks, for example when those technologies are used to monitor and suppress journalists and members of civil society. Legal mechanisms — including the new EU Dual Use Regulation — for controlling dual-use exports (goods and technology that can be used for both civil and military applications) are one avenue for preventing some of the problematic consequences of those sales. CDT Europe emphasised in comments, though, that the guidelines for implementing the Regulation as currently proposed would create a series of unintended loopholes. 

Therefore, the guidelines should be revisited to include technologies – such as facial and emotion recognition technologies – used for both covert and overt cyber-surveillance. They must place a clearer obligation on exporters to take into account the human rights situation in a given country, and whether there is a risk that these items will be used for cyber-surveillance. 

Exporters should also take full stock of recent developments — for example, if a State has recently been found to engage in mass surveillance of human rights defenders and journalists, it follows that any cyber-surveillance import would be at high risk of further violations. The guidelines should also recommend assessment of dual-use exporters’ corporate policies and practices, in relation to the UNGPs and OECD guidelines for multinational enterprises. 

CDT Europe additionally stressed the need for further actions to halt the export of cyber-surveillance equipment for the purpose of unlawful surveillance and human rights violations.

🗞 Press Corner 

Concerns Over AI-Based Political Repression in Gulf States 

  • Radio Télévision Suisse (RTS), Tout un monde (podcast in French), "Purchase of microprocessors by the thousands: authoritarian regimes are also interested in AI": CDT Europe Director Iverna McGowan joined the podcast, and commented, "We are dealing with a country where human rights are very weak, where we know that there has been a long history of serious human rights violation, including the suppression of civil society, like investigative journalists, bloggers, and defenders of human rights. Therefore, we should be worried about the way that artificial intelligence can be instrumentalized and militarised by such regimes on an even larger scale than is already happening."
  • Financial Times, "Saudi Arabia and UAE race to buy Nvidia chips to power AI ambitions": "Human rights defenders and journalists are frequent targets of government crackdowns [in the UAE and Saudi Arabia]. Pair this with the fact that we know how AI can have discriminatory impacts, or be used to turbocharge unlawful surveillance. It’s a frightening thought," Iverna McGowan told the Financial Times.

What’s Next for the DSA Regulation? 

  • Euronews,Online platforms targeted as the EU’s biggest ever shake-up of digital rules kicks in: Asha Allen, CDT’s Advocacy Director for Europe, Online Expression & Civic Space, told Euronews the DSA gives users more control: “Users will now have more transparency on how content moderation decisions are made. They will have more choice regarding the content that they engage in…. There will be more mechanisms for complaints and mechanisms for redress for individual users.”
  • Ars Technica, "Big Tech isn’t ready for landmark EU rules that take effect tomorrow": Asha Allen warns in the article, "Quite simply, without the meaningful engagement of advocates in the implementation and enforcement of the entire DSA, the potentially groundbreaking provisions we have collectively worked so diligently to obtain in the text won’t come to fruition." The article goes on to quote Iverna McGowan, who elaborates: "To avoid the DSA becoming a paper tiger, it will be crucial that national level enforcement bodies are independent and well-resourced, that civil society be given a formal role in enforcement oversight, and that there be careful attention to maintaining a public interest focus on questions such as the foreseen auditing of algorithms."
  • Deutsche Welle, "What impact will the EU’s Digital Services Act have?": "Enforcement should be rigorous," Iverna McGowan told DW. "But to be rigorous in practice, a number of things have to happen: Firstly, we believe that civil society should have a formal role in overseeing the implementation because, obviously, civil society has a level of expertise and independence. And the other point would be that we need to see adequate resources at a national level for the different agencies that will have enforcement powers and that they also be independent in practice."

Don’t forget to check out CDT’s publications for this month, and to sign up for CDT Europe’s AI newsletter!

Interpreting California’s Reproductive Health Information Shield Law: The ‘In California’ Limitation (September 6, 2023)
https://cdt.org/insights/interpreting-californias-reproductive-health-information-shield-law-the-in-california-limitation/

The post Interpreting California’s Reproductive Health Information Shield Law: The ‘In California’ Limitation appeared first on Center for Democracy and Technology.

By Graham Streich, Legal Intern at CDT’s Security & Surveillance Project

Last year, California enacted a unique shield law for reproductive rights, AB 1242, stipulating that California providers of electronic communications services “shall not, in California, provide records, information, facilities, or assistance” (emphasis added) in response to legal process issued by law enforcement in another state in connection with an investigation of an abortion which, if performed in California, would be lawful under California law. 

California’s innovative abortion data privacy protections will almost certainly be the subject of litigation in the future, either in California or in a state with abortion bans from which law enforcement issues such legal process.

One issue that may arise in such litigation is what it means for companies to provide records, information, facilities, or assistance “in California.” The “in California” requirement may have been included to make the law more likely to pass constitutional scrutiny by requiring a clear nexus between California and what companies withhold against out-of-state warrants (“records, information, facilities, or assistance”), in addition to the fact that the company is incorporated or headquartered in California.

The prohibition on the provision of “assistance” in California seems relatively clear:  If company personnel who access the company’s user data in response to a law enforcement request — along with legal staff and other personnel who oversee such responses and turn over data to law enforcement — are located in California, then the company would be providing assistance “in California” when responding to a law enforcement request.  Such assistance would be prohibited in the case of an out-of-state request in connection with an investigation of an abortion that would be lawful in California.  Companies headquartered in California often will locate the relevant personnel in California as a matter of course.  

In some cases, however, the relevant personnel may be located outside California (e.g., personnel are located remotely in this post-pandemic age).  Nonetheless, by its terms, the shield law still applies if the relevant “records, information, [or] facilities” are provided in California.  Under basic canons of statutory construction, those words must be given meaning and not be rendered superfluous.  That language could mean that AB 1242 protections apply when responsive records and information are stored on servers physically located “in California” and the act of disclosure occurs there.  In somewhat analogous circumstances, in Microsoft v. United States—a case involving government demands for data that Microsoft stored in Ireland but could access in the United States—the Second Circuit found the relevant location was where Microsoft stored the data. 

In contrast, a federal district court in Wisconsin described the data’s location as intangible and found the relevant location to be where the provider disclosed the data to the government. A California district court maintained that electronic data is so prone to movement that its location might not be effectively tied to any one jurisdiction, and similarly focused on the location of the disclosure to the government. These differences were never resolved, as the Supreme Court vacated the Second Circuit’s decision in the Microsoft-Ireland case as moot once Congress passed the CLOUD Act in 2018.

These unresolved issues could limit how broadly AB 1242 protects users if the company personnel providing assistance are located outside of California. Even if user data is stored in California, a state demanding data for abortion investigations could argue (following the Wisconsin court) that the data’s location is intangible, or (following the California court) that data moves so rapidly and unpredictably that its location should not be a factor for courts, and that the focus should instead be on where the data is provided to law enforcement. Such arguments might have the effect of rendering the statutory reference to records or information superfluous, but if they were to prevail, the location of data in California might not be sufficient to trigger AB 1242.

Assistance “in California” may be found in circumstances other than those in which the act of data disclosure to law enforcement occurs in California. For example, if the data is stored in California and the company copies it there and moves the copy to another state in which the act of disclosure occurs, the act of copying the data in California would arguably be regarded as “assistance” provided in California. Likewise, other compliance-related activities in California that do not include the act of disclosure but are undertaken to prepare for making the disclosure may constitute “assistance” provided in California. For example, checking in California whether the company has information responsive to a law enforcement disclosure demand may constitute “assistance” in California regardless of whether the data is stored there.

California-headquartered technology and telecommunications companies should locate the personnel who respond to law enforcement requests in California. They should also ensure that the activities they undertake to prepare disclosures to law enforcement occur in California. That should assure users that their data receives AB 1242 protections, because “assistance” to law enforcement would be provided in California. Companies should also store relevant user data in California, either as an alternative or, ideally, in addition, which could provide another avenue for applying the statutory protections.

The California legislature should also evaluate whether the “in California” requirement is necessary. Washington has a shield law that, while similar in key respects to the California law, makes no distinction based on where assistance is provided or data is stored, meaning the Washington law’s protections apply even if data is stored and assistance is provided out of state, so long as the company is incorporated or based in Washington. Users should advocate that the California legislature amend its law to eliminate the “in California” requirement for assistance or the location of data, and require only that the company in question be headquartered or based in California.

CDT Comments to CFPB Lay Out Data Broker Harms That Should Be Held Accountable https://cdt.org/insights/cdt-comments-to-cfpb-lay-out-data-broker-harms-that-should-be-held-accountable/ Wed, 26 Jul 2023 16:29:19 +0000 https://cdt.org/?post_type=insight&p=99444 Data brokers are making people’s personal data available to an expansive network of third parties. This widespread data sharing increases risks to people’s physical safety and data security, and it compromises people’s access to insurance, mental health support, and other vital resources.  The Consumer Financial Protection Bureau (CFPB) issued a request for input to better understand the breadth […]

The post CDT Comments to CFPB Lay Out Data Broker Harms That Should Be Held Accountable appeared first on Center for Democracy and Technology.

Data brokers are making people’s personal data available to an expansive network of third parties. This widespread data sharing increases risks to people’s physical safety and data security, and it compromises people’s access to insurance, mental health support, and other vital resources. 

The Consumer Financial Protection Bureau (CFPB) issued a request for input to better understand the breadth of the data broker ecosystem and its impacts on consumers, and to examine how the CFPB can apply its existing authorities to the harms that data brokers cause to consumers. To inform the CFPB’s efforts, CDT’s comments:

  • Describe data brokers’ practices with respect to sourcing, selling, or otherwise sharing financial data, worker data, health-related data, location data, and other consumer data;
  • Explain the limitations of certain measures that are supposed to protect consumers from these practices; and
  • Discuss how the CFPB should clarify the application of the Fair Credit Reporting Act to data brokers to minimize data sharing.

Read the full comments here.

CDT & ACLU Urge Meta Oversight Board to Protect Speech in Abortion-Related Cases https://cdt.org/insights/cdt-aclu-urge-meta-oversight-board-to-protect-speech-in-abortion-related-cases/ Thu, 29 Jun 2023 21:00:00 +0000 https://cdt.org/?post_type=insight&p=99213 Also authored by CDT Intern Clare Mathias The Center for Democracy & Technology (CDT) and the American Civil Liberties Union (ACLU) have provided comments to the Meta Oversight Board on three cases in which users appealed to restore their posts related to abortion in the United States. In the comments, we explain why Meta should […]

The post CDT & ACLU Urge Meta Oversight Board to Protect Speech in Abortion-Related Cases appeared first on Center for Democracy and Technology.

Also authored by CDT Intern Clare Mathias

The Center for Democracy & Technology (CDT) and the American Civil Liberties Union (ACLU) have provided comments to the Meta Oversight Board on three cases in which users appealed to restore their posts related to abortion in the United States.

In the comments, we explain why Meta should refine the hostile speech classifier and update its guidance to content moderators to ensure that posts around abortion and other political topics that involve the term “kill” but that do not incite violence are not removed.

We believe this will improve users’ ability to engage in meaningful discussions on Meta’s services, including those about reproductive rights and abortion access.

Read the full comments.

Tech Talk: Best Practices for Protecting Reproductive Health Data — Talking Tech w/ CDT’s Andrew Crawford https://cdt.org/insights/tech-talk-best-practices-for-protecting-reproductive-health-data-talking-tech-w-cdts-andrew-crawford/ Thu, 29 Jun 2023 16:20:18 +0000 https://cdt.org/?post_type=insight&p=99162 CDT’s Tech Talk is a podcast where we dish on tech and Internet policy, while also explaining what these policies mean to our daily lives. You can find Tech Talk on Spotify, SoundCloud, iTunes, and Google Podcasts, as well as Stitcher and TuneIn. When the Supreme Court reversed Roe v. Wade, it enabled states to further restrict and criminalize abortion. Some states can […]

The post Tech Talk: Best Practices for Protecting Reproductive Health Data — Talking Tech w/ CDT’s Andrew Crawford appeared first on Center for Democracy and Technology.

Graphic for CDT’s podcast, entitled “CDT’s Tech Talks.” Hosted by Jamal Magby, and available on iTunes, Google Play, SoundCloud, Spotify, Stitcher, and TuneIn.

CDT’s Tech Talk is a podcast where we dish on tech and Internet policy, while also explaining what these policies mean to our daily lives. You can find Tech Talk on Spotify, SoundCloud, iTunes, and Google Podcasts, as well as Stitcher and TuneIn.

When the Supreme Court reversed Roe v. Wade, it enabled states to further restrict and criminalize abortion. Some states can now prosecute abortion providers, insurers, and, in some cases, even patients themselves. Some states also allow civil actions. Increasingly, law enforcement and civil litigants may turn to companies to gain access to data that could help prove that a person sought, received, aided, or provided an abortion.

Many types of data can reveal sensitive information about a person’s health and healthcare choices. Search queries, browsing history, the contents of communications, and a person’s location data can all reveal such private information, despite not typically being thought of as sources of “medical” or health-related data. Because of this, companies inside and outside the healthcare sector must take responsibility for carefully assessing and limiting the private information they collect, store, and share. Without thoughtful action, a company’s data practices may make it complicit in sending its customers to prison or exposing them to civil litigation for personal choices that remain legal in the majority of the United States.

In the post-Dobbs era, companies must play an active role in protecting their customers’ and users’ private information. Tech Talk’s Jamal Magby sits down with CDT’s Andrew Crawford to explain what companies can do to protect user data.

Listen

(CDT relies on the generosity of donors like you. If you enjoyed this episode of Tech Talk, you can support it and our work at CDT by going to cdt.org/techtalk. Thank you for putting democracy and individual rights at the center of the digital revolution.)
