Cybersecurity & Standards Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/cybersecurity-standards/

Digital IDs Must Be Safe, Secure and Accessible
https://cdt.org/insights/digital-ids-must-be-safe-secure-and-accessible/
Fri, 11 Apr 2025 16:16:19 +0000

The digital replacements for the ID cards in our pockets and purses have already arrived. 

Fourteen U.S. states have created some form of digital IDs, with more piloting and exploring mobile driver’s licenses, and the European Union will require member states to offer them as an option. Digital IDs are typically stored in a digital “wallet” on a user’s mobile phone, and they promise the ability to quickly verify users’ government-issued identities both in person and online.

While digital IDs offer convenience if implemented correctly, they may bring unwanted consequences, such as the potential to track people for reasons innocuous or nefarious. If it’s easy for us to present a digital ID, it’s also easy for a retailer to ask for it when someone wants to enter a store or access a website. And unlike traditional physical ID cards, by default digital credentials leave electronic trails. Companies could match IDs to consumer databases and gain a granular view of consumers’ behavior, which they could then resell for digital ad and direct mail targeting along with other commercial uses. If that data is vulnerable to being hacked or stolen, digital IDs could offer potent avenues for theft and fraud.

Digital ID verification systems could also be configured to “phone home” to a government agency or contractor, potentially creating a central record every time users present an ID. The keepers of the information would gain a powerful way to surveil ID-holders, which would only grow as more and more social and business interactions require people to show their digital papers. 

Conversely, systems based on ubiquitous digital IDs tied to smartphones may exclude some people entirely. Not everyone will have a sufficiently advanced phone or be able to operate one, perhaps because of age or disability. Cell phones also have a habit of running out of power, and someone with a dead phone might not gain entry to an event or access to important information online if we become over-reliant on digital credentials. Technology for instant ID verification makes it much easier for companies to exclude people they don’t want to serve or to block content from certain groups of people based on age, citizenship or immigration status, financial records, and more.

As of now, states in the U.S. are implementing digital ID systems individually, with little or no federal coordination on the horizon. They’re also facing pressure to create these systems from industries that see clear benefits from the ability to seamlessly verify customers – for instance, car dealerships vetting buyers or age-restricted businesses like bars verifying their patrons’ ages. If digital ID mechanisms are set up too hastily, however, the public has no guarantee that they’ll protect privacy effectively and ensure the broadest access possible.

Digital IDs potentially have good use cases, as long as these systems are built in safe and secure ways. What’s needed is coordination, led by an engaged, inclusive program of stakeholders including technology companies that produce digital IDs, corporations that use them, governments that oversee the process, and civil society organizations and consumer representatives who can advocate for the public interest throughout. 

CDT was recently invited to participate in a meeting hosted by the Better Identity Coalition, an initiative of Venable’s Center for Cybersecurity Policy and Law. The coalition includes major banks and tech companies and is focused on working with states to design digital ID systems. At the event, CDT made clear that ensuring users will be protected by systems that put privacy and security at the center of their design is the best path forward for digital IDs that work for everyone. Concrete improvements – including both legislative safeguards and technical protections – are necessary to prevent inappropriate overasking, coercion, exclusion, and breaching of users’ digital IDs, and to enable meaningful consent and proper accountability.

CDT is actively collaborating with colleagues at Georgetown’s Beeck Center for Social Impact and Innovation to develop clear, actionable guidance for managing identity in digital contexts to enable access to government benefits, including by maintaining analog approaches for those beneficiaries for whom digital approaches simply aren’t viable. 

The widespread use of digital IDs may seem almost inevitable at this point, but avoiding annoying, harmful, and discriminatory outcomes is far from assured. We’re eager to participate in multi-stakeholder collaboration — including corporate, government, and civil society actors — to make it more likely that digital IDs benefit users rather than expose them to harm.

CDT Joins Letter to Fix the TAKE IT DOWN Act to Protect Encryption
https://cdt.org/insights/cdt-joins-letter-to-fix-the-take-it-down-act-to-protect-encryption/
Tue, 01 Apr 2025 18:49:43 +0000

(Last updated on April 4, 2025)

Today, CDT joined more than 30 other civil society organizations and cybersecurity experts in submitting a letter to the House of Representatives Committee on Energy & Commerce, urging the Committee to amend the TAKE IT DOWN Act to protect encryption. 

As the letter points out, the TAKE IT DOWN Act aims to combat the distribution of nonconsensual intimate imagery, which is a laudable goal, but its overbroad requirements for content removal could force encrypted service providers to break encryption or implement invasive monitoring technologies, endangering cybersecurity and free speech. 

The letter calls on the members of the U.S. House of Representatives Committee on Energy and Commerce to amend the TAKE IT DOWN Act by adding encrypted services to the list of services that are already excluded from obligations under the Act.

The letter builds on CDT’s advocacy urging modifications to the bill to address its risks to users’ online speech and privacy rights.

Read the full letter here.

A Call for US Leadership in the Digital Age
https://cdt.org/insights/a-call-for-us-leadership-in-the-digital-age/
Mon, 31 Mar 2025 18:13:51 +0000

The global digital economy stands at a crossroads. The decisions made today will determine whether the Internet remains an open engine for innovation, economic growth, and free expression, or becomes fragmented and controlled by forces hostile to these values. The United States has the opportunity—and responsibility—to lead the world towards a future where the Internet empowers individuals, businesses, and societies.

As a group of organizations and experts that believe an open, global, secure, and trusted Internet is crucial to digital trade and online discourse, we are eager to support the administration in advancing principles that protect the Internet’s ability to enable innovation, promote free expression and access to information, and foster a dynamic digital economy.

The Stakes: A Free and Open Internet Under Threat

The Internet has revolutionized the way we live, work, speak, and learn. It has fueled unprecedented economic growth, connected people across borders, and provided a platform for the free exchange of ideas. However, this progress is under threat. A growing number of countries are adopting policies that restrict cross-border data flows, mandate data localization, force the disclosure of source code, and discriminate against foreign digital products. These policies undermine the very foundations of the Internet, threatening its ability to support innovation, economic growth, and fundamental freedoms.

A Call to Leadership

The United States’ long tradition of leadership in promoting an open Internet has directly contributed to its strength as a hub for tech innovation and a thriving digital economy.

Since the 1990s, and particularly beginning in 2013, leaders in Congress, including Senators John Thune and Ron Wyden, pushed for the United States to lead internationally in promoting an open Internet and digital trade, by ensuring that data can flow freely among trading partners and by preventing discrimination against American digital content. From there, the United States promoted and secured international consensus that protected the Internet’s ability to support a thriving U.S. digital economy, including in trade agreements negotiated by the Trump administration with guidance and overwhelming bipartisan support from Congress.

Now is the time to maintain that leadership in the digital realm. The US should work with like-minded countries to establish a framework for open data flows crucial to an open Internet and digital trade, with the following core principles:

  1. Protect The Free Flow of Information: Data is the lifeblood of the digital economy. Restrictions on cross-border data flows, including tariffs on electronic transmissions and limits on access to information, would stifle innovation, limit consumer choice, and impede economic growth and the global exchange of ideas. The US must champion policies that ensure the free flow of information across borders, while respecting privacy and security.
  2. Data Security and Privacy, Not Data Localization: Data localization requirements do not enhance security. In fact, they often have the opposite effect, fragmenting the Internet and making it more difficult to protect data from cybersecurity threats. The US should advocate for policies that promote data security and privacy through international cooperation and the adoption of strong cybersecurity standards, and should push back against protectionist measures that isolate countries, harm businesses, and limit the free flow of information.
  3. Prevent Mandated Source Code Disclosure: Forced disclosure of source code as a condition for doing business in a country undermines intellectual property rights, discourages innovation, and makes businesses vulnerable to cyberattacks. While open-source development fosters transparency and collaboration, mandated access to proprietary code gives adversaries an unwarranted competitive advantage, amplifies the potential for surveillance and exploitation, and jeopardizes national security and the integrity of the Internet. The US must firmly oppose such policies, recognizing that protecting intellectual property is essential for a dynamic digital economy. Disclosures for legitimate judicial and regulatory purposes must be narrowly tailored and accompanied by proportionate privacy and security assurances.
  4. Don’t Discriminate Against Foreign Digital Services and Products: Governments should not discriminate against foreign digital products or services. Such discrimination distorts markets, limits consumer choice, and undermines the benefits of global competition. While governments should be able to enforce generally applicable regulations, the US must advocate for policies that ensure a level playing field for all digital businesses, regardless of their country of origin.

The United States has a unique opportunity to protect the Internet and shape the future of the digital economy. By championing these principles, the US can help build a global digital ecosystem that is open and secure, and works for all. We urge the Administration and Congress to seize this moment and lead the world towards a future where the Internet continues to empower individuals, businesses, and societies around the globe.

Sincerely,

Internet Society

American Civil Liberties Union

Center for Democracy and Technology

Freedom House

Internet Society Washington DC Chapter

Also find the letter here.

Using Internet Standards to Keep Kids Away from Adult Content Online
https://cdt.org/insights/using-internet-standards-to-keep-kids-away-from-adult-content-online/
Tue, 25 Mar 2025 15:13:21 +0000

In an effort to block kids from online content intended for adults, some have argued that age-verification or age-assurance tools offer the possibility of simple, effective guardrails. 

In our brief to the Supreme Court last year, CDT laid out serious concerns these tools raise regarding privacy and First Amendment freedoms – in addition to questions about their efficacy. 

But that doesn’t mean technical solutions can’t address some valid concerns about adult content. In particular, two policies related to internet standards are worth pursuing right now.

First, parents can already set most children’s devices to block adult websites, a mechanism that depends on sites labeling themselves as adults-only via metadata. Most adult content sites are happy to label themselves as adults-only: it’s cheap and easy, and allowing children to view their content raises legal, regulatory, ethical and commercial concerns that sites would rather avoid. These tools would be more effective if backed by well-defined standards that are widely adopted by websites and consistently interpreted by web browsers and parental control tools.
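These self-labels are already machine-readable. As a minimal sketch (not any particular vendor’s implementation), here is how a parental-control tool might check a page’s metadata for an adult rating; the RTA string is the widely deployed Restricted To Adults label, and the generic “adult” value follows the rating-meta convention recognized by major search engines:

```python
from html.parser import HTMLParser

# Values commonly used to self-label a page as adults-only: the RTA
# (Restricted To Adults) label string, and the generic "adult" rating.
ADULT_LABELS = {"rta-5042-1996-1400-1577-rta", "adult"}

class RatingMetaParser(HTMLParser):
    """Scans <meta name="rating" content="..."> tags for an adult label."""

    def __init__(self):
        super().__init__()
        self.is_adult = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        if attr.get("name", "").lower() == "rating" and \
           attr.get("content", "").lower() in ADULT_LABELS:
            self.is_adult = True

def page_is_self_labeled_adult(html: str) -> bool:
    parser = RatingMetaParser()
    parser.feed(html)
    return parser.is_adult
```

A filter or browser extension could run a check like this before rendering a page, with no age-verification data ever leaving the device.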

Alternatively, just as we allow users to request “safe mode” of Google search or YouTube, devices could be configured to request “safe mode” of other sites on the internet. Proactively alerting sites that there’s a young person (or just someone avoiding NSFW content) on the other end of the connection has the advantage of working on platforms that contain content appropriate for general audiences alongside content for adults only.
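No header for this general-purpose signal has been standardized yet, so the header name below (“Sec-Content-Preference”) is purely hypothetical, standing in for whatever name a standards process might settle on; the pattern, though, mirrors deployed mechanisms like the YouTube-Restrict header that network administrators use to force YouTube’s Restricted Mode. A sketch:

```python
import urllib.request

# Sketch of a device-level "safe mode" signal attached to outgoing requests.
# "Sec-Content-Preference" is a hypothetical header name used for illustration.

def build_safe_mode_request(url: str) -> urllib.request.Request:
    request = urllib.request.Request(url)
    # Tell the site that this device prefers general-audience content only.
    request.add_header("Sec-Content-Preference", "safe")
    return request
```

A device configured this way would announce its preference on every request, letting mixed-audience platforms switch to their safe mode without learning who the user is.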

There’s plenty of work to do to implement these tools, but standards for sites to self-label and for users to indicate their content preferences are already being proposed.

It’s possible that in the future age-verification and age-assurance systems will be able to avoid the worst problems of the current systems, perhaps by associating a government-issued ID with unlinkable digital tokens that can be presented to a website without requiring someone to send a photo of an actual ID card or revealing a government-issued identifier. But for the time being, standards-based solutions like these provide the most practical opportunities both to protect children from adult content and to protect the rights of adults to access the content they want, while also avoiding severe privacy and security issues.

Online Censorship Isn’t New – Neither Are Efforts to Evade It
https://cdt.org/insights/online-censorship-isnt-new-neither-are-efforts-to-evade-it/
Tue, 11 Feb 2025 19:06:41 +0000

Americans are beginning to see broader Internet censorship of a kind that has previously been more common in other countries. A brief, partial ban earlier this month on TikTok’s apps (still inaccessible in centralized app stores) and service (currently available in the US, but in limbo) affected over a hundred million American users. Many states have also recently enacted bans of their own, whether of TikTok or of adult content websites like Pornhub.

As federal and state laws come into effect, we should anticipate that Americans will turn to two censorship-circumvention tools that have long been a lifeline for Internet freedom under authoritarian regimes: proxying and content mirroring. As we’ve seen elsewhere, the technical and legal details of how site bans are implemented in the US may determine whether, and which of, these tools are effective.

Proxying

A Virtual Private Network (VPN) is a tool that allows a user to access the internet via another location — whether down the street or halfway around the world. These systems can stand as a distancing layer between users and the content they want to access, hiding their IP addresses, location and identity. VPNs are commonly used, for example, by people accessing adult content sites from states with bans or age-verification requirements, or by those who want to view video content that is restricted to a certain location for licensing reasons. A VPN can also hide which websites a user visits from their internet service provider (ISP), an important requirement when censors impose penalties on the individuals who visit prohibited content. But the VPN provider itself is able to see which websites are being accessed, so the privacy and security protections of the VPN provider are essential to keep users protected.

A multi-hop or “onion routing” approach goes a step or two beyond a VPN by sending a user’s request for content from an app or website through more than one network or node. By forcing a website or regulator to “peel back” the (often encrypted) layers to find a user’s actual location or identity, onion routing further obscures the source of a request and can keep a user’s identity private from both the destination site and the intermediaries.
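The layering idea can be made concrete with a toy sketch. Real onion routing (as in Tor) encrypts each layer to a different relay’s public key; the base64 encoding below is only a stand-in so the wrap-and-peel structure is visible, not real cryptography:

```python
import base64

# Toy illustration of onion routing's layered wrapping. Base64 stands in
# for per-hop public-key encryption; it is NOT a cipher.

def wrap(message: str, relays: list[str]) -> str:
    # Wrap innermost-first: the exit relay's layer goes on first,
    # the entry relay's layer goes on last.
    payload = message
    for relay in reversed(relays):
        layer = f"{relay}|{payload}"
        payload = base64.b64encode(layer.encode()).decode()
    return payload

def peel(payload: str) -> tuple[str, str]:
    # Each relay removes exactly one layer, learning only the next hop.
    relay, inner = base64.b64decode(payload).decode().split("|", 1)
    return relay, inner
```

Each relay sees only its own layer, so no single party observes both who is asking and what they asked for.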

Content Mirroring

Content mirroring occurs when a provider copies banned content onto another channel that’s not being blocked.

When governments have banned popular websites like Wikipedia, users (often based in other countries) have taken the initiative to create sites that “mirror” blocked websites but that may still be accessible. For instance, if regulators block versions of the website specific to the languages their citizens use, activists may copy an English-language edition of the site, translate it and host it in an environment that’s still accessible. Governments can and will try to crack down on mirror sites, however, by blocking access when they find them. Although mirroring can be a powerful tool for avoiding censors in some situations, in others (like the TikTok ban) it may not be a practical solution at a large scale.

The concept of mirroring can also be extended on a small scale: people may share content they previously accessed, like screenshots or downloaded files, via direct messaging, local networks or social media.

Other techniques — including domain fronting, encrypted DNS, and encrypting server names in large cloud providers — are used to circumvent censorship where the government mandates blocks by network providers, often to block access to content hosted on servers outside the government’s jurisdiction. These techniques are typically applied by the platform sharing the content, in coordination with hosting providers or client software. But where laws like the TikTok ban and adult content age-verification requirements apply directly to the US companies doing hosting or app distribution, rather than to network-level providers, these circumvention techniques aren’t likely to be used.
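Of those network-level techniques, encrypted DNS is the easiest to see in miniature. Under RFC 8484 (DNS over HTTPS), the DNS question is encoded in ordinary DNS wire format and carried as the base64url-encoded “dns” parameter of an HTTPS request, so an on-path censor sees only a connection to the resolver, not the name being looked up. The resolver hostname below is a placeholder:

```python
import base64
import struct

# Minimal DNS-over-HTTPS (RFC 8484) query construction. The query travels
# inside an ordinary HTTPS request, hiding the looked-up name from the network.

def dns_query_wire(hostname: str) -> bytes:
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cacheability), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    qtype_qclass = struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question + qtype_qclass

def doh_url(resolver: str, hostname: str) -> str:
    # RFC 8484 uses unpadded base64url in the "dns" query parameter.
    encoded = base64.urlsafe_b64encode(dns_query_wire(hostname)).rstrip(b"=")
    return f"https://{resolver}/dns-query?dns={encoded.decode()}"
```

Fetching that URL over HTTPS (with an `Accept: application/dns-message` header) returns the DNS answer in the same wire format.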

A Digital Red Cross: Keeping Humanitarian Aid Safe from Cyberattack
https://cdt.org/insights/a-digital-red-cross-keeping-humanitarian-aid-safe-from-cyberattack/
Fri, 13 Dec 2024 17:30:11 +0000

Keeping medical and relief workers safe in a war zone is a profoundly difficult task. In 1864, encouraged by the founder of the organization that would become the International Committee for the Red Cross (ICRC), the Geneva Conventions created a visual standard that attempts to protect aid workers by clearly marking them with a red cross. The red cross emblem has since been joined by the red crescent and red crystal – all emblems marking noncombatants doing humanitarian work in a war zone under the protections of international humanitarian law.

But in the 21st century, digital systems have become targets in conflicts across the globe. From DDoS attacks to data theft and website and social media takeovers, combatants have gone after online systems to hobble an opponent’s military and its supply lines, undermine support for a government, or simply embarrass the other side in front of its citizens and allies. Whether targeted intentionally or not, relief organizations, healthcare systems and medical workers may get caught in the digital crossfire, disrupting their life-saving services and potentially hurting the wounded and the vulnerable whom they help.

The ICRC, as well as the Red Cross and Red Crescent societies from countries around the world and the states that are signatories to the Geneva Conventions, have begun work to develop the virtual equivalent of a red cross or crescent—a digital emblem—that can be used to signal protections during cyber operations. CDT is working to help convene multistakeholder processes to develop the standards and systems necessary to define and deploy digital emblems that can serve protective purposes in digital conflicts. This is both technical and diplomatic work, and requires coordination among different actors with different forms of expertise who share a common humanitarian project.

This past May, CDT helped to organize a hybrid technical workshop, bringing together ICRC legal experts with experts from the technical sector and civil society to explore the need for a digital emblem and the potential for standardization. Technical discussions included the challenges of deploying unambiguous, authenticated emblem signals, and of communicating emblems in such a way that cyberattackers can check for them without revealing themselves: if potential attackers can’t find out whether an online target is actually a hospital without exposing their own presence, they won’t bother to look. Participants also recognized the need for accountability mechanisms in the law.

This past October, at the quadrennial International Conference of the Red Cross and Red Crescent, the 196 nation states party to the Geneva Conventions (as well as the 191 Red Cross societies from those nations) resolved by consensus to encourage further research, design and development of a digital emblem.

Technical standardization is a key next step. The Internet Engineering Task Force (IETF) now has an opportunity to move the process along by forming a working group to address humanitarian digital emblems. The group would incorporate voices from the aid community, from the technology world, from governments, and from civil society organizations like CDT. 

The requirements for this digital emblem, and for digital forms of other distinctive emblems and signs, are now being specified to guide the standards work. Consensus work can be slow but worthwhile, and two productive “birds of a feather” sessions have already demonstrated broad interest in the work and strong opinions on its scope. Requirements may include:

  • Authentication by a wide variety of sources—for example, the ICRC doesn’t determine who can apply the existing emblems such as the Red Cross or Red Crescent; instead, that determination is made by individual states worldwide;
  • Covert inspection—in order for attackers to recognize and respect this mechanism, checking for a digital emblem shouldn’t be an activity that reveals their identity;
  • Resilience and usability by humanitarian organizations (and their digital services vendors) in zones of conflict around the world—a digital emblem needs to be easy to deploy, much like the red cross is easy to paint, and easy to remove from a digital property when needed;
  • Voluntary adoption of the symbol.

Interoperable standards make it possible to cover a cluster of use cases with a single technical design. International humanitarian law provides an array of protections to a wide range of activities, from protecting historic artifacts to ensuring civil defense organizations are able to undertake their life-saving work, and this kind of effort may be applicable in those cases as well.

Of course, no digital emblem will protect an organization from truly malicious actors, including hackers seeking data to ransom or a state-based group that sees collateral damage as a bonus, not a cost. In the physical world, hospitals and healthcare workers have been ambushed or bombed far too often. But a digital system that echoes the protections accorded to physical aid workers would at least give organizations a chance to shield themselves and their work from digital disruption. And it should provide for potential accountability through international governance mechanisms when medical services are disrupted by cyberattack.

This important endeavor could not succeed without the deep interest, engagement and support from the technical community, nation states, the tech industry, academic researchers and the Red Cross movement generally. CDT is proud to work with the ICRC and other stakeholders to help shepherd the digital emblem through the technical standardization process.

CDT Helps Form New W3C Privacy Working Group
https://cdt.org/insights/cdt-helps-form-new-w3c-privacy-working-group/
Wed, 11 Dec 2024 16:07:07 +0000

In an important step for online privacy, the World Wide Web Consortium (W3C) has now formed a dedicated Privacy Working Group to help ensure that new standards incorporate mechanisms to protect users’ data when browsing the web.

The Privacy Working Group has been planned for some time, but it was held up by a procedural objection, which was ultimately overruled by a W3C council. Now, the Working Group — co-chaired by representatives from CDT, Brave Software and the Internet Society — is getting down to the business of developing new web standards to protect users across the globe. 

The W3C has long been active on privacy issues, of course. CDT has contributed to the organization’s work on these issues for decades, primarily through privacy reviews of proposed new web standards. This new group, however, will do more than review standards proposals from other groups. It will bring together companies, governments, civil society and other interested parties to work proactively on new privacy technology for the web.

First on the agenda: The group has now officially published a draft of the Global Privacy Control (GPC) specification. GPC will allow web users to “flip a switch” in their browsers to request their information not be sold or shared with others. As we noted last year, it’s time to standardize the Global Privacy Control in order to facilitate implementation by companies, including in browsers but also by websites that are required by law in some jurisdictions to comply with GPC requests. And a standard will give policymakers a reference point when they develop laws or regulations allowing people to easily exercise a right to exclude their data from being sold.
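On the receiving end, honoring a GPC signal can start with a single header check. The draft specification has participating browsers send the “Sec-GPC: 1” request header (alongside a JavaScript navigator.globalPrivacyControl property); the helper below is an illustrative sketch, not code from any particular framework:

```python
# Sketch: server-side check for the Global Privacy Control signal.
# Per the GPC draft, a browser expressing the preference sends "Sec-GPC: 1".

def do_not_sell_or_share(headers: dict) -> bool:
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"
```

A site legally required to honor GPC (for instance, under California’s CCPA regulations) could gate its data-sale and ad-sharing code paths on a check like this.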

Beyond GPC, the Privacy Working Group will examine a range of issues, including privacy for survivors of intimate partner violence seeking information online and protections against invasive online tracking mechanisms including device IDs, IP addresses and browser “fingerprinting.” With digital tracking and surveillance becoming ever more pervasive, the Privacy Working Group will be positioned to respond to new technologies and other developments in the field as they emerge.

CDT invites companies, civil society organizations, government representatives and other interested parties to join the Working Group and collaborate on privacy standards.

Happy 30th Birthday, W3C
https://cdt.org/insights/happy-30th-birthday-w3c/
Fri, 27 Sep 2024 17:51:49 +0000

The post Happy 30th Birthday, W3C appeared first on Center for Democracy and Technology.

This week, I was honored to take part in a 30th anniversary celebration of the World Wide Web Consortium (W3C), the international standards body for the web. The W3C’s birthday is coming up on October 1, and I was part of a group that gathered in California – as well as many more joining online – for a W3C@30 event on September 24th, during the annual technical plenary week of meetings.

The other speakers and I examined the profound effects the W3C has had on the way the web has evolved from a niche application to something deeply integrated into our everyday lives. My own talk focused in particular on human rights and internet standards, with a view to what we’ve learned over the past three decades and the importance of our continued work.

We’ve had the basic technology of the web for 35 years, and since 1994, the international community has been working through the W3C to create the standards and interoperability necessary for it to thrive. In that time, we’ve seen the advantages of a global communications network for human rights, including our ability to read, see and hear voices that had been largely excluded from the public conversation before the digital era. We’ve seen activists and others use the web to bring abuses and atrocities to the world’s attention, and we’ve experienced the power of collaboration across national and cultural lines for everything from academic progress to artistic creation.

We’ve also seen the risks, from the invasion of privacy to online harassment, censorship and the power of misinformation, disinformation and propaganda to sway minds. As the W3C reaches a new milestone, we need to realize that we as a global community need intentional work to mitigate the internet’s real dangers to the rights — and sometimes the lives — of billions of people.

The W3C’s history as a multistakeholder organization shows us a promising way forward. When everyone from tech companies to national governments to academics and civil society organizations take part, the array of perspectives in the room helps us take a broader view of the questions about rights that are embedded in the decisions we make about how technologies should work.

For more, watch my presentation below as well as other talks at W3C@30. 


Full Video: https://www.youtube.com/watch?v=-LNUg5vDWDY

Full Text:

Good evening, I’m Nick Doty, senior technologist at the Center For Democracy and Technology. And it’s such a pleasure to be here to celebrate the 30th anniversary of W3C.

For those who aren’t familiar, the Center for Democracy & Technology (CDT) is a nonpartisan, nonprofit organization fighting to advance civil rights and civil liberties online. We shape technology policy, governance, and design with a focus on equity and democratic values.

CDT was an early organizational member of the World Wide Web Consortium in the 1990s and my predecessors at CDT have been participating in W3C standardization since 1995. 

I’m honored to be able to continue that long-time civil society participation and to be here to speak about the work we are all engaged in to support human rights in web standards.

Over those past 30 years, the Web has been an incredible boon for humanity and for human rights.

There may not be a singular source of human rights, but the Universal Declaration of Human Rights – adopted in 1948 – should be especially meaningful to this community because of its consensus development by countries around the world. 

Looking through that important text, I see the impact of the Web in supporting:

  • freedom of expression,
  • freedom of assembly and association,
  • freedom from discrimination,
  • access to public services,
  • the right to education,
  • the right to work,
  • and the right to participate in cultural life.

We use the Web now for political organizing, news reporting, social connection, healthcare, interacting with our governments, employers, and schools. It is where we work and learn, where we speak up and where we listen.

And W3C has played an important role in making those rights and freedoms a reality through its collaboration on web standards and interoperability.

The United Nations Office of the High Commissioner for Human Rights recognized that importance, and the role of technical standards in particular, in a report to the Human Rights Council: “Human rights and technical standard-setting processes for new and emerging digital technologies.” That report cites the work of W3C heavily as an example of considering the human rights impacts of technical standard setting.

Anniversaries are a fine time to reflect on those achievements and their importance to a growing number of people around the world.

But as the Web has become an essential part of public and community life, we also must realize the very significant threats to human rights online.

  • surveillance and threats to privacy (where I have spent much of my career, and where many of you have seen me discussing issues in your working groups)
  • discrimination, harassment and abuse
  • censorship and threats to freedom of expression and association
  • threats to security, safety, dignity or sustainability

In recognizing those threats, we acknowledge a weighty and urgent responsibility to protect users and society in our standards work.

Thirty years ago it might have been possible to unplug and leave the Web behind. Now, for people all around the world, including people living under dangerous and oppressive regimes, that’s simply not an option. We need to think about standards as if lives depend on them: because they do.

I remain an optimist about the potential for technology, especially the Web, to strengthen support for human rights, that tomorrow can be better than today.

But to live up to that promise, our community will need to recommit itself to the essential work of supporting human rights in the design of Web technology.

Human rights must not be an afterthought to the work that we’re doing. In today’s world there cannot be human rights offline if they aren’t protected online. And they can’t be protected online without the careful work of the people here today, among others.

I’m proud of the work that’s already ongoing, including

  • Ethical Web Principles
  • Privacy Principles, which I hope will be among our first W3C community Statements
  • Web Content Accessibility Guidelines and other work of the Web Accessibility Initiative
  • Internationalization work that you heard about earlier this evening
  • from colleagues at IETF/IRTF, a published set of Guidelines for Human Rights Protocol and Architecture Considerations
  • and I’m especially proud of W3C’s Horizontal Review process, and the consideration of those cross-cutting concerns across all of the Web standards work that we do

But there’s more to be done. As an open multistakeholder standard-setting body, W3C has a precious opportunity to be broadly inclusive and to collaborate on, address and proactively support human rights in Web standards and Web technology. This is not a matter for some other group, but for *us* as a technical and human community.

We need to consider human rights in all the work that we do at W3C in the next 30 years. That includes reviews, but also the development of new standards and revisiting old standards. And to be successful we must be more broadly inclusive of participation from around the world.

To return to the text of the Universal Declaration of Human Rights, one article stands out to me:

“everyone has duties to the community” (Article 29)

We are here to celebrate the achievements of the standards-based Web, but also to recognize our duties to the community in supporting human rights in Web technology. 

I look forward to continuing that important work with you all.

CDT Files Amicus Brief in Free Speech Coalition v. Paxton, Challenging TX Age Verification Law https://cdt.org/insights/cdt-files-amicus-brief-in-free-speech-coalition-v-paxton-challenging-tx-age-verification-law/ Mon, 23 Sep 2024 16:51:34 +0000 https://cdt.org/?post_type=insight&p=105703 CDT, in coalition with New America’s Open Technology Institute, The Internet Society, Professor Daniel Weitzner, Professor Eran Tomer, and Professor Sarah Scheffler, filed an amicus brief in the Supreme Court with the assistance of Keker, Van Nest & Peters in Free Speech Coalition v. Paxton. In this case, Free Speech Coalition is challenging a Texas […]

The post CDT Files Amicus Brief in Free Speech Coalition v. Paxton, Challenging TX Age Verification Law appeared first on Center for Democracy and Technology.

CDT, in coalition with New America’s Open Technology Institute, The Internet Society, Professor Daniel Weitzner, Professor Eran Tomer, and Professor Sarah Scheffler, filed an amicus brief in the Supreme Court with the assistance of Keker, Van Nest & Peters in Free Speech Coalition v. Paxton. In this case, Free Speech Coalition is challenging a Texas law requiring websites and online services that host a certain percentage of “sexual material harmful to minors” to verify the age of their visitors and prevent minors from accessing their sites. After the Fifth Circuit Court of Appeals vacated the injunction preventing the law from going into effect, relying upon reasoning wholly contradicting established Supreme Court precedent, Free Speech Coalition appealed to the Supreme Court.

In our amicus brief supporting Free Speech Coalition, we describe the various current methods of age verification that would satisfy the requirements of the Texas statute. For each method, we note how it purports to work, and the ways in which it is often inaccurate; can be circumvented; presents privacy and security risks; and may be entirely inaccessible to certain groups, including undocumented immigrants, unbanked individuals, people with disabilities, and others who either do not have access to government IDs or who might be more commonly misidentified by biometric technology.

We explain that these shortcomings in currently available age verification methods are of constitutional import: they will prevent and chill adults’ access to constitutionally protected speech, and they will fail to achieve the government’s goal of protecting children, who will still be able to access the content either by easily circumventing these technologies or because of the error rates of age estimation technologies (which often were not trained on young faces). We also detail the ways in which current age verification methods increase risks to both privacy and security online, endangering not just access to the information the statute contemplates, but also internet use more broadly for everyone online.

For those reasons, we believe the Texas statute is unconstitutional and should not be enforced.

Read the full brief here.

Controlling AI Training Crawlers: Beyond Copyright https://cdt.org/insights/controlling-ai-training-crawlers-beyond-copyright/ Mon, 16 Sep 2024 17:49:18 +0000 https://cdt.org/?post_type=insight&p=105588 AI-powered chatbots draw on the collective work of billions of humans. To respond to the queries users enter into their prompts, ChatGPT, Google Gemini, Microsoft Copilot and other large language models rely on their analysis of trillions of words of human writing posted online. Likewise, image generators couldn’t turn out graphics without analyzing the billions […]

The post Controlling AI Training Crawlers: Beyond Copyright appeared first on Center for Democracy and Technology.

AI-powered chatbots draw on the collective work of billions of humans. To respond to the queries users enter into their prompts, ChatGPT, Google Gemini, Microsoft Copilot and other large language models rely on their analysis of trillions of words of human writing posted online. Likewise, image generators couldn’t turn out graphics without analyzing the billions of photos and illustrations we’ve already put on the web. Unless they’ve instead been given a specific, limited store of content to learn from, these systems “crawl” the web to learn enough to perform the tasks people now routinely ask them to do.

When the rights of the humans who created all that content have been discussed, it’s generally been in the context of copyright law. Many AI firms are already facing lawsuits from writers and artists upset that tech companies are profiting from the use or reproduction of their work. But copyright isn’t the only relevant interest, and courts aren’t always the best way to work out complex issues with many stakeholders. 

Many people won’t have a copyright claim, for example, but they may still care how their work is used or how information about them is shared. Meanwhile, researchers want the ability to analyze content for scientific and public-interest purposes without getting caught up in disputes about copyright or how AIs are trained. Plus, relying on the legal system to adjudicate an AI’s ability to access content may favor the big players, since they’ll have more resources to fight for their interests in court or to establish exclusive licensing arrangements.

Fortunately, the internet standards-setting process provides an alternative model for working out these issues. Tech standards bodies have a long history of finding solutions to problems with a variety of stakeholders involved, including tech companies large and small, civil society organizations, national governments, researchers, individual users and more. We also have a precedent for giving website owners a technical means to communicate their preferences about automated attempts to index their content. As search engines began crawling the web to index content three decades ago, an informal, collaborative process yielded the “robots.txt” file, which lets website managers at least indicate parameters for search-engine crawlers.
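As a sketch of how that existing mechanism works, the Python standard library’s `urllib.robotparser` can evaluate a robots.txt policy. The policy below is hypothetical: `GPTBot` is a real AI-training crawler’s user-agent token, while `ExampleSearchBot` is a name invented here to stand in for an ordinary search crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: ask one AI-training crawler to stay out
# entirely, while letting other crawlers index everything outside
# the /private/ path.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The training crawler is disallowed everywhere on the site...
print(parser.can_fetch("GPTBot", "https://example.org/article.html"))
# ...while the ordinary search crawler may fetch public pages,
# but not anything under /private/.
print(parser.can_fetch("ExampleSearchBot", "https://example.org/article.html"))
print(parser.can_fetch("ExampleSearchBot", "https://example.org/private/data.html"))
```

Note that, as the post goes on to say, this is purely advisory: `can_fetch` tells a well-behaved crawler what the site owner asked for, but nothing in the protocol enforces compliance.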

With this history in mind, the Internet Architecture Board will convene a workshop in Washington, DC this week, focused on controls on crawling for AI training and the potential of standards work at the Internet Engineering Task Force. Short position papers from potential participants explore the issues involved and potential standards to allow content creators to track or opt out of having their work become part of the training set for an AI model. CDT’s Eric Null will present, to encourage consideration of privacy and other non-copyright interests and inclusion of a breadth of stakeholders in a standard-setting process.

Technical standards are only recommendations and will not by themselves provide an enforcement mechanism. Nevertheless, an updated technical standard could help take steps towards a consensus that takes into account the wishes of the people who’ve created and maintained all that web content that’s being used for chatbots and training other AI tools.
