Internet of Things Archives - Center for Democracy and Technology
https://cdt.org/area-of-focus/privacy-data/internet-of-things/

Answers to Questions from Senate Commerce on Privacy & Big Data in Coronavirus Response
https://cdt.org/insights/answers-to-questions-from-senate-commerce-on-privacy-big-data-in-coronavirus-response/ (April 17, 2020)

Questions Submitted by Members of the Senate Committee on Commerce, Science, and Transportation on Enlisting Big Data in the Fight Against Coronavirus

Answers by Michelle Richardson, Director, Privacy and Data Project at the Center for Democracy and Technology

April 15, 2020

Chairman Wicker

1. As the chairman points out, app-based programs are proliferating and will likely draw on increasingly large or diverse datasets. Regarding privacy, apps that do not transfer personal information are best of class. Those that need personal information should be subject to strict purpose limitations so data cannot be used for non-coronavirus applications. As for effectiveness, there is no reliable data available at this time. Even though location and proximity tracing apps have been deployed in other countries, their impact has not been disentangled from contemporaneous efforts like widespread testing, compulsory quarantines, public information on the movement of infected individuals, and other responses. 

2. We do not believe that privacy and effectiveness are inversely proportional. Given the extraordinary resources that U.S. companies are investing in the coronavirus response, it is not a tradeoff we need to accept. In fact, excess data collection can often hide useful ‘signals’ behind a lot of data ‘noise’. Data collectors should have a clear idea of what data they want and why. This will encourage minimal data collection and strong data limitations, and produce the best health outcomes.

3. A comprehensive privacy law would have likely had several effects. First, it would have encouraged companies to conduct research in privacy protective ways. For example, the Chairman’s draft bill includes protections for public interest research that is necessary, proportionate and limited in purpose. It also excludes aggregate and de-identified data from its scope altogether. To maximize data use while receiving liability protection, companies would be more likely to commit to these methods. Second, under these protections people would likely feel more comfortable sharing their personal information. Knowing that there are clearer and more meaningful rules – including a way to enforce them – would encourage people to take part in voluntary data sharing that may currently feel too risky.

4. We recommend that location tracking use aggregated and anonymized data whenever possible. Less stringent de-identification tactics – such as creating a pseudonymous identifier – are not sufficient for such a sensitive data set. Because individual location data is so easy to re-identify, its collection should be strongly disfavored. We are still working to understand how to effectively use anonymized, de-identified, and aggregate location data, but one area of benefit is allowing public health officials to identify and compare, in aggregate, the effectiveness of social distancing measures.
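
To illustrate how little effort re-identification can take, here is a minimal sketch (hypothetical records and field layout, not any real dataset) that guesses a device owner's home from pseudonymized trips simply by finding the most common nighttime trip origin:

```python
from collections import Counter

# Hypothetical pseudonymous trip records: (device_id, hour_of_day, lat, lon).
trips = [
    ("abc123", 23, 38.9072, -77.0369),
    ("abc123", 22, 38.9071, -77.0368),
    ("abc123", 8, 38.8977, -77.0365),
]

def likely_home(records, device_id, precision=3):
    """Guess a device's home: its most common nighttime trip origin,
    rounded to roughly a city block (3 decimal places)."""
    origins = Counter(
        (round(lat, precision), round(lon, precision))
        for dev, hour, lat, lon in records
        if dev == device_id and (hour >= 21 or hour <= 5)
    )
    return origins.most_common(1)[0][0] if origins else None

print(likely_home(trips, "abc123"))  # -> (38.907, -77.037)
```

Pairing that inferred home point with public records is often enough to put a name to the pseudonym, which is why pseudonymization alone is insufficient for location data.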

5. Data collected or shared during this health emergency should only be used to inform the response to the COVID-19 pandemic. The data should not be repurposed or retained for any other reason. Once the immediate public health crisis has passed, data collected by companies and the government should only be used by researchers for the sole purpose of learning from this episode and planning for future occurrences. Otherwise, the data should be destroyed. This is crucial for maintaining public trust and hence public health. Without these controls, the public is less likely to share data and more likely to actively subvert data collection methods.

For the rest of our answers to Congress’ questions, as well as our other teams’ important work monitoring and guiding the coronavirus response, look to the resources box on this page.

Michelle Richardson’s written testimony in this hearing.

CDT’s Letter to the District DOT Regarding Mobility Data
https://cdt.org/insights/cdts-letter-to-the-district-dot-regarding-mobility-data/ (March 20, 2020)

Shared mobility services, including ride shares, scooters, and bikes, have seen growing usage across the United States. To manage their streets and public spaces effectively, cities have begun compelling the operators of these services to disclose data reflecting service usage, in some cases including real-time location data.

Today the Center for Democracy & Technology sent a letter to the District of Columbia’s Department of Transportation (DDOT), highlighting concerns we have with the DDOT’s decision to compel mobility service providers to disclose sensitive location information reflecting the travels of their customers. The DDOT decided to adopt the Mobility Data Specification (MDS), and we have been informed that it also intends to compel providers to disclose some of the data in real time, or near real time. We have many concerns with the adoption of MDS. Location data, even when de-identified (not directly tied to a credit card or customer profile), is very difficult to anonymize, and we are concerned that the data the DDOT intends to compel could be associated with an individual traveler.

Location data is a sensitive category of personal data that “provides an intimate window into a person’s life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’” We urged the DDOT to adopt a different approach to data reporting, preferably one limited to the reporting of aggregated data, rather than individual trip level data. Properly aggregated, such data can serve legitimate city planning needs and protect privacy at the same time.
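
One way to picture the difference: instead of receiving each trip, the city receives counts per coarse zone and hour, with small counts suppressed. A minimal sketch (hypothetical zones and threshold; actual aggregation rules would need careful design, and this is not the MDS schema):

```python
from collections import Counter

# Hypothetical trip-level records the operator holds: (origin_zone, hour).
trips = [("dupont", 8), ("dupont", 8), ("dupont", 8), ("navy_yard", 8),
         ("dupont", 9), ("dupont", 9), ("dupont", 9), ("dupont", 9)]

K_MIN = 3  # suppress any zone/hour cell with fewer trips than this

def aggregate(records, k_min=K_MIN):
    """Report trip counts per (zone, hour), dropping cells too small
    to hide an individual rider."""
    counts = Counter(records)
    return {cell: n for cell, n in counts.items() if n >= k_min}

print(aggregate(trips))
# {('dupont', 8): 3, ('dupont', 9): 4} -- the lone navy_yard trip is suppressed
```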

CDT Comments to the ITC on Competition in the Smart Speaker Market
https://cdt.org/insights/cdt-comments-to-the-itc-on-competition-in-the-smart-speaker-market/ (January 23, 2020)

Earlier this month, Sonos sued Google in federal court and filed a complaint at the International Trade Commission, alleging that Google stole its intellectual property and infringed its patents. The cases involve complicated patent questions and illustrate the challenges tech companies face when working together to promote interoperability while simultaneously needing to protect their intellectual property rights.

The ITC asked the public for comments on whether its investigation should include an inquiry into how excluding Google speakers (and related products) from importation would affect competition or other aspects of the public interest.

CDT takes no position on the underlying patent questions. But we feel strongly that the public’s interest in robust competition is an important consideration. So we filed comments with the ITC, urging it to investigate whether excluding Google speakers and related products at the border could harm competition in this nascent market. Connected speakers are increasingly becoming part of American consumers’ everyday lives, and as consumers acquire more and more devices that connect to smart speakers, the implications for competition grow more complicated. You can read our comments here.

A “Smart Wall” That Fails to Protect Privacy and Civil Liberties Is Not Smart
https://cdt.org/insights/a-smart-wall-that-fails-to-protect-privacy-and-civil-liberties-is-not-smart/ (February 8, 2019)

Congress is working on a border security spending package ahead of the February 15 deadline, after which another partial government shutdown would begin. Democratic leadership’s vision for border security is a “smart wall,” a non-physical barrier composed of technology like drones and sensors. Last Thursday, Democrats published their initial border funding proposal, which provided more detail: it would provide U.S. Customs and Border Protection (CBP) $400 million for border security technology procurement and deployment. While the deal is negotiated, we urge Congress not to write CBP a no-strings-attached check to build a “smart wall.” Technology is not a panacea for the problems at the border.

Congress should take note of the following:

Some Technology is Particularly Invasive

CBP currently deploys a variety of sensors, radar, thermal imaging devices, and cameras along the border, as well as drones and other types of aerial surveillance, automatic license plate readers, and biometric collection and identification systems like facial recognition technology. Identification systems can be particularly invasive, as these technologies allow the government to monitor, identify, and track individuals. Individuals living near the border currently bear the brunt of border surveillance, and should not be subject to the constant warrantless monitoring that such surveillance tools facilitate. These are the technologies that demand more careful deliberation before Congress authorizes continued or expanded funding.

Some of these surveillance technologies are ineffective or can lead to “false positive” identification of individual targets. A 2015 DHS Inspector General (IG) review of CBP’s drone program determined that drones are “dubious achievers” and expensive to operate. The IG concluded that “[n]otwithstanding the significant investment, we see no evidence that the drones contribute to a more secure border, and there is no reason to invest additional taxpayer funds at this time.” CBP has also solicited small drones armed with facial recognition technology. We urge Congress not to approve funding for this kind of technology. Accuracy concerns loom over facial recognition—research demonstrates that error rates are not evenly distributed across race and gender. CBP seeks to use this technology to identify the border crossers Border Patrol officers encounter in the field. A mistaken match could result in a fatality—the stakes are simply too high to green-light funding for this technology.

If Congress decides to fund technology to surveil people at the “border,” such funding must be conditioned upon safeguards to ensure the preservation of rights. One such condition is a geographic limitation on use. CBP claims the authority to operate in the entire border zone, which constitutes any land within 100 miles of the actual border of the U.S.—an area containing over 200 million people. Some states are entirely enveloped in the border zone. CBP conducts surveillance with few limitations in this area. For example, CBP’s Federal Aviation Administration (FAA) drone authorization allows CBP drones to operate along and within 100 miles of the northern border, and along and within 25 to 60 miles of the southern border. CBP operates automatic license plate readers (ALPRs) at ports of entry and checkpoints, and claims the authority to set up ALPRs anywhere within the border zone. This technology, which automatically collects a car’s license plate number and location with a timestamp, can be used to create detailed maps of individual movements. CBP claims authority to gather and analyze all of this data without a warrant. Efforts have been made to limit CBP’s ability to operate these checkpoints to within 25 miles of the border. A similar restriction on drone surveillance would help limit how overbroad border security missions can, and have, become.

Congress should also limit the extent to which CBP shares and retasks its surveillance technology. CBP shares its drones with police departments, and retasks them to support non-Border Patrol missions. Data collected from these technologies must be subject to stringent collection, retention, sharing, and use limitations. Finally, any procurement and deployment of a technology should be subject to an independent evaluation of rights compliance and efficacy in accomplishing the purpose for which it was deployed.

Surveillance at the Border Will Not Stay at the Border

The tools we provide CBP today will be the tools of tomorrow’s law enforcement. Technology is tested on border crossers and border communities. CBP normalizes the surveillance, and serves as a bridge for technology to leave war zones like Afghanistan and enter the interior of the United States. In the past, CBP has shared technology with law enforcement, which can lead to efforts by local police to procure their own. Drones and facial recognition technology are examples of this border-to-police pipeline. Congress should be aware that it is shaping not only border security, but also the future of law enforcement.

To this end, Congress should take steps to ensure that broader uses of technology in the interior are and will be rights-respecting. The limitations discussed above would help achieve these goals—restricting retasking and use, as well as requiring a study of effectiveness. Additionally, Congress should require CBP to study the impact its use of technology has on the privacy rights of individuals living in the United States, and identify ways to mitigate any encroachments it finds. For example, such a study could result in CBP identifying tools and techniques that are less invasive while still allowing it to achieve its mission.

We’ve Been Here Before: Smart Wall, Meet Smart Fence

The fervor to solve border security with technology is not new. Before anyone had uttered the words “smart wall”, there was a “smart fence”. In 2005, DHS launched the Secure Border Initiative (SBI), which called for physical fencing at the U.S.-Mexico border, complemented by a “virtual fence” of cameras and sensors. This second layer, termed SBInet, would alert CBP whenever anyone hopped the fence. In 2010, the Government Accountability Office (GAO) determined the project did not “live up to expectations,” and SBInet was ultimately scrapped in 2011 after almost $1 billion in accrued expenses. The project failed due to poor management and technology that did not function as promised. SBInet serves as a cautionary tale against overreliance on technology, and a reminder for DHS and CBP to adequately assess how technology meets operational needs. Reviewing the project, the DHS Inspector General observed that “SBInet clearly illustrates that poorly defined and documented operational requirements, and failure to adequately plan, results in missed milestones and wasted resources.” There’s little reason to believe the agencies have learned this painful lesson.

In 2017, the DHS IG said CBP should learn from SBI while planning for a number of acquisitions to secure the southern border: “because CBP lacks strong well-defined operational requirements and an overall strategy framework for securing the 2,000 miles of border, CBP may not properly focus and stabilize the direction of the acquisition.” Furthermore, CBP has not demonstrated an ability to measure the effectiveness of the technology it deploys. A March 2018 GAO report reviewing CBP’s technology deployment at one section of the border observed that “the Border Patrol has not yet used available data to determine the contribution of surveillance technologies to border security efforts.” CBP’s drone program is again an example of the agency’s failure to match technology with needs. The IG determined that drone surveillance assisted in fewer than 2% of CBP’s apprehensions at the border.

Congress needs to be smart about this “smart wall.” CBP’s history of grossly mismanaging technology projects, and its liberal use of surveillance tools beyond the physical border, caution against a hands-off approach. Any funding Congress provides to invasive border surveillance technologies should be conditioned on efficacy requirements and limitations on use that are designed to preserve the human and civil rights of those against whom they will be used.

Techsplanations: Travel Tech
https://cdt.org/insights/techsplanations-travel-tech/ (December 21, 2018)

We’ve created a new series of blogs and resources, entitled “Techsplanations,” with the goal of providing folks with a better understanding of the technologies that shape our everyday lives. As before, please refer to this glossary for quick reference to some of the key terms and concepts.

Over this holiday season, many of us are traveling to visit family and loved ones. To help you take control of your online data as part of your 2019 New Year’s resolutions, we have compiled a “naughty and nice” list featuring privacy and security enhancing steps that CDT staff rely upon during their travels.

Naughty: Failing to protect your confidential work
Nice: Using a privacy screen

On a recent cross-country flight, I sat next to an associate from a major consulting firm who was editing slides that discussed the rollout of a new strategy for a Fortune 50 company. If I had been so inclined, I could have spent the full five hours reviewing the strategic slides right along with him, since he failed to use a privacy screen on his laptop. (Luckily for him, I did not do this!) A privacy screen is a thin sheet of polarized plastic that you can attach to the screen of your laptop, tablet, or phone to prevent the contents of the screen from being viewed from side angles. Privacy screens can either be affixed more permanently with adhesive or attached magnetically, so they can be easily removed as needed.

Naughty: Infecting your devices with viruses
Nice: Using a USB condom

There are an increasing number of public places offering USB sockets where you can plug in your device to recharge using a USB cord. While very convenient, these USB sockets can also be used to either extract data from your device or to pass along malware and viruses. A simple solution is to add a USB condom to your travel bag. While allowing electricity to flow through it, this device prevents data from being exchanged. (Plus, your mother will be pleased that you are practicing safe tech!)

Naughty: Using one password for all of your accounts
Nice: Using a password manager

It turns out that using “password” or “123456” as your online password does not provide much protection. Yet these were the most common passwords used in 2018. The reason for this is simple: these passwords are easy to remember. Unfortunately, such passwords are similarly easy for bad actors to guess. One solution is to move to a password manager. A password manager is a service that uses encryption to store long, unique passwords across hundreds of your accounts (even generating random passwords on your behalf), while also protecting other vital online information, like answers to security questions. To access this account, you will need to remember only one master password – which should not be “password”! In return for memorizing one complex password, you will be much more secure online. Consumer Reports discusses some key attributes you should look for when selecting a password manager.
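
Under the hood, the idea is standard cryptography: stretch the single master password into an encryption key, then use that key to encrypt every stored credential. A minimal sketch using the Python cryptography library (an illustration of the concept, not how any particular product is implemented):

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a 32-byte encryption key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)  # stored alongside the encrypted vault
vault = Fernet(key_from_master("correct horse battery staple", salt))

ciphertext = vault.encrypt(b"my-banking-password")  # what sits on disk
assert vault.decrypt(ciphertext) == b"my-banking-password"
```

Because only the ciphertext is stored, someone who steals the vault file still has to guess the master password, and the slow key derivation makes that guessing expensive.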

Naughty: Thieves stealing your complex passwords
Nice: 2FA making that more difficult

For your most sensitive information, passwords may no longer be enough. Turn on two-factor authentication (2FA) as an additional security layer. This authentication method can be summed up as a combination of “something you have and something you know.” There are several different forms of 2FA. For example, after you enter your password (especially from a new device or location), your default settings can require that you receive a one-time access code via email, SMS, or call. Only after you enter this code are you able to access your account. Other forms of 2FA involve a physical security key that you carry on your key chain and insert into a USB port after entering your password to help verify your identity.
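
The rotating six-digit codes produced by authenticator apps follow a published standard (TOTP, RFC 6238): your device and the service share a secret, and each derives a short code from the current time. A sketch using only the Python standard library (the secret below is a made-up example):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and app compute the same code from the same shared secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

A stolen password alone is useless without the shared secret, which never leaves your device.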

Naughty: RFID chips sharing your information with nearby scanners
Nice: Protecting your data with a Faraday bag

A Faraday bag protects RFID (radio frequency identification) chips from communicating externally. The bag generally consists of a foam padded nylon outer layer and a specially-designed RF shielding material. When you place devices such as cell phones, credit cards, laptops, or tablets inside the Faraday pouch, they are (generally) unable to receive or transmit potentially disruptive radio frequency signals. The Faraday bag will block cell signals, Wi-Fi, satellite, and Bluetooth frequencies. This pouch is critical for electronics when traveling through hostile areas, but I also keep my passport in a Faraday pouch at all times in an effort to protect the embedded data. (Sadly, I also stay at Starwood hotels.)

Naughty: Insecure public Wi-Fi networks
Nice: Using a strong VPN

We have discussed this before in Techsplanations, but VPNs are critical when accessing public Wi-Fi, so are worth another mention. Use one! Learn what they are and how to access them here.

Also Nice: Using a portable Wi-Fi hotspot

As an alternative to connecting to public Wi-Fi, you can travel with your own portable Wi-Fi hotspot and charger. If you purchase one of these devices, make sure that strangers do not jump onto your hotspot, which can expose you to vulnerabilities, not to mention run through your data plan. You can prevent this by changing the SSID (the hotspot’s default network name) to something random, avoiding dictionary words, creating a strong network password, and enabling the hotspot’s port-filtering and blocking features. While not cheap, a hotspot can allow you to connect multiple devices, which may be less expensive than paying for all of your kids to connect to the hotel Wi-Fi.

Naughty: Spying on people through their web cameras
Nice: Using a web camera cover

I wish we lived in a world where devices were never hacked, but if you want to be sure no one is looking through the camera at your child using her tablet, it’s important to physically cover up the camera lens when it’s not in use. This can be done by simply sticking a small piece of tape over the camera (I personally use a small piece of a sticky note when I need a low-tech solution), or by affixing a sliding camera cover to the lens that can be slid back and forth as needed. I buy these in bulk and put them on all the phones, tablets, and computers in my home. In my experience, they last indefinitely on more stationary cameras, but fall off my cell phone every six months or so, since it is constantly being jarred in a pocket or bag.

Naughty: DNS providers that don’t care about your privacy
Nice: Using a responsible DNS provider

One relatively easy way to improve your privacy online, wherever you go, is to choose a privacy-enhancing DNS provider. DNS is what links the text-based web addresses in your browser’s bar, or the links you click, with the correct IP addresses. Unless you change the settings yourself, your connected devices use the local network’s choice of DNS, potentially leaving parts of your browsing history exposed to snoops or attackers. Fortunately, there are several free, public DNS providers that take extra precautions to protect your DNS queries, such as Quad9 (9.9.9.9) and Cloudflare (1.1.1.1). Cloudflare’s 1.1.1.1 is a good example because it’s private, secure, and fast. Plus there’s an app to make setup and control even easier!
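
Cloudflare's resolver can also be queried over HTTPS, which keeps lookups away from local-network snoops entirely. A sketch of a lookup against its public JSON endpoint (endpoint and response fields as documented by Cloudflare at the time of writing; check current documentation before relying on them):

```python
import requests

# Ask Cloudflare's DNS-over-HTTPS JSON API for cdt.org's address records,
# bypassing the local network's resolver.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "cdt.org", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```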

Naughty: Searching and mirroring electronic devices at the U.S. border
Nice: Being prepared if you are crossing the U.S. border

The U.S. government has stepped up border searches of devices. Check out this post to learn more about what can and cannot be legally accessed at the border. Consider deleting sensitive materials (like work documents) or apps that you don’t want accessed or copied, then reinstall those apps after passing through security. For example, I delete my password manager from my devices before reentering the country.

But it’s not enough to take these steps on your own devices. We encourage you to use the holidays to help educate your relatives – especially children and grandparents – about what they can do to help stay secure online.

We wish you a safe and happy holiday season.

More Techsplanations

Dockless Mobility Pilots Let Cities Scoot Away with Sensitive Data
https://cdt.org/insights/dockless-mobility-pilots-let-cities-scoot-away-with-sensitive-data/ (November 28, 2018)

In Washington, D.C., a day hardly goes by when I don’t come upon multiple scooters parked on street corners, near park benches, or outside my apartment building. Lime, Bird, Spin, Skip, JUMP, and Lyft all have “dockless mobility” operations in the capital. These services generate a tremendous amount of data that could potentially improve transportation infrastructure – and early evidence suggests they are already offering new transportation services to underserved communities in Washington. Cities like Detroit and Los Angeles are racing to create new data standards to collect and analyze this mobility data.

These efforts raise important privacy and security concerns that deserve further consideration as cities across the country launch dockless mobility pilot programs. Next door to D.C., for example, are Alexandria and Arlington, Virginia, which have started their own pilots. These programs are attempting to find answers to new liability issues, ensure scooters are made available equitably, and set expectations about the scale and timeliness of data being provided to local transportation authorities. The Los Angeles Department of Transportation (LADOT) is currently undertaking its own pilot program, which highlights some of the relevant privacy and security issues involved.

LADOT is asking for ongoing, real-time access to trip data for scooters. While the city has suggested it is “respectful of user privacy” because its data standard asks “for no personally identifiable information about users directly,” this sort of trip data by itself is highly revealing. As Justice Sotomayor has acknowledged, tracing people’s movements reveals information that is “indisputably private in nature,” including their intimate relationships and visits to health care providers such as abortion clinics or HIV treatment centers. Monitoring location data also reveals First Amendment-protected activities such as religious and political affiliation. In the wrong hands, this information can be used to stalk or harass riders, compromising their physical safety. Ride-sharing APIs have been abused for things like spying on ex-partners, and a 2016 Associated Press study found that law enforcement officers across the country abused police databases to stalk romantic partners, journalists, and business associates. The risk of harm from exposing this information is particularly high for survivors of gender-based assault and hate-motivated violence.

We also should acknowledge that scooter riders are likely to rely on their scooters for first- or last-mile transportation, riding directly from their homes to their final destinations. This is different from car trips in cabs or Ubers, which often begin or end some distance away from a user’s final destination. This type of data collection raises the specter of surveillance and warrants public discussion about what information must be made available to government officials and at what scale.

For this reason, CDT has written to the Los Angeles Department of Transportation, which is midway through its pilot program, to ask it to provide more information to the public about the privacy and security protections it intends to put in place around this data. LADOT views itself as a leader in dockless mobility, but its guidance for handling mobility data remains limited.

Building on our earlier work on government data demands, we’ve called on the transportation authority to adopt clear and robust privacy and security safeguards. These policies should build on longstanding Fair Information Practices, include appropriate access controls, and address the availability of mobility data to researchers. Specifically, we recommend that LADOT (1) limit access to and use of mobility data for clearly specified purposes, (2) establish a reasonable retention and deletion policy, (3) clarify how this data will be secured or obfuscated to protect against breaches and minimize the likelihood of disclosure of identifiable data, and (4) better communicate these policies and information to riders and the public.

We believe that these pilot programs provide an opportunity for transportation officials to assess how they can achieve legitimate aims while minimizing the amount and granularity of data being collected. Cities must also take careful stock of the types and sensitivity of data they are asking for, determine whether each data type is necessary for enforcement, and consider how information can be obscured to minimize privacy risks. They should also consider the granularity of location information they actually need.

For cities to exercise true leadership in dockless mobility, they must establish policies and procedures that can be followed by cities with fewer resources and less technical capacity or expertise. We hope LADOT will take on this challenge, and we look forward to seeing how dockless mobility programs roll out across the country.

Ok Google, Can You Repeat That?
https://cdt.org/insights/ok-google-can-you-repeat-that/ (June 27, 2018)

Last month, Google launched Duplex – what it calls “a new technology for conducting natural conversations to carry out ‘real world’ tasks over the phone” – in an on-stage keynote presentation at its developer conference. Over the past week, CDT was part of a group of journalists and advocates invited to experience Duplex in a hands-on environment, as Google seeks to relaunch Duplex and address many of the concerns raised earlier. Based on the wide range of public reaction, both expected and unexpected, Google appears to have more fully considered the ethical and privacy critiques it received as it lays out its plan for the development and roll-out of Duplex.

Duplex allows the Google Assistant to make phone calls at a user’s request to complete specific tasks. Duplex’s combination of speech recognition, parsing of user intent, and voice synthesis allows the Assistant to engage with a human by phone using conversational AI. It is limited to an extremely narrow set of tasks (for now): confirming business holiday hours, making a reservation at a restaurant, and booking an appointment at a hair salon. But it is easy to imagine how this technology could be used in any number of scenarios in which the user may not have the time, desire, or ability to engage in a straightforward transactional conversation like booking an appointment, placing an order, or requesting basic information.

This combination of unencumbered speculation about conversational AI and the magical technical capabilities Google put on display in a choreographed on-stage demo generated significant public interest. Most of the negative reaction centered around the ethics of Google going “too far” by trying to trick the call recipient into thinking that they were talking to a human; concerns were also raised that the user’s private data may be disclosed during those phone calls. Google found itself in the unenviable position of trying to answer questions about ethics and privacy from the deep end of the uncanny valley.

Today Google is holding a relaunch of sorts for Duplex. This time it is showing that the company can react and iterate quickly based on public concerns and, more importantly, that it can communicate the thoughtfulness involved in the Duplex development process. The biggest change since the developer conference was Google’s announcement of its AI Principles, which are supposed to guide its “responsible AI innovation.” Transparency and control are now heavily emphasized. Duplex announces its purpose, identifies itself, and states that the call will be recorded at the beginning of every call. The recipient can decline to participate in a recorded call, and a business can opt out of Duplex interactions entirely. When that happens, or in any other scenario that Duplex cannot process, Duplex degrades rather gracefully: it apologizes and ends the call, and a human Google Assistant operator calls back to complete the request.

Significantly limiting tasks that Duplex can complete and being able to fail back to human assistants are only temporary measures, however. Google still aims toward fully automated completion of user requests that are likely to become increasingly diverse and complex. Ethical and privacy concerns will be deeply ingrained in those diverse, complex tasks. From a technical perspective, the speech disfluencies (“ums” and “uhs”) and natural-sounding voices lead to an increased success rate in completing the task. From an ethical perspective, those same features may be taking advantage of and reinforcing cultural biases. For example: by restricting the user’s ability to choose the Duplex voice, Google may decide to use a particular dialect or accent purely on a technical basis of increasing the likelihood of successfully making an appointment. However, that increased success rate may be based on the business employee’s existing bias for or against certain cultural groups based on voice alone. Whether or not Google decides to leverage or challenge existing social biases is a complex internal decision. The gender choice of several voice assistants may offer insight into that decision-making process.

Debate over participation in the call and the ownership of the call data and metadata may now involve four parties: the human user initiating the request, Google placing the call, the business receiving the call, and the human answering the call. That call data is currently subject to Google’s existing data retention and privacy policies for its users, as well as state and federal call recording laws. However, there is an interesting question as to whether an employee should be subject to those user policies, and whether they can or should be able to consent to having their voice recorded, saved, and analyzed by Google as a condition of performing their job duties. Duplex can be a tremendous enabling technology for people with disabilities or any person facing a physical or language barrier that makes talking on the phone a challenge. Would an employee or business declining to participate in a Duplex call alienate those users, or perhaps even run afoul of protections offered by Title III of the Americans with Disabilities Act?

We recently attended Google’s reintroduction of Duplex, which included an ambitious live demo at an actual restaurant with unscripted participants. Duplex was capable of conversing with trained restaurant staff and other untrained call takers (I was certainly in the untrained group) in a natural and effective manner. Duplex even handled my unusual statement of “I’m not sure that I’m allowed to be on a recorded call” by transferring me to a human Google Assistant on an unrecorded line. This was quite a departure from the pre-recorded calls presented at the developer conference, and a strong counter to the speculation about the authenticity of the original demo, the disclosures Duplex makes to call recipients, and the conversational capabilities of Duplex. Google should continue to be just as ambitious in demonstrating its commitment to offering users and third parties transparency and control over their interactions with Google technologies like Duplex.

Protecting Consumers in the Era of IoT – CDT Comments to the Consumer Product Safety Commission
https://cdt.org/insights/protecting-consumers-in-the-era-of-iot-cdt-comments-to-the-consumer-product-safety-commission/ (June 26, 2018)

Authored by CDT Summer Intern Dominic Contreras.

With its recall authority and broad mission to protect consumers, the Consumer Product Safety Commission (CPSC or the Commission) plays an essential role in protecting the public against hazards associated with products such as toys, refrigerators, and lawn mowers. Increasingly, such potential hazards are becoming digital, as products of all kinds incorporate computers and networks to make them “smarter.” As federal agencies explore their role in the digital realm, the CPSC should direct its authority to protecting consumers from the real and growing threats associated with these Internet of Things (IoT) devices.

CDT recently filed comments with the Commission in response to the hearing on IoT and consumer product hazards it held in May 2018. In our comments, we encourage the Commission to consider expanding its definition of hazardization – the process by which a product that would otherwise be safe poses a danger to consumers when connected to the internet through changes in its operational code – to include the interplay between network connectivity, software, hardware, and autonomous decision-making capabilities.

Charged with overseeing the safety of consumer products, the CPSC has an important role to play in policing the wider IoT landscape. The Commission’s main activities involve standards development, oversight, and monitoring, and it is the only agency able to order the mandatory recall of hazardous products. Readers may be familiar with the CPSC – in October 2016, the Commission ordered the recall of approximately 1.9 million Samsung Galaxy Note7 smartphones amid reports of the devices overheating and catching fire.

Discussions about the risks associated with IoT devices often focus on how they can be co-opted for botnet attacks or used for spying and surveillance. As these debates move toward how to properly regulate the IoT, agencies have been pushing up against the limits of their authority and domain expertise. The FDA, for example, has focused on medical devices, while the NHTSA has focused on autonomous vehicles. Meanwhile, the CPSC has been interested in the physical hazards associated with connected devices – think smart toasters catching fire or internet-connected smoke detectors malfunctioning due to a security update.

As the CPSC considers how to mitigate hazards associated with IoT devices, we recommend that the Commission look to existing IoT standards to inform its work; a number of industry- and government-endorsed IoT standards already exist, and we encourage the Commission to consider their applicability in the consumer product space.

Our comments also highlight the consumer safety risks associated with unsupported or abandoned IoT devices. Product defects and hazards are not always readily apparent, and the networked nature of IoT devices gives rise to the possibility that hazards could emerge long after a device is no longer supported but remains in use. Accordingly, the Commission should consider how it will protect consumers and exercise its recall authority when such hazards arise.

Finally, we urge the Commission to engage in enhanced monitoring and oversight of IoT devices. To quickly mitigate product safety hazards and protect consumers, CDT supports a mandatory “Bill of Materials” that lists the component parts for a given IoT device. We also encourage the Commission to include an IoT designation in the National Electronic Injury Surveillance System and in the online form that consumers use to report unsafe products.

Traditionally, the CPSC has considered the myriad data security and privacy issues posed by IoT to be outside its jurisdiction, and more effectively addressed by the Federal Trade Commission (FTC). But we believe having more data cops on the beat is a good thing in this case, and we urge the CPSC to work alongside the FTC and aggressively use its recall authority to address privacy and security harms associated with IoT devices.

Comments to CPSC on the Internet of Things and Consumer Product Hazards
https://cdt.org/insights/comments-to-cpsc-on-the-internet-of-things-and-consumer-product-hazards/ (June 15, 2018)

CDT respectfully submits these comments in response to the Consumer Product Safety Commission’s (CPSC, or the Commission) request for written comments on the Internet of Things (IoT) and consumer product hazards. While there is no doubt that the IoT presents enormous value, poorly designed and inadequately secured devices can present risks to consumers’ safety and can be exploited for costly cyber-attacks.

As the CPSC explores potential safety issues and hazards in IoT, CDT recommends the Commission:

  • Recognize the unique scope and characterization of IoT devices and how this impacts hazardization considerations;
  • Identify existing IoT standards to bolster security practices across different consumer product domains;
  • Collaborate with relevant stakeholders to provide guidance to consumers and manufacturers on IoT-related informational harms;
  • Develop a plan for addressing hazards associated with abandoned and unsupported IoT devices;
  • Track IoT products, including component disclosures and IoT designations, for complaint databases.

When IoT Kills: Preparing for Digital Products Liability
https://cdt.org/insights/when-iot-kills-preparing-for-digital-products-liability/ (April 16, 2018)

Increasingly, objects in our environment are computerized and networked, bringing both the promise and the peril of the internet to our everyday lives. We are starting to see serious harm resulting from errors, attacks, and misdesign of these systems. On the evening of March 18, an Uber self-driving car hit and killed a pedestrian in Tempe, Arizona. Tempe Police say the car was in self-driving mode and there was a safety driver behind the wheel at the time of the incident. The Governor of Arizona has since suspended Uber’s testing of self-driving cars in the state, the National Transportation Safety Board (NTSB) has begun investigating the incident, and Uber has settled the matter with the victim’s family. It’s an unfortunate occasion to reflect on who will be held liable for the harm caused by the failure of products with autonomous capabilities.

Today we are releasing a paper that examines issues in product liability for Internet of Things (IoT) devices to mark the start of a research agenda in this area. We expect that the digital technology industry is about to undergo a process of change akin to what the automobile industry experienced in the 1960s and 70s. Then, as now, insufficient security measures, dangerous design, and the post-design addition of security features were widely accepted industry practice. Those practices had to change as the perils of unsafe cars became obvious – as is increasingly the case today with IoT devices. We summarize the discussion of the paper in the remainder of this post.

Internet connectivity, software, and autonomous capabilities are increasingly integrated into all manner of devices and objects. This Internet of Things ranges from fitness trackers to household appliances, automobiles to critical infrastructure and beyond. The fundamental idea of the IoT is to create ‘smart’ objects that offer greater convenience and efficiency.

However, the benefits of these technological advances also come with risks. It may be more convenient to have a ‘smart’ kettle, but what if such a kettle has buggy software that inadvertently turns the kettle on (or all similar kettles at once!) and starts a fire in the kitchen? What happens if it fails because a factory-set default password allows it to be hacked remotely, starting a fire? How would we know the cause of the failure, who is responsible for that cause, and ultimately who is liable for the damages caused?

The answers to these questions are complex and highly context dependent. For instance, some might argue that remote patching capabilities make the device ‘more secure’. Yet patching also introduces new risks, particularly if the software supply-chain is compromised. What is an acceptable balance of one risk against another in such situations? If there really are only two kinds of companies – “those who have been hacked and those who don’t know that they’ve been hacked” – it seems foreseeable that security measures will be circumvented by malicious third parties. Does this mean that device compromise is inevitable, which may render the devices involved unsafe by design? Finding answers to these and many other questions will be critical over the coming years if this wave of technological change is to deliver the maximum benefit possible without exposing society to unnecessary dangers.

The IoT inherits long-standing cyber security risks. For decades industry has ‘shipped-now, patched-later’ to fix bugs in software. Anti-virus software often has had to be purchased with a new computer, then kept up-to-date, to remain responsive to new threats. When these measures failed in the past, the outcome for users was typically inconvenience and lost time.

Failures of IoT devices, however, have a higher probability of physical injury, property damage or death – especially when these are so-called “cyber-physical systems” that use software and networking to control real-world physical objects, machines, and devices. This raises the possibility of application of law that has not up until now been widely applied to digital technologies: strict products liability.

Strict products liability arises when harm is caused by or threatened by unreasonably dangerous products. One of its purposes is to ensure that the costs of harm to a person – or property of that person or a third party – due to a product are borne by the producer of the product.

Strict products liability legal cases will place an intense focus on various heretofore under-examined elements of the cybersecurity of digital technologies. These cases are likely to examine whether there was a design or manufacturing defect in the IoT product in question (including the software), whether that defect caused the injury or property damage, whether adequate cost was incurred by the producer to identify bugs or implement security measures relative to the damage caused by device failure, and to what extent the incident could be foreseen by the producer (including malicious hacking), among other questions.

Answering these questions will not be straightforward. Digital technologies can be hijacked by malicious third parties, involve complex and thus difficult-to-parse codebases, and possess interdependencies that can result in unpredictable outcomes. When autonomous capabilities are introduced, little-understood risks associated with adversarial perturbations (small modifications to images or objects, which may be imperceptible to the human eye, that lead to misclassification by machine learning and artificially intelligent systems) are also introduced. Government agencies sometimes purchase or develop knowledge of software vulnerabilities – then may lose control of that information, resulting in large attacks when those flaws are maliciously weaponized. The many stakeholders implicated will wrestle with various other technical, legal, and economic issues, as well as contextual elements, as determinations are made as to who pays when smart devices do stupid things.
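
For a concrete sense of what an adversarial perturbation is, one well-known construction from the research literature (the "fast gradient sign" method; an illustration, not necessarily the attack at issue in any particular case) nudges an input $x$ with true label $y$ in whichever direction most increases a model's loss $L$ with parameters $\theta$:

$$x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\left(\nabla_x L(\theta, x, y)\right)$$

Even a perturbation budget $\epsilon$ small enough to be invisible to a human can flip the model's classification, which is part of what makes these risks so hard to reason about in a products liability frame.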

Questions such as these are already on the agenda of policymakers worldwide. The European Commission is considering whether it needs to revise its Product Liability Directive to respond to challenges created by IoT, robotics, and autonomous capabilities. The Japanese government’s Council on Investments has created draft guidelines governing autonomous cars, which represents a concrete step toward a legal framework and toward leading in the creation of international rules in this space.

If policymakers in the United States – at the federal and state level – wish for their country’s companies to remain at the edge of technological innovation, then these issues must be considered and dealt with. The good news is that some discussions are already taking place (e.g., the Consumer Product Safety Commission will soon hold a hearing on IoT risks) and some guidance to individuals on ways to reduce the risks they face has been developed and released.

As we suggest in the concluding section of our paper, a sea change will be required in software development practices so as to identify and remove defects. A minimum set of agreed-upon security practices for IoT products will be required, and these practices will have to be adjusted to suit a wide range of contexts. Development of safety standards for autonomous systems will also be required, which will have to be based on a firmer understanding of the risks of such systems than we possess today. Finally, some difficult questions will have to be answered around the appropriateness of open versus closed source software in certain contexts. If these questions cannot be answered adequately, and the costs of these ‘smart’ devices are disproportionately placed on those least able to avoid or bear them, we may have to rethink whether making devices ‘smart’ is such a smart idea after all.

Read the Report
