Recap: Workshop on IoT Policy Issues

*Posted on behalf of Dr. Susan Landau (Tufts) and Dr. David Choffnes (Northeastern; ProperData faculty)*

The workshop began with a discussion of privacy principles, starting with the Fair Information Practice Principles (FIPPs), which undergird both US privacy regulation (especially at the state level) and Europe’s General Data Protection Regulation (GDPR). At the national level, US privacy law is sectoral (FERPA, HIPAA, etc.), but an increasing number of state laws are focused on general principles. Of late, data minimization (the idea that companies should collect only the data reasonably necessary to perform the service and retain it only as long as needed) is being rediscovered in the US; it was never forgotten in Europe.
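
To make data minimization concrete in code, here is a minimal sketch (the field names and the 30-day retention window are hypothetical, not from the workshop): a backend whitelists the fields it actually needs and stamps each record with an expiry at write time.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical example: a smart-thermostat backend that needs only the
# setpoint and room temperature, retained for 30 days, to do its job.
REQUIRED_FIELDS = {"device_id", "setpoint_c", "room_temp_c"}
RETENTION = timedelta(days=30)

def minimize(raw_event: dict) -> dict:
    """Keep only the fields the service needs, and stamp an expiry."""
    record = {k: v for k, v in raw_event.items() if k in REQUIRED_FIELDS}
    record["expires_at"] = datetime.now(timezone.utc) + RETENTION
    return record

def purge(store: list[dict]) -> list[dict]:
    """Drop records whose retention period has elapsed."""
    now = datetime.now(timezone.utc)
    return [r for r in store if r["expires_at"] > now]

# Usage: extra fields (e.g., precise location) are never stored at all.
event = {"device_id": "t-42", "setpoint_c": 20.5, "room_temp_c": 19.8,
         "lat": 42.34, "lon": -71.09}
print(minimize(event))
```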

The FIPPs most central to Home IoT include Transparency (what data will you be collecting?), Minimization (which data elements will you collect, and how long will you retain them?), Security (how will the data be secured?), and Accountability (how will compliance be ensured?).
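
One hypothetical way to make these four questions machine-readable (the schema below is illustrative, not an existing IoT standard) is a disclosure manifest that a device or its vendor publishes, answering each FIPP directly.

```python
# Hypothetical machine-readable disclosure answering the four FIPPs above.
# All field names and values are illustrative.
DISCLOSURE = {
    "transparency": {                  # what data will you be collecting?
        "data_collected": ["audio_commands", "device_diagnostics"],
    },
    "minimization": {                  # which elements? how long retained?
        "fields": ["command_text", "timestamp"],
        "retention_days": 90,
    },
    "security": {                      # how will the data be secured?
        "transport": "TLS 1.3",
        "at_rest": "AES-256",
    },
    "accountability": {                # how will compliance be ensured?
        "audit_contact": "privacy@example.com",
    },
}
```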

GDPR, though a European regulation, has worldwide impact, since it applies to companies that process data belonging to Europeans. Enforcement is ramping up, with increasing implementation and higher fines for noncompliance. One approach that was mentioned was “Privacy by Design,” a fancy term that simply means designing with privacy principles in mind. Given GDPR, this is likely to be enforced ex post, through regulatory action after a violation.
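
As one minimal sketch of what Privacy by Design can mean in practice (our illustration; keyed-hash pseudonymization is just one such technique), identifiers can be pseudonymized at the point of collection so that the raw value never reaches storage.

```python
import hashlib
import hmac
import os

# Hypothetical: a per-deployment secret key, held separately from the data store.
PSEUDONYM_KEY = os.urandom(32)

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a keyed hash before storage.

    Analysts can still group events by user, but cannot recover the raw
    identifier without the key: a design-time choice, not a bolt-on.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Usage: the raw identifier never enters the analytics pipeline.
print(pseudonymize("alice@example.com"))
```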

The focus then shifted to considering Home IoT and its impact on individuals. Who controls the devices? What if there is abuse in the home? What does privacy mean in such a situation? This appears to be a somewhat unexplored area.

We then turned to national-security implications of smart devices. From such a perspective, concerns include (i) lack of vendor diversity, which is especially important when factoring in who controls the network (there is a danger of lock-in on Home IoT by Tuya and on 5G by Huawei), and (ii) the fact that the privacy risk of personal IoT becomes a national cybersecurity risk when everyone has personal/home IoT devices. Home IoT could also amplify a “splinternet” effect (regionally connected internets instead of one globally connected Internet), because in the US companies effectively own personal data, in the EU users own it, and in China the state does. This could play out in very complex ways geopolitically: at what point do countries refuse to connect to countries with different principles? This is already occurring to some extent, but will home/personal IoT be a further driver in this direction?

Interoperability is also a national-security concern. Do we need to enforce a minimal mandate for interoperability of personal/home IoT devices?

We next turned to how these thoughts affect the design and development of ubiquitous IoT. Are we coming to a turning point (and if so, how quickly?) where users must live with IoT devices, e.g., in a rental unit, a dorm, or a shared housing situation such as a retirement community? If it is no longer a choice for users, and especially not for the least powerful in society (e.g., low-income people), what design principles are critical for such devices? There are many threat models (the landlord, the abusive partner, the nation-state adversary) and all must be taken into account. Who are we designing systems for? Who are we protecting? How do we account for differential impact on users (e.g., the elderly, or someone with an unsafe partner)? The underlying issue is: who is in control of the IoT devices? One panelist remarked that if you have ever seen children fighting over the TV remote or coworkers disagreeing over the office thermostat, you know that government regulation will be really hard.

What should we do as researchers? One answer was to force transparency into the system, though a key question is how, given that IoT devices have limited interfaces for conveying it. Another suggestion was to ensure minimum standards for safety, security, and privacy, akin to the Underwriters Laboratories (UL) model. Compliance tools were seen as another topic of interest.
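
As a sketch of what such a compliance tool might look like (hypothetical; the hostnames are illustrative), one could passively observe the network destinations a device contacts and compare them against what the vendor discloses.

```python
# Hypothetical compliance check: flag traffic destinations that a device's
# vendor disclosure does not mention. Hostnames here are illustrative.
DISCLOSED_ENDPOINTS = {"telemetry.example-vendor.com", "fw.example-vendor.com"}

def audit(observed_hosts: set[str]) -> set[str]:
    """Return contacted hosts absent from the vendor's disclosure."""
    return observed_hosts - DISCLOSED_ENDPOINTS

# Usage: observed_hosts would come from passive traffic capture in practice.
observed = {"telemetry.example-vendor.com", "ads.thirdparty.example"}
print(audit(observed))   # -> {'ads.thirdparty.example'}
```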

We need to think about threat models: what part of the ecosystem is being targeted? The ad system? The machine-learning models? The advertiser?

Another important topic is the use of data collected from IoT. IoT devices provide additional potential for targeting individuals, but with such targeting comes the potential for discrimination, bias, and other unfair practices. When, if ever, should we allow protected features (e.g., gender, race) to be targeted directly? Can we trust an algorithm to target in ways that avoid biases?
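
One empirical starting point for that last question (a minimal sketch using the standard demographic-parity measure; the data below is made up) is to compare a targeting algorithm's selection rates across protected groups.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group rate at which the targeter selected individuals.

    `decisions` pairs a protected-group label with whether the
    algorithm targeted that individual.
    """
    counts = defaultdict(lambda: [0, 0])   # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

# Usage with made-up data: a large gap between groups is a red flag
# even when the protected feature is never used directly.
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
print(selection_rates(data))   # -> roughly {'A': 0.67, 'B': 0.33}
```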

We ended with more questions than answers.
