Personalization of Smart-Devices: Between Users, Operators, and Prime-Operators

Your relationships with your devices are about to get complicated. Remote operability of smart-devices introduces new actors into the previously intimate relationship between the user and the device—the operators. The Internet of Things (IoT) also allows operators to personalize a specific smart-device for a specific user. This Article discusses the legal and social opportunities and challenges that remote operability and personalization of smart-devices bring forth.

Personalization of smart-devices combines the dynamic personalization of code with the influential personalization of physical space. It encourages operators to remotely modify the smart-device and influence specific users’ behaviors. This has significant implications for the creation and enforcement of law: personalization of smart-devices facilitates the application of law to spaces and activities that were previously unreachable, thereby also paving the way for the legalization of previously unregulated spaces and activities.

The Article also distinguishes between two kinds of smart-device operators: ordinary operators and prime-operators. It identifies different kinds of ordinary operators and the modes of constraint they can impose on users. It then offers a normative discussion of how first-order and second-order legal powers should be distributed among ordinary operators.

Finally, the Article introduces the prime-operators of smart-devices. Prime-operators have informational, computational, and economic advantages that uniquely enable them to influence millions of smart-devices and extract considerable social value from their operation. They also hold unique moderating powers—they govern how other operators and users operate the smart-devices, and thereby influence all interactions mediated by smart-devices. The Article discusses the nature and role of prime-operators and explores paths to regulate them.

Published in the DePaul Law Review, Vol. 70, Issue 3 (Spring 2021), pp. 497-549. This paper originated in the Global Tech Law: Selected Topics Seminar.

Download

Transparency as a First Step to Regulating Data Brokers

Over the past few years, a number of legislative bodies have turned their focus to ‘data brokers.’ Data brokers hold huge amounts of data, both personally identifiable and otherwise, but attempts at data regulation have failed to bring them sufficiently out of the shadows. A few recent regulations, however, aim to increase transparency in this secretive industry. While transparency alone will not fully address concerns surrounding the data brokerage industry without additional actionable consumer rights, it is an important and necessary first step.

These bills present a new course for legislatures interested in protecting consumer privacy. The primary effect of these measures is to heighten transparency. The data brokerage industry lacks transparency because these companies have no direct relationship with the consumers whose data they buy, package, analyze, and resell, and consumers have no opportunity to opt out of, correct, or even learn of the data being sold. For companies regulated by the Fair Credit Reporting Act (FCRA), such as traditional credit bureaus, customers have the right to request their personal data and to request corrections if anything is wrong. But most collectors of data are not covered by the FCRA, and in those instances consumers often agree to click-wrapped Terms of Service that include buried provisions allowing the collecting company to resell their data. Customers are left unaware that they have signed up to have their data sold, and with no assurance that the data is accurate.

Concerns with data brokers center on brokers’ relative opacity and the lack of public scrutiny over their activities. They control data from consumers with whom they have no relationship, and in turn, consumers do not know which data brokers may have their data or what those brokers are doing with it. Standard Terms of Service contracts allow the original data collector to sell collected data to third parties and allow those buyers to sell the data in turn. This creates a rapid cascade in which consumers, by agreeing to the terms of service of one company, have allowed their personal data to proliferate to numerous companies of whose existence they may not even be aware. Proposed legislation would increase consumers’ access to information about how their data is being used, shining a light on the data brokerage industry and enabling consumers to limit the unfettered sharing of their data.

This paper was published by the NYU Journal of Legislation & Public Policy. Dillon took the first iteration of the Global Data Law course and subsequently worked as a Student Research Assistant in the Global Data Law project.

Read the paper

The Global “Last Mile” Solution: High-Altitude Broadband Infrastructure

This paper explains the historical reasons for communications infrastructure underdevelopment, taking into account the myriad ways governments, usually through national universal service mechanisms, have attempted to correct the underprovision, and posits why the opportunity to create global broadband infrastructure has now surfaced. In essence, this portion of the paper explains the last mile problem that innovative infrastructure projects purport to solve. It then describes the broadband infrastructure projects, the consequences of multi-jurisdictional regulatory complexities for bringing the projects to market, and the disruptive potential of the infrastructure to change the economics of broadband access and provision. Lastly, it considers whether the companies are indeed solving the last mile problem beyond mere provision. Accordingly, the potential impacts of Internet access are surveyed using Amartya Sen’s capability approach, which seeks to place the individual and his or her freedom at the center of development.

The paper originated in what was then the IILJ Colloquium: “International Law of Google” and is now the Guarini Colloquium: Regulating Global Digital Corporations. It was published in the Georgetown Law Technology Review, Vol. 4 (2019), pp. 47-123.

Download

Safe Sharing Sites

Lisa M. Austin & David Lie

In this Article, Lisa Austin and David Lie argue that data sharing is an activity that sits at the crossroads of privacy concerns and the broader challenges of data governance surrounding access and use. Using the Sidewalk Toronto “smart city” proposal as a starting point for discussion, they outline these concerns, which include resistance to data monopolies, public control over data collected through the use of public infrastructure, public benefit from the generation of intellectual property, the desire to share data broadly for innovation in the public interest, social rather than individual surveillance and harms, and the demand that data use be held to standards of fairness, justice, and accountability. Data sharing is sometimes the practice that generates these concerns and sometimes the practice involved in solving them.

Their safe sharing site approach focuses on resolving key risks associated with data sharing, including protecting the privacy and security of data subjects, but aims to do so in a manner that is independent of the various legal contexts of regulation and governance. Instead, they propose that safe sharing sites connect with these different contexts through a legal interface consisting of a registry that provides transparency in relation to key information that supports different forms of regulation. Safe sharing sites could also offer assurances and auditability regarding the data sharing, further supporting a range of regulatory interventions. The safe sharing site is therefore not an alternative to these interventions but an important tool that can enable effective regulation.

A central feature of a safe sharing site is that it offers an alternative to the strategy of de-identifying data and then releasing it, whether within an “open data” context or in a more controlled environment. In a safe sharing site, computations may be performed on the data in a secure and privacy-protective manner without releasing the raw data, and all data sharing is transparent and auditable. Transparency does not mean that all data sharing becomes a matter of “public” view, but rather that there is the ability to make these activities visible to organizations and regulators in appropriate circumstances while recognizing the potential confidentiality interests in data uses.

In this way, safe sharing sites facilitate data sharing in a manner that manages the complexities of sharing while reducing the risks and enabling a variety of forms of governance and regulation. As such, the safe sharing site offers a flexible and modular piece of legal-technical infrastructure for the new economy.

This paper was prepared for and presented at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. It was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 581-623.

Download

The False Promise of Health Data Ownership

In recent years there have been increasing calls by patient advocates, health law scholars, and would-be data intermediaries to recognize personal property interests in individual health information (IHI). While the propertization of IHI appeals to notions of individual autonomy, privacy, and distributive justice, the implementation of a workable property system for IHI presents significant challenges. This Article addresses the issues surrounding the propertization of IHI from a property law perspective. It first observes that IHI does not satisfy the judicial criteria for recognition as personal property, as IHI defies convenient definition, is difficult to possess exclusively, and lacks justifications for exclusive control. Second, it argues that if IHI property were structured along the lines of traditional common law property, as suggested by some propertization advocates, prohibitive costs could be imposed on socially valuable research and public health activity, and IHI itself could become mired in unanticipated administrative complexities. Third, it discusses potential limitations and exceptions on the scope, duration, and enforceability of IHI property, both borrowed from intellectual property law and created de novo for IHI.

Yet even with these limitations, inherent risks arise when a new form of property is created. When owners are given broad rights of control, subject only to enumerated exceptions that seek to mitigate the worst effects of that control, constitutional constraints on governmental takings make the subsequent refinement of those rights difficult if not impossible, especially when rights are distributed broadly across the entire population. Moreover, embedding a host of limitations and exceptions into a new property system simply to avoid the worst effects of propertization raises the question whether a property system is needed at all, particularly when contract, privacy, and anti-discrimination rules already exist to protect individual privacy and autonomy in this area. It may be that one of the principal results of propertizing IHI would be enriching would-be data intermediaries with little net benefit to individuals or public health. This Article concludes by recommending that the propertization of IHI be rejected in favor of sensible governmental regulation of IHI research coupled with existing liability rules to compensate individuals for violations of their privacy and abusive conduct by data handlers.

Ideas contained in this paper were discussed during the roundtable on data ownership at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. The paper was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 624-661.

Download

Contracting for Personal Data

Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car, or like licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers may not know, or may misperceive, the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims. We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We examine the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects.

This paper was prepared for and presented at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. It was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 662-705.

Download

Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation

In previous work, I wrote about how trade secrecy drives the plot of Roald Dahl’s novel Charlie and the Chocolate Factory, explaining how the Oompa-Loompas are the ideal solution to Willy Wonka’s competitive problems. Since publishing that piece, I have been struck by the proliferating Oompa-Loompas in contemporary life: computing machines filled with software and fed on data. These computers, software, and data might not look like Oompa-Loompas, but they function as Wonka’s tribe does: holding their secrets tightly and internally for the businesses that deploy them.

Computing machines were not always such effective secret-keeping Oompa-Loompas. As this Article describes, at least three recent shifts in the computing industry—cloud computing, the increasing primacy of data and machine learning, and automation—have turned these machines into the new Oompa-Loompas. While new technologies enabled this shift, trade secret law has played an important role here as well. Like other intellectual property rights, trade secret law has a body of built-in limitations to ensure that the incentives offered by the law’s protection do not become so great that they harm follow-on innovation—new innovation that builds on existing innovation—and competition. This Article argues that, in light of the technological shifts in computing, the incentives that trade secret law currently provides to develop these contemporary Oompa-Loompas are excessive in relation to their worrisome effects on follow-on innovation and competition by others. These technological shifts allow businesses to circumvent trade secret law’s central limitations, thereby overfortifying trade secrecy protection. The Article then addresses how trade secret law might be changed—by removing or diminishing its protection—to restore balance for the good of both competition and innovation.

Ideas contained in this paper were discussed during the roundtable on data ownership at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. The paper was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 706-736.

Download

Digital Megaregulation Uncontested? TPP’s Model for the Global Digital Economy

The United States championed the creation of new rules for the digital economy in TPP. Analyzing this effort as “digital megaregulation” foregrounds aspects that the conventional “digital trade” framing tends to conceal. On both accounts, TPP’s most consequential rules for the digital economy relate to questions of data governance. In this regard, TPP reflects the Silicon Valley Consensus of uninhibited data flows and permissive privacy regulation. The paper argues that the CPTPP parties endorsed the Silicon Valley Consensus due to a lack of alternatives and persistent misperceptions about the realities of the global digital economy, partly attributable to the dominant digital trade framing. It suggests a new approach for the inclusion of data governance provisions in future international trade agreements that offers more flexibility for innovative digital industrial policies and experimental data regulation.

This paper was published in Megaregulation Contested: Global Economic Ordering After TPP (edited by Benedict Kingsbury, David M. Malone, Paul Mertenskötter, Richard B. Stewart, Thomas Streinz, and Atsushi Sunami, Oxford University Press 2019), chapter 14 (pp. 312-342).

Read on Oxford Scholarship Online
Download from SSRN