The Beijing Effect: China's 'Digital Silk Road' as Transnational Data Governance

China shapes transnational data governance by supplying digital infrastructure to emerging markets. The prevailing explanation for this phenomenon is “digital authoritarianism,” whereby China exports not only its technology but also its values and governance system to host states. Contrary to the one-size-fits-all digital authoritarianism thesis, this Article theorizes a “Beijing Effect,” a combination of “push” and “pull” factors that explains China’s growing influence in data governance beyond its borders. Governments in emerging economies demand Chinese-built digital infrastructures and emulate China’s approach to data governance in pursuit of “data sovereignty” and digital development. China’s “Digital Silk Road,” a massive effort to build the physical components of digital infrastructure (e.g., fiber-optic cables, antennas, and data centers) and to enhance the interoperability of digital ecosystems in these developing states, materializes the Beijing Effect. Its main drivers are Chinese technology companies that increasingly provide telecommunication and e-commerce services across the globe. The Beijing Effect contrasts with the “Brussels Effect,” whereby companies’ global operations gravitate towards the EU’s regulations. It also deviates from US efforts to shape global data governance through instruments of international economic law. Based on a study of normative documents and empirical fieldwork conducted in a key host state over a four-year period, we explain how the Beijing Effect works in practice and assess its impact on developing countries. We argue that “data sovereignty” is illusory, as the Chinese party-state retains varying degrees of control over the Chinese enterprises that supply digital infrastructure, and we urge the development of legal infrastructures commensurate with digital development strategies.

Published in New York University Journal of International Law and Politics (JILP), Vol. 54, Issue 1, pp. 1-92.

NYU Law’s US-Asia Law Institute (USALI) published an essay about the paper entitled “Understanding China’s Growing Influence in Global Data Governance: Looking beyond US-China Relations”.

The paper draws on ideas from Guarini Global Law & Tech’s Global Data Law project and the MegaReg and InfraReg projects hosted by the Institute for International Law and Justice.

Potential Expropriation Claims Against Data Sharing Requirements

This paper explores potential expropriation claims against data sharing requirements. It finds that, in formulating a viable claim of expropriation against mandatory data disclosures, the nature of the disclosure requirement matters. If the disclosure is likely to substantially affect the investor’s ability to benefit from the investment, it is likely to be considered an expropriation. Because most data-driven businesses derive an economic benefit from their data through revenue and profit, an expropriation is likely to be found where follow-on disclosure to third parties of data collected under a mandatory disclosure regime substantially impairs the investor’s ability to derive revenue and profit from that data.

This paper was published as a commentary in the New York University Journal of International Law & Politics, Vol. 54, Number 1 (Fall 2021), p. 249. The paper originated in the Global Data Law course.

Human Rights in a Use Case World

Digital engineers diagram ‘use cases’ to design software, based on the practical needs of the quotidian product user rather than big normative claims. Human rights lawyers work in the reverse direction, starting from principles of universal application and then applying these to hard cases. These two modes of thinking and practice have existed separately. Digital automation of government services using algorithms and AI is bringing them abruptly together and into mutual learning. The chapter examines controversies and court decisions over digital welfare state programmes in Australia (Robodebt), the Netherlands (SyRI), and the United Kingdom (Universal Credit), highlighted by Philip Alston as UN Special Rapporteur. The normative practice of human rights must grapple with data concentration and computerized decisions wherever power is exercised. The chapter proposes ‘thinking infrastructurally’ as a path to bring human rights thinking into the fast-escaping public–private practices of algorithmic government and machine learning.

This paper has been published in The Struggle for Human Rights: Essays in honour of Philip Alston (Nehal Bhuta, Florian Hoffmann, Sarah Knuckey, Frédéric Mégret, and Margaret Satterthwaite eds., Oxford University Press 2021).

Milling the F/LOSS: Export Controls, Free and Open Source Software, and the Regulatory Future of the Internet

This Note investigates U.S. export controls as they relate to free and open source software (FOSS), arguing that the U.S. government has responded to the challenges of modern software by attempting to force an ill-fitting framework to accommodate FOSS. A contemporary reexamination of the state of export controls over FOSS can help in mapping out the responses generated by national security interests to the challenges of the internet. In particular, the Note offers a detailed account of the ways in which federal export controls have excluded FOSS from their regulatory purview through a powerful public availability exemption. In doing so, regulators have essentially labeled publicly available software as unthreatening to national security, regardless of the potential uses of any particular code.

This paper has been published by the NYU Journal of Legislation & Public Policy, Vol. 23, Issue 3 (2021). It originated in the Guarini Colloquium: Regulating Global Digital Corporations and also contributed to the Open Source Software as Digital Infrastructure project.

Indicators 2.0: From Ranks and Reports to Dashboards and Databanks

The World Bank headquarters atrium as depicted by Jaakko H., licensed CC-BY-SA.

In September 2021, the World Bank Group’s management announced its decision to discontinue one of its most notable and controversial products – the Doing Business Report. Michael Riegner welcomed the death of indicators as a technology of governance, noting that we are now in the era of “governance by data”. The proliferation of digital data, increased reliance on sensing technologies, the creation of digital products by international organizations, and the funding of large-scale digital infrastructure projects (e.g., e-government, e-health) by multilateral development banks, including the World Bank, are ushering in new forms of global governance. Riegner suggests that this turn to digital technologies and computational capacity for big data analytics is one of the reasons for indicators’ demise:

“why use aggregated indicators based on expert surveys when you can digitally collect and process actual raw data, disaggregated all the way down to the smallest unit of relevance?”

If by this question Riegner intimates that indicators – understood as a “named collection of rank-ordered data that purports to represent the past or projected performance of different units…[wherein] data are generated through a process that simplifies raw data about a complex social phenomenon” (see here) – can be written off as a technology of governance, his dismissal may be too swift. First, the kind of “raw” data that would be required to make accurate assessments may not be readily available. Moreover, if commensurability is to be achieved, one would require access to roughly similar types of data for each unit of analysis – no small feat given the unequal availability and distribution of data across countries, within countries, and between public and private actors. Second, even if global governance actors increasingly embrace differentiated governance that is tailored to specific actors or entities, there will be continued demand for metrics and representations that simplify and translate complex data into legible and comparable information. Third, as Riegner himself acknowledges, other prominent indicators like PISA, the Human Development Index, the Rule of Law Index, and Freedom Scores continue to exist. Whether their influence is declining, as Riegner suggests, remains to be seen.

The World Bank itself is showing no sign of giving up on the production of indicators. At the same time, how indicators are disseminated has changed: the World Bank has turned to dashboards as a means of presenting and contextualizing indicators, and has “datafied” indicators, making them accessible as data through the DataBank. The Bank has also begun experimenting with new methodologies, embracing open-source “big data” to construct indicators.

These changes – dashboardization, datafication, and the turn to “big data” as a source for indicators – alter not only how indicators are produced and used (and by whom), but also how they govern, shifting and re-constituting the sites of expertise and power. The cancellation of the Doing Business report thus might not be evidence of the demise of indicators but a consequence of a shift (begun several years earlier) towards a different process of indicator construction and dissemination that, in turn, implicates different means by which governance effects are achieved.

This blogpost was published by the Völkerrechtsblog as a response to Michael Riegner’s “The End of Indicators”. It draws on ideas developed in the Institute for International Law & Justice projects on indicators as a technology of global governance and on infrastructures-as-regulation.

The Evolution of European Data Law

This new chapter for the 3rd edition of Paul Craig and Gráinne de Búrca’s Evolution of EU Law conceptualizes European data law as an area of EU law that gravitates around but transcends data protection law. It traces the origins of the EU’s data protection law to national and international antecedents, stresses the significance of recognizing data protection and privacy in the EU’s Charter of Fundamental Rights, and explores the gradual institutionalization of data protection law through exceptionally independent data protection authorities, firmly embedded data protection officers, and emergent structures for supranational coordination. It then contrasts the EU law on personal data with the EU law on non-personal data and scrutinizes two other domains of European data law that intersect in complicated ways with data protection law: data ownership laws and access to data laws. European data protection law has been globally diffused through extraterritorial application, conditionalities for transfers of personal data, international agreements, and the “Brussels Effect,” but whether the EU will retain its role as a global data regulator is far from certain. As the European Commission executes its data strategy, it needs to move beyond simplistic understandings of data as a resource, recognize the salience of data infrastructures, and confront the reality that data is more than a regulatory object.

The chapter draws on ideas from Guarini Global Law & Tech’s Global Data Law project.

Artificial Intelligence and International Economic Law

Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.)

Artificial intelligence (AI) technologies are transforming economies, societies, and geopolitics. Enabled by the exponential increase of data that is collected, transmitted, and processed transnationally, these changes have important implications for international economic law (IEL). This edited volume examines the dynamic interplay between AI and IEL by addressing an array of critical new questions, including: How to conceptualize, categorize, and analyze AI for purposes of IEL? How is AI affecting established concepts and rubrics of IEL? Is there a need to reconfigure IEL, and if so, how? Contributors also respond to other cross-cutting issues, including digital inequality, data protection, algorithms and ethics, the regulation of AI use cases (autonomous vehicles), and systemic shifts in e-commerce (digital trade) and industrial production (fourth industrial revolution).

This book is available as a physical object (hardcover) for purchase from Cambridge University Press and freely available (open access) as an electronic copy on Cambridge Core.

A book review by Anupam Chander and Noelle Wurst has been published by the Journal of International Economic Law. They conclude: “This book is an important contribution to our understanding of the way that international economic law governs AI. It will certainly be a foundational text for future work.”

A further book review by Gabrielle Marceau and Federico Daniele has been published by the World Trade Review. They say: “… Artificial Intelligence and International Economic Law promises to become a seminal work on AI and international law and to open the path for future research and publishing on the matter.”

China’s Influence in Global Data Governance Explained: The Beijing Effect

In today’s global economy, digital data enable transnational communication, serve as a resource for commercial gain and economic development, and facilitate decision-making by private and public entities alike. As questions of control over digital data have become flashpoints in global governance, Chinese technology companies and the government of the People’s Republic of China (PRC) increasingly shape and influence these contests. The “Digital Silk Road”, through which the PRC promises “connectedness” in the digital domain alongside the physical transport capacity of the land- and sea-based planks of the Belt and Road Initiative (BRI), manifests the PRC’s aspirations to facilitate digital development in host states. The prerequisite digital infrastructure investments are orchestrated by its gigantic technology companies, which are acquiring an increasingly prominent presence abroad.

In our article “The Beijing Effect: China’s ‘Digital Silk Road’ as Transnational Data Governance”, which is forthcoming with the New York University Journal of International Law and Politics, we analyze China’s growing influence in global data governance. The term “Beijing Effect” pays homage to Anu Bradford’s account of the EU’s global regulatory influence as the “Brussels Effect”, which is said to be particularly prominent in the digital domain, where the EU’s General Data Protection Regulation (GDPR) has been heralded as a global benchmark for multinational corporations and a template to be emulated by countries without comprehensive data protection laws. Even the PRC sometimes follows in the GDPR’s footsteps, as illustrated by the draft Personal Information Protection Law (PIPL), which – together with the Data Security Law – is set to complement China’s existing data governance framework, which revolves around cybersecurity. Like the GDPR, the PIPL is set to apply to personal information handling outside PRC borders when the purpose is to provide products or services to people within the territory of the PRC or when conducting analysis or assessment of their activities. In this way, both the GDPR and the PIPL apply extraterritorially in recognition of the Internet’s cross-jurisdictional reach. While such parallels must be recognized, their effects must not be overstated or equated. We concur with Professor Bradford that Beijing will not be able to replicate the Brussels Effect, which occurs when globally operating corporations choose to amplify European law. However, we posit that a Beijing Effect of a different kind is already materializing and might gain further strength since the COVID-19 pandemic has revealed the global economy’s reliance on digital infrastructures.

Our account of the Beijing Effect explains how the PRC is increasingly influencing data governance outside its borders, in particular in developing countries in need of digital infrastructures and with only nascent data governance frameworks. Indeed, the most consequential vector may be the construction, operation, and maintenance of digital infrastructure by major Chinese technology companies. More than twenty years after Lawrence Lessig’s famous insight that “code is law,” the creators of the hardware and software that penetrate and regulate our digitally mediated lives are increasingly based in Beijing, home to Baidu and ByteDance; Hangzhou, where Alibaba is based; or Shenzhen, where Huawei and Tencent are headquartered. As their digital infrastructures become ingrained in the social, economic, and legal structures of host states, they affect where and how data flows, and, by extension, how people communicate and transact with, and generally relate to, other individuals, the private sector, and public authorities.

At the same time, the PRC challenges the Silicon Valley Consensus, which heralded the unconditional desirability of the “free flow” of data, and instead promotes “data sovereignty” as a leitmotif for international and domestic data governance. This tension materializes in the “digital trade” and “electronic commerce” chapters of recent megaregional trade agreements: While members of the Trans-Pacific Partnership (TPP) can challenge the necessity of data transfer restrictions and data localization requirements under threat of dispute settlement proceedings, the Regional Comprehensive Economic Partnership (RCEP) agreement allows its members to self-assess which restrictions they deem necessary.

As some governments in BRI host states seem drawn towards the dual promise of social control and economic development as reflected in the PRC’s transition towards a digitally-advanced techno-authoritarian society, a critical reevaluation of extant digital development narratives and China’s self-representation as an alternative center for global governance is warranted. Our account of the Beijing Effect is one piece in this larger puzzle, which requires more theoretically informed and empirically grounded research into China’s unique approach to law and development.

This blog post was initially published by the Machine Lawyering Blog hosted by the Chinese University of Hong Kong (CUHK). It is reposted here with permission since the original post is no longer available.


Personalization of Smart-Devices: Between Users, Operators, and Prime-Operators

Your relationships with your devices are about to get complicated. Remote operability of smart-devices introduces new actors into the previously intimate relationship between the user and the device—the operators. The Internet of Things (IoT) also allows operators to personalize a specific smart-device for a specific user. This Article discusses the legal and social opportunities and challenges that remote operability and personalization of smart-devices bring forth.

Personalization of smart-devices combines the dynamic personalization of code with the influential personalization of physical space. It encourages operators to remotely modify the smart-device and influence specific users’ behaviors. This has significant implications for the creation and enforcement of law: personalization of smart-devices facilitates the application of law to spaces and activities that were previously unreachable, thereby also paving the way for the legalization of previously unregulated spaces and activities.

The Article also distinguishes between two kinds of smart-devices operators: ordinary and prime-operators. It identifies different kinds of ordinary operators and modes of constraints they can impose on users. It then normatively discusses the distribution of first-order and second-order legal powers between ordinary operators.

Finally, the Article introduces the prime-operators of smart-devices. Prime-operators have informational, computational, and economic advantages that uniquely enable them to influence millions of smart-devices and extract considerable social value from their operation. They also hold unique moderating powers—they govern how other operators and users operate the smart-devices, and thereby influence all interactions mediated by smart-devices. The Article discusses the nature and role of prime-operators and explores paths to regulate them.

Published in the DePaul Law Review, Vol. 70, Issue 3 (Spring 2021), pp. 497-549. This paper originated in the Global Tech Law: Selected Topics Seminar.

Transparency as a First Step to Regulating Data Brokers

Over the past few years, a number of legislative bodies have turned their focus to ‘data brokers.’ Data brokers hold huge amounts of data, both personally identifiable and otherwise, but attempts at data regulation have failed to bring them sufficiently out of the shadows. A few recent regulations, however, aim to increase transparency in this secretive industry. While transparency alone, without additional actionable consumer rights, will not fully address concerns surrounding the data brokerage industry, it is an important and necessary first step.

These bills present a new course for legislatures interested in protecting consumer privacy. The primary effect of these measures is to heighten transparency. The data brokerage industry lacks transparency because these companies do not have direct relationships with the consumers whose data they buy, package, analyze, and resell, and there is no opportunity for consumers to opt out of the sale, correct their data, or even know that it is being sold. For companies regulated by the Fair Credit Reporting Act, such as traditional credit bureaus, customers have the right to request their personal data and to seek corrections if anything is wrong. But most collectors of data are not covered by the FCRA, and in those instances consumers often agree to click-wrap Terms of Service that include buried provisions allowing the collecting company to resell their data. Customers are left unaware that they have signed up to have their data sold, and with no assurance that the data is accurate.

Concerns with data brokers center on brokers’ relative opacity and the lack of public scrutiny over their activities. They control data from consumers with whom they have no relationship, and in turn, consumers do not know which data brokers may have their data, or what they are doing with it. Standard Terms of Service contracts allow the original data collector to sell collected data to third parties, and allow those buyers to sell the data in turn, which creates a rapid cascade in which consumers, by agreeing to the terms of service of one company, have allowed their personal data to proliferate to numerous companies of whose existence they may not even be aware. Proposed legislation would increase consumers’ access to information about how their data is being used, shining a light on the data brokerage industry and enabling consumers to limit the unfettered sharing of their data.

This paper was published by the NYU Journal of Legislation & Public Policy. Dillon took the first iteration of the Global Data Law course and worked subsequently as a Student Research Assistant in the Global Data Law project.