III. Cyber Governance & Global Accord

Given the long record of international agreements among states, it is reasonable to review three questions:

  1. Why do countries enter into international accords?
  2. What are the structural barriers, or dilemmas, impeding our understanding of the commitments and responsibilities of the signatories?
  3. What can be done to create more transparency and lift any "veil of uncertainty"?

To the first question, the answer is clear: states enter into global accords to avoid common aversions, and/or to pursue common interests. In the case of AI and its future, states are nearly uniform in seeking to avoid consequences that undermine security, safety, and sustainability.

To the second question, the answer is obvious but seldom acknowledged as such. All agreements (as all policies and laws) are written in text form – word after word, sentence after sentence, segment after segment – and thus all result in a linear representation of intents and commitments. Linearity per se does not reflect the realities of human existence, social interactions, or agreements of any form on any issue. At a minimum, it is imperative to create transparency in the structure and content of the text in ways that accurately represent the details of the accord.

This leads us to the third question: What can be done? The answer to this question begins with the use of analytics for policy – with special attention to visual representations – as illustrated in an earlier section.


As noted earlier, policy documents are usually written in linear text form – word after word, sentence after sentence, page after page, section after section, chapter after chapter – which often obscures some of their most critical features.

Text cannot easily convey interconnections among elements, nor reveal any underlying, "hidden" features. In response, our current research focuses on a computational approach to policy documents, with application to seminal works situated at the intersection of cyberspace and international law.

In the course of developing the methods for the Science of Security project, noted above, we explored the use of diverse policy texts as the "raw data" for analysis, such as:

  • International Law for Cyber Operations: Tallinn Manual 2.0
  • European Union, General Data Protection Regulation
  • Budapest Convention on Cybercrime

Here we note briefly the analysis and results for the Tallinn Manual 2.0 and for the Convention on Cybercrime.

Tallinn Manual for Cyber Operations 2.0

Despite major innovations in the construction and management of the Internet (the core of cyberspace) – or perhaps because of the remarkable expansion of its global reach – the international community is now on the verge of a major challenge: how to frame the relationship between international law and cyberspace. Our analysis of Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Schmitt 2017) is undertaken through the lens of computational logic informed by complexity theory. This seminal work, of nearly 600 pages, can best be seen as an interpretation of international law. At this point, however, it is viewed as a set of norms. If these norms become widely accepted, they will assume the status of international law. 

Tallinn Manual 2.0 extends and supersedes the legal principles put forth in Tallinn Manual on the International Law Applicable to Cyber Warfare (Schmitt 2013) to include public international law operations in times of peace. Framed by a group of experts convened by the NATO Cooperative Cyber Defence Centre of Excellence, the Manual reflects the state of the law at the time of its publication.

For our research, the value-added of complexity coupled with computation – generic in frame and in form – is to (a) create transparency in the system, for the "whole" and for its "parts," (b) generate new ways of analyzing policy texts, (c) extend conventional views of the policy system, and (d) explore contingencies such as, "what if...?" Our approach consists of a chain of computational moves, each intended to generate specific outputs, and each designed to identify different properties of the legal corpus.

The text of Tallinn Manual 2.0 serves as the "raw data" for our investigation. In a document of nearly 600 pages, text-as-conduit imposes a form of sequential logic in an otherwise complex and interconnected set of directives.

The results generate a range of system network views and point clearly to the statistical dominance of specific Rules, the centrality of select Rules or Articles, the Rules or Articles with autonomous standing, and the location of authority, as well as the feedback structure that holds the system together. None of these is readily discernible in text form. Here we present one case to illustrate the process.

Here we note only one model of Rule dominance, which differentiates nodes by degree of centrality – determined by the eigenvector centrality of a node which, in turn, is shaped by the centrality of the Rules to which it is connected. The result, in the figure below, is a network model of Rule centrality, derived from "neighborhood," yielding system-wide salience.
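The "neighborhood" logic can be sketched computationally. The following is a minimal illustration, not the study's actual pipeline: it runs power iteration over an invented toy network of Rule cross-references (the Rule names and links are hypothetical) to compute an eigenvector centrality, in which each node's score is shaped by the scores of its neighbors.

```python
# Minimal sketch of eigenvector centrality via power iteration.
# The "Rule references Rule" links below are invented for
# illustration; they are not the Manual's actual citation structure.
from collections import defaultdict

links = [
    ("Rule 1", "Rule 4"), ("Rule 2", "Rule 4"), ("Rule 3", "Rule 4"),
    ("Rule 4", "Rule 68"), ("Rule 66", "Rule 68"),
    ("Rule 71", "Rule 68"), ("Rule 66", "Rule 71"),
]

# Build an undirected adjacency structure.
neighbors = defaultdict(set)
for a, b in links:
    neighbors[a].add(b)
    neighbors[b].add(a)

# Power iteration: repeatedly set each node's score to the sum of its
# neighbors' scores, then renormalize, until the vector stabilizes.
score = {n: 1.0 for n in neighbors}
for _ in range(100):
    new = {n: sum(score[m] for m in neighbors[n]) for n in neighbors}
    norm = max(new.values())
    score = {n: v / norm for n, v in new.items()}

top = sorted(score, key=score.get, reverse=True)
print(top[:2])  # the most "central" Rules in this toy network
```

In this toy network the top-ranked Rule is not the one with the most direct links but the one embedded in the best-connected neighborhood, which is precisely the system-wide salience effect described above.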

Network Model of Tallinn Manual 2.0.
Source: Choucri and Agarwal (2020).

The figure highlights several notable features of the Manual.

First, the greatest concentration of high centrality nodes (five of the top ten) is located in Part I on International Law.

Second, although Part III – on Cyber Activities, Peace, and Security – hosts considerably fewer high-centrality Rules than Part I, some of its Rules attain notably high centrality scores. Rule 68 on the "prohibition of threat or use of force" tops the list; Rule 66 on "intervention by states" ranks third; Rule 71 on "self-defence against armed attack" and Rule 76 on the role of the UN Security Council rank fifth and sixth, respectively.

Third, only one high centrality Rule (of the top ten) is situated in Part IV on the "law of armed conflict," namely, Rule 92 located in Chapter 17 on "conduct of hostilities." This Rule defines a cyberattack as a cyber action that causes injury or death.

The logic of Tallinn Manual 2.0 assumes the absence of any significant difference between the structure of the international system and its legal principles on the one hand, and the networked system of cyberspace and its operational principles, on the other. In retrospect, it is clear that until very recently cyberspace has been a matter of low politics for the state system as a whole. Now that cyberspace has been catapulted to the highest levels of high politics, the international community as a whole faces a common dilemma: how to manage the cyber domain in a world where sovereignty is no longer the sole operating authority system.

Further, by definition, legal systems are structured to resist pressures for rapid change. Equally, by definition, all matters "cyber" transcend any efforts to limit the rates of change for any aspect thereof. We recognize that Tallinn Manual 2.0 was not designed to "fit" the characteristic features of cyberspace, but rather to develop legal bases for its management in relations between states – during war and during peace. While states are increasingly able to control Internet access and content transmission, the principle of sovereignty is yet to be fully aligned with the extent to which global communication networks and cross-border information flows are managed by non-state entities.

More detailed analysis, in collaboration with Gaurav Agarwal, amply demonstrates how the principle of sovereignty pervades and dominates all aspects of International Law Applicable to Cyber Operations and, by extension, how authority is vested in the state (see Choucri and Agarwal [2020]).

Reference:

Choucri, Nazli, and Gaurav Agarwal. 2020. "Complexity and Transparency: Toward International Law for Cyber Operations." CyberPolitics@MIT Working Paper No. 2020:1. Cambridge, MA: Massachusetts Institute of Technology.

Convention on Cybercrime

The Convention on Cybercrime is the first international treaty on crimes that are committed via the Internet, including violations of network security. It contains a set of active procedures enabling the search of computer systems and networks. Signed in 2001 and entered into force in 2004, it is a landmark agreement that can provide important directives for a future Global Accord on Artificial Intelligence.

The figure below shows the components of the Convention and their connections. This is the simplest representation of text transformed into a network view. The nodes represent specific articles, and the lines are the connections as expressed in the text of individual articles. The color code on the right side provides the information required for differentiating between components of the Convention. Note that external documents are those referred to in the text.
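As a minimal sketch of this text-to-network transformation, the snippet below scans each article's text for mentions of other articles and records each mention as an edge. The article snippets are invented placeholders for illustration; the actual analysis works over the full text of the Convention.

```python
# Minimal sketch: turn treaty text into an edge list by scanning each
# article for references to other articles. The article snippets are
# invented placeholders, not the Convention's actual wording.
import re

articles = {
    "Article 2":  "Each Party shall adopt such measures ...",
    "Article 19": "... search powers consistent with Article 2 ...",
    "Article 23": "... cooperation under Article 19 and Article 25 ...",
    "Article 25": "... mutual assistance as provided in Article 23 ...",
}

# Pattern matching internal cross-references such as "Article 19".
ref_pattern = re.compile(r"Article \d+")

edges = set()
for source, text in articles.items():
    for target in ref_pattern.findall(text):
        # Keep only references to other articles within the treaty.
        if target != source and target in articles:
            edges.add((source, target))

for source, target in sorted(edges):
    print(f"{source} -> {target}")
```

The resulting edge list is the raw input for a network view of the figure's kind; coloring nodes by Convention section, and adding nodes for external documents cited in the text, would be layered on top.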

Network View of the Convention on Cybercrime.
Source: Choucri and Agarwal (2021).

Even with the most limited form of visualization, we can identify at least three critical imperatives of the Convention that signal important directives for robust international agreements. Specifically, the Convention:

  1. Adopts an inclusive definition of cyber harm and damages, with no loopholes and no exceptions.
  2. Focuses on national action and internal initiatives to reduce disconnects, build capacity, and enable collaborative responses.
  3. Reinforces the scope of international accords by referring to, and leveraging, previous agreements, as well as extending cross-domain scope.

These imperatives will be especially important if, and when, the international community moves toward framing the next round of accords.

Reference:

Choucri, Nazli, and Gaurav Agarwal. 2021. "The Dynamics of Cyberpolitics." CyberPolitics@MIT Working Paper No. 2021:2. Cambridge, MA: Massachusetts Institute of Technology.

Artificial Intelligence has become a global technology with global ramifications. It is pervasive, affecting, influencing, and empowering almost all aspects of human activity nearly everywhere. It is not too early to consider its as-yet-unforeseen ramifications, in unexpected areas and with unexpected consequences. At the same time, without guidelines or directives, the undisciplined use of AI poses risks to the well-being of individuals and creates possibilities for economic, political, social, and criminal exploitation.

In the effort to formulate a viable approach to the management of AI and its future, the Boston Global Forum (BGF) has explored several avenues toward the framing of an international accord.

AIWS 7-Layer Model

In collaboration with Michael Dukakis, Nguyen Anh Tuan, Thomas Patterson, David Silbersweig, and John Savage, Boston Global Forum's AI World Society (AIWS) has developed the AIWS 7-Layer Model. This brief note supports the discussion of the Social Contract for the AI Age at the Riga Conference on November 12, 2020, and highlights the necessity of forging a Global Accord on Artificial Intelligence.

This model establishes a set of norms and best practices for the development, management, and use of AI so that this technology remains safe, humane, and beneficial to society. AIWS recognizes that we live in a chaotic world with differing and sometimes conflicting goals, values, and norms. Hence, the 7-Layer Model is aspirational and even idealistic. Nonetheless, it provides a baseline for guiding AI development to ensure positive outcomes and to reduce the pervasive and realistic risks, and related harms, that AI could pose to humanity.

The model is based on the assumption that humans are ultimately accountable for the development and use of AI, and must therefore preserve that accountability. Hence, it stresses the transparency of AI reasoning, applications, and decision-making, which will lead to auditability and validation regarding the uses of AI systems.

Social Contract for AI Age (with AIWS)

The Social Contract for the AI Age is designed to establish a common understanding for policy and practices, anchored in general principles to help maximize the "good" and minimize the "bad" associated with AI.

In collaboration with Michael Dukakis, Nguyen Anh Tuan, Thomas Patterson, and David Silbersweig, we framed the Social Contract for the AI Age – derived from the 18th century concept of a social contract – as an agreement among the members of society to cooperate for social benefits.

The Social Contract for the AI Age is focused on the conditions of the 21st century. It is a response to artificial intelligence, big data, the Internet of Things, and high-speed computation. It seeks to build a world where all are recognized and valued, and all forms of governance adhere to a set of values and are accountable and transparent. It is a world where global challenges are met by collective action and responsibility.