Yvonne Jooste
 Department of Jurisprudence, University of Pretoria

 Volume 55 2022 pp 143-154

1 Introduction

Increasingly, technology is used in the enforcement of legal rules. These changes, in addition to establishing new forms of regulation, have implications for the future functioning of the legal system. Most current debates around technology’s impact on existing legal frameworks centre on self-driving cars and questions of liability. Other popular examples include the United States’ No Fly List, which relies on data mining for predictive analysis of potential national security threats, as well as the use of computer algorithms in judicial decisions relating to criminal sentencing and parole. In the South African context, there are plans to use smart technology, including facial recognition, to maintain law and order (Swart “Eye on Crime” Daily Maverick 2021-03-03). In this regard, many computer scientists, as well as those in the field of critical algorithm studies, have pointed to the possibilities of false arrests, discrimination, and the targeting of innocent citizens through technologies that reflect prejudice and bias rather than eliminating them. Further, technologies such as blockchain and machine learning are progressively moving into the law’s domain (Hassan and De Filippi “The Expansion of Algorithmic Governance: From Code is Law to Law is Code” 2017 Field Actions Science Report 89). For example, “smart contracts” transpose legal and contractual provisions into a blockchain-based agreement that guarantees execution (as above).

“Law is/as code” is a wider term used to describe this form of regulation whereby technology is used in the enforcement of law (Hassan and De Filippi 2017 Field Actions Science Report 88). Law is/as code, or regulation by code, is quickly becoming a regulatory mechanism adopted by private entities as well as public actors in a number of contexts (as above). The use of technology in this way has in general been termed “techno-regulation” - simply, the idea of regulating human behaviour through technology. The digital era has therefore opened the door to new practices of regulation that can impose values by embedding them in technological artifacts such as algorithms (see Swart Daily Maverick 2021-03-03 with regard to predictive policing in the South African context; Citron and Pasquale “The Scored Society: Due Process for Automated Predictions” 2014 Washington Law Review 1-33; Dankert and Schulz “Governance by Things as a Challenge to Regulation by Law” 2016 Internet Policy Review 4-5; Lyon Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination (2003); see also Birhane “The Algorithmic Colonisation of Africa” 2020 Scripted 401 with regard to the digitisation of lending in Africa).

In this note, I critically discuss the idea of law is/as code, or the automation of legal rules. I first provide a brief overview of the idea of law is/as code and highlight some of the major differences in the mode of existence of law as natural language and law as source code. The second section problematises the perceived objectivity, neutrality, and certainty of algorithms, and this is further advanced in the third section by discussing the idea of law as a complex system - a system embedded in and operating within society. Before concluding, the idea of the “uncontract” is discussed in order to ask about the broader implications of law is/as code.

2 Regulation by code

Lawrence Lessig’s famous assertion that “code is law” refers to the fact that code is, ultimately, the architecture of the Internet and is, therefore, capable of constraining an individual’s actions via technological means (Hassan and De Filippi 2017 Field Actions Science Report 89; Lessig Code and Other Laws of Cyberspace (1999)). In general, computer code refers to a set of instructions or a system of rules written in a particular programming language (source code), including algorithms, which act as precise lists of instructions that carry out certain actions systematically. Lessig’s assertion has been analysed through a number of lenses (see Lessig Code, Version 2.0 (2006)). The inverse, “law is code”, refers to the fact that, increasingly, our interactions are governed by software and, as such, technology has become a means to directly enforce different rules (Hassan and De Filippi 2017 Field Actions Science Report 88-89). Regulation by code can be described as an iteration of “law is code”. An example of regulation by code is digital rights management schemes, which transpose the provisions of copyright law into technological measures of protection, thereby restricting the usage of copyrighted works (as above). Another well-known example, as mentioned above, is the use of “smart contracts” - self-executing contracts in which the terms and agreements between the parties are directly written in lines of code (Hassan and De Filippi 2017 Field Actions Science Report 90). The agreement and the code exist across a distributed, decentralised blockchain network, and the code, therefore, controls the execution of the contract. Transactions are trackable and irreversible.
Smart contracts thus allow for transactions and agreements to be carried out amongst anonymous and disparate parties without the need for a central authority such as the legal system or an external enforcement mechanism such as the courts.
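The self-executing character of a smart contract can be sketched, purely as an illustration, in a few lines of Python. This is not actual blockchain code (real smart contracts are typically written in a language such as Solidity and run on a distributed network); the escrow scenario, names, and amounts below are hypothetical and serve only to show how the code itself, rather than a court, enforces the agreement.

```python
# Minimal sketch of a smart contract's self-executing logic.
# The class, parties, and price are hypothetical illustrations.

class EscrowContract:
    def __init__(self, seller, buyer, price):
        self.seller = seller
        self.buyer = buyer
        self.price = price
        self.paid = 0
        self.released = False

    def deposit(self, amount):
        """The buyer pays into the contract, not to the seller directly."""
        self.paid += amount
        self._execute()  # enforcement is automatic, not discretionary

    def _execute(self):
        # Release happens if and only if the coded condition is met;
        # there is no option to breach and no third party to call on.
        if self.paid >= self.price and not self.released:
            self.released = True

contract = EscrowContract(seller="A", buyer="B", price=100)
contract.deposit(40)   # condition not yet met: nothing is released
contract.deposit(60)   # condition met: execution is immediate and automatic
print(contract.released)  # True
```

The point of the sketch is the absence of any external enforcement step: once the coded condition holds, performance follows mechanically, which is precisely what distinguishes the smart contract from a conventional agreement backed by courts.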

The Internet of Things has also increasingly formed part of the discussion of regulation by code (Schulz and Dankert 2016 Internet Policy Review 1). It is argued that this phenomenon will in the future have an impact on all spheres of life, specifically, restrictions on and determination of our behaviour by an architecture of devices that form part of our everyday life (Zuboff The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) 293-294). Technically, the Internet of Things refers to the combining of technologies such as sensors, actuators, data processing, and communication into a bundle with new usability (Dankert and Schulz 2016 Internet Policy Review 2). In this regard, legal academic discourse revolves around specific questions on existing legal frameworks and the possibility of the creation of new legal concepts, especially as it involves the legal concept of liability (as above). Self-driving cars, as mentioned, are often discussed in the context of liability and the necessity for new concepts (Dankert and Schulz 2016 Internet Policy Review 1). To be clear, algorithmic regulation, or regulation by code, already finds expression in a number of sectors (see Hassan and De Filippi 2017 Field Actions Science Report 88). The broader questions regarding regulation by code involve how the law should and can react to far-reaching technological developments, and for the purposes of this discussion, how human behaviour is to be regulated by these developments.

At this juncture, it is important to explain some of the main differences in the mode of existence of law as written text or natural language and law as coded language or algorithmic regulation. Firstly, law as written text functions ex post facto whilst coded or algorithmic regulation operates ex ante. In this regard, Schulz and Dankert (2016 Internet Policy Review 2) explain that the normative factors that influence and determine human behaviour on the internet have been categorised according to at least four governance factors: social norms, law, contracts, and code. Social norms, law, and contract (as a surrogate of law) tell people what they should or should not do. If people act against these rules, they will be sanctioned either socially (through, for example, social isolation) or through the consequences that the law stipulates. Code, however, essentially describes the circumstances shaped by hardware and software, thereby setting a framework of “behaviour in virtual spaces by defining options and limits of interaction” (Dankert and Schulz 2016 Internet Policy Review 2). Code can also nudge people towards certain behaviours to increase the likelihood of compliance (as above). Code is self-executing (Hassan and De Filippi 2017 Field Actions Science Report 90). Unlike social norms and law, it defines the environment for user behaviour. As opposed to traditional legal rules that determine what a person should or should not do, code, as technical rules, determines what a person can or cannot do (Dankert and Schulz 2016 Internet Policy Review 2). The need for a third party (such as the courts or police forces) to intervene is therefore eliminated - regulation by code is ex ante (as opposed to the ex post enforcement of law), making breach or transgression nearly impossible. Hildebrandt (Smart Technologies and the End(s) of Law (2015); “Saved by Design? The Case of Legal Protection by Design” 2017 Nanoethics 308) explains that with the advent of computational technology, regulative rules are becoming constitutive rules - while regulative rules leave open the option to either follow the rules or ignore them, constitutive rules only permit the action to be taken if the criteria the rules define are fulfilled. Simply put, regulation by code essentially creates a digital environment where certain actions are permitted and others are not, and this architecture is also designed to promote certain behaviours over others.
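Hildebrandt’s distinction between regulative and constitutive rules can be made concrete with a deliberately simple sketch. The speed-limit scenario, function names, and figures below are hypothetical illustrations, not descriptions of any real system: a regulative rule can be broken and sanctioned ex post, whereas a constitutive rule embedded in code makes the transgressing action impossible ex ante.

```python
# Hypothetical sketch of regulative v constitutive rules.
# The limit and functions are illustrative assumptions only.

SPEED_LIMIT = 120  # km/h

def drive_regulative(requested_speed):
    """Regulative rule: the law states a limit, but the driver can
    still exceed it; the sanction (a fine) follows ex post."""
    fined = requested_speed > SPEED_LIMIT
    return requested_speed, fined  # the transgression itself can occur

def drive_constitutive(requested_speed):
    """Constitutive rule embedded in code: a speed governor caps the
    car ex ante, so exceeding the limit is simply not possible."""
    return min(requested_speed, SPEED_LIMIT), False  # no breach can occur

print(drive_regulative(150))    # (150, True): breach occurs, sanction follows
print(drive_constitutive(150))  # (120, False): the architecture forecloses breach
```

In the regulative case the norm and the behaviour remain separate, leaving room for violation, interpretation, and adjudication; in the constitutive case the norm is the environment itself, which is what the note means by code determining what a person can or cannot do.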

Secondly, a number of consequences relate to the difference between the natural language of modern law and the language of computer code and algorithms. Law is written text or natural language and is, therefore, inherently flexible and ambiguous. Coded language consists of technical rules that are highly formalised. As such, it is argued that regulation by or through technology is possibly more effective than law, or compensates for the so-called weakness of law (Dankert and Schulz 2016 Internet Policy Review 2); regulation by code, it is argued, brings a variety of benefits, mostly related to its above-mentioned characteristics of automating law and enforcing rules and regulations a priori (Hassan and De Filippi 2017 Field Actions Science Report 89). Importantly, however, regulation by code lacks the complex interaction between abstract norms and the unique case at hand that “makes each application of the law in itself a construction of law” (Dankert and Schulz 2016 Internet Policy Review 8). Written laws and norms can be refined each time a legal text’s meaning is newly construed in light of the specific facts of a case. Simply put, coded language, as highly technical and formalised language, lacks the distance or discontinuity that is a consequence of natural language - i.e., the distance or discontinuity between legal rules and norms and their interpretation and application in an individual case.

The third aspect relates to the unique character of human decision-making. In human decision-making, psychologists distinguish between tacit and explicit knowledge (Dankert and Schulz 2016 Internet Policy Review 7-8). This refers to the fact that human decision-making is often based on tacit knowledge that is hard to verbalise, or as Polanyi explains, “we can know more than we can tell” (The Tacit Dimension (1966) 4). Theoretical models can be used to create an abstract idea about explicit and implicit knowledge and “their interaction and their influence on gathering knowledge” (Dankert and Schulz 2016 Internet Policy Review 7-8). Algorithms always use explicit knowledge in reaching conclusions - “the fact that no-one can describe the human decision-making process in every detail, because it is for the greater part based on tacit knowledge, [leads] to the idea that complex interaction between humans and [between humans and] their environment cannot be overtaken by algorithms” (as above). Algorithms thus reach their limit when rules must be reasonably ignored, recast, or redefined.

Following the discussions above, it has been argued that the restricted environment that code produces, as well as its ex ante nature, can in some cases produce more just outcomes precisely because code relies on highly formalised and technical language, thereby avoiding the flexible and ambiguous nature of natural language and the discontinuity between its interpretation and application, as well as circumventing the fallibility and unpredictability of human decision-making in judicial arbitration (Hassan and De Filippi 2017 Field Actions Science Report 88-90). This argument, however, assumes that coded and algorithmic language is more objective, unbiased, and neutral, and that algorithms can lead to more certain and controlled outcomes. The section below problematises these contentions.

3 Law is code and algorithmic opinions

Schulz and Dankert (2016 Internet Policy Review 10) state that every technology is a product of human work; “infallible technology will never be invented.” Cathy O’Neil shares the same sentiment in her influential work Weapons of Math Destruction ((2016) 21) when she states that “algorithms are opinions embedded in code”. As it concerns the wide use of algorithms, Berry (“Trouble Shooting Algorithms” 2020 McMaster Journal of Communication 92) explains that “algorithms control our smallest, most miniscule choices, to our largest, life-defining decisions”. From home-loan approvals to university rankings, online advertising, law enforcement, human resources, credit lending, insurance, social media, politics, and consumer marketing - algorithms operate within these systems, “collecting, segmenting, defining” in all spheres of human life (Berry 2020 McMaster Journal of Communication 92). The above assertions highlight the fact that like all technological artifacts, code is not neutral but inherently political as it can support certain structures or facilitate certain actions or behaviours over others (Winner “Do Artifacts Have Politics?” 1980 Daedalus 121-136). Further, the algorithmic era has brought to light the massive power amassed through algorithmic operations and its real consequences on our lives. Algorithms filter, curate and dictate information consumed by the public, and as such, profoundly shape “lives and outcomes as a consequence” (Willson “Algorithms (and the) Every Day” 2017 Information, Communication and Society 142). These consequences are a result of the “specific choices of platform operators and software engineers seeking to promote or prevent certain types of action” (Hassan and De Filippi 2017 Field Actions Science Report 89). 
Further, as Cinnamon (“Social Injustice in Surveillance Capitalism” 2017 Surveillance and Society 615) argues with regard to algorithmic classification in Big Data operations, “inaccuracies that might affect an individual are only problematic insofar as they are ineffective at accurately predicting patterns and behaviours”, and as Cohen (“The Biopolitical Public Domain: The Legal Construction of the Surveillance Economy” 2017 Philosophy and Technology 14) argues: “As long as [the] project is effective on its own terms - an outcome that can be measured in hit rates or revenue increments - partial (or even complete) misalignments at the individual level are irrelevant.” As Cinnamon (2017 Surveillance and Society 611-612, 616) also states: “Inability to secure a loan, mortgage, job, or health insurance due to inaccurate placement in a ‘risk’ category is clearly unfair, however the accuracy of the classification is perhaps unimportant in the context of social justice - accurate or not, personal scoring systems ‘make up people’ [...] they produce new social categories of difference and restrict our ability to shape our own sense of self, a clear threat to parity of participation in social life.” This notion is sometimes also referred to as “data doubles” - the idea that each person has a digital duplicate of their life captured in data and spread across the assemblages of information systems, and these data doubles are consequential when it comes to the choices we can make (615).

In the work mentioned above, O’Neil (2016) calls for transparency from the companies that build many consequential algorithms, arguing that algorithms need to be disassembled, scrutinised, and reassembled according to human and democratic values. Such operational transparency is urgently needed, as many scholars have demonstrated that data-driven decision-making is often implicitly biased (Hardt “How Big Data is Unfair: Understanding Unintended Sources of Unfairness in Data Driven Decision Making” 2014 Medium 1-3). For example, allegedly neutral algorithms have been shown to systematically discriminate against certain groups of people by employing generalisations and showing results which may be catalogued, for instance, as racist and sexist (Noble Algorithms of Oppression: How Search Engines Reinforce Racism (2018); O’Neil (2016)). Algorithmic classification can easily reinforce oppressive social relationships and enact discriminatory models of profiling (Noble (2018); O’Neil (2016)). The emerging field of critical algorithm studies interrogates the hegemonic narratives and frameworks of corporate and state-controlled digital communication technologies, algorithmic operations, and machine learning technologies (see Pasquale The Black Box Society: The Secret Algorithms That Control Money and Information (2015)). These technologies do not somehow operate above society, but rather frequently reflect the political, social, and cultural environment in which they operate; from training machine learning models on unrepresentative datasets and opaque access to the data upon which these models are built, to algorithms that prey on the most vulnerable segments of society through predatory lending and advertising practices (O’Neil (2016) 3, 72). Algorithms, therefore, have become an issue of social concern (Royakkers, Timmer, van Est and Kool “Societal and Ethical Issues of Digitization” 2018 Ethics and Information Technology 127-142).
In this regard, Noble ((2018) 9) has contended that bias is not merely the result of “coding errors” but rather part of the very architecture and language of certain technologies, and, as such, is systemic and entangled with operations of discrimination. To put it simply, these technological innovations are not distinct or separate from the culture, politics, and ideologies within which they operate.
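The mechanism by which a data-driven system inherits the bias of its environment can be illustrated with a deliberately crude toy model. The data, groups, and “model” below are entirely invented for illustration; the point is only that a system which treats a skewed historical record as ground truth will reproduce the skew as a rule rather than eliminate it.

```python
# Toy illustration (hypothetical data) of how a model trained on
# historically biased decisions reproduces that bias.

from collections import defaultdict

# Past lending decisions: (group, approval) pairs. The historical
# record is itself skewed against the invented group "B".
history = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

def train_majority_rule(records):
    """'Learn' the majority past outcome per group - a caricature of
    data-driven decision-making that treats history as ground truth."""
    tallies = defaultdict(list)
    for group, outcome in records:
        tallies[group].append(outcome)
    return {g: round(sum(v) / len(v)) for g, v in tallies.items()}

model = train_majority_rule(history)
print(model)  # {'A': 1, 'B': 0}: the skew in the data becomes the rule
```

Nothing in the code mentions prejudice, yet the learned rule denies every future applicant from group “B”; this is the sense in which bias is part of the architecture rather than a “coding error”.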

To be clear, scholarly research demonstrates that algorithmic injustice is not the result of a few rogue private entities that use predatory and discriminatory practices for profit-maximisation. Rather, because of the false belief that algorithms are neutral and value-free, algorithmic classification and profiling are rife in almost all economic and social sectors.

Critical algorithm scholars have thus demonstrated that algorithms have embedded values and bias and lead to social sorting and discrimination; that algorithms align with and advance specific ideological worldviews; that they are fundamentally human in their use and design; and that with algorithmic classification comes automation, rationalisation, quantification, and the erasure of human judgment, complexity, and context (Gillespie and Seaver “Critical Algorithm Studies Reading List” 2016 https://socialmediacollective.org/reading-lists/critical-algorithm-studies/#1.1 (last accessed 2022-05-01)). Importantly, even if regulation by code were made operationally transparent, algorithms are complex technical assemblages that take considerable time and expertise to map. Further, as political and human artifacts, they do not necessarily lead to more efficient, expedient, and definitive outcomes. The next section analyses this idea from the perspective of complex systems.

4 Law as a complex social system

In complex systems, the structure of a system emerges spontaneously as a result of the interactions of the component elements in the system as they encounter information. Although complexity theory developed in the natural sciences, it has been used and applied to a number of disciplines, including economics, social sciences and the law (Murray, Webb and Wheatley Complexity Theory and Law: Mapping an Emergent Jurisprudence (2019) 1-2). Complexity theory understands law as an “emergent, self-organising system in which an interactive network of many parts - actors, institutions and ‘systems’ - operate with no overall guiding hand, giving rise to complex collective behaviours that can be observed in patterns of law communications” (Murray, Webb and Wheatley (2019) 3).

According to this conceptual framework or way of thinking, legal complexity is a fact of the world, and the tools we currently possess to make sense of the law are insufficient to understand the limits of our knowledge of the law. It is not possible to give a full account of legal complexity here. In its most basic terms, this view recognises that the system of law comprises a myriad of different individual actors, institutions, communications, rules, principles, and norms whose complex interplay ultimately makes up the functioning of the law. In this way, the system of law is continually defined and redefined by the complex interplay between all the “parts” of the law (courts, practitioners, legal rules, precedent, judges, clerks, documentation as communication, etc.). As such, it becomes nearly impossible to describe every detail of the system or to describe the functioning of law in its entirety. We are hard pressed to understand the interplay and the consequences of the innumerable actions and communications that are law. In more general terms, systems thinking requires understanding that there are limitations to our knowledge and that, as such, all models are essentially wrong (Sterman “All Models are Wrong: Reflections on Becoming a Systems Scientist” 2002 System Dynamics Review 501).

Malan and Cilliers (“Gilligan and Complexity: Reinterpreting the ‘Ethic of Care’” 2004 Acta Academica 1-20) explain that social systems, and by implication the legal system, are complex and embody the characteristics of complex systems in general. In complex systems, an enormous number of individuals interact constantly in a rich and dynamic way (Malan and Cilliers (2004) 3):

The importance of seeing society as a complex system - instead of as a chaos that needs to be ordered - is that it recognises and gives importance to the multitude of contingent relationships that exist in society and to the dynamic interaction between these relationships. Thus there is a move away from an overemphasis on universal issues to an ‘appreciation’ of the singular.

Malan and Cilliers further connect the idea of complexity to ethics - to underestimate or disregard the complexity of social and legal systems is not simply a technical error but an ethical mistake (as above; Malan and Cilliers rely on Jacques Derrida, see “Force of Law: The ‘Mystical Foundation of Authority’” in Cornell, Rosenfeld and Carlson (eds) Deconstruction and the Possibility of Justice (1992)). The authors argue that complexity, as a matter of ethics, recognises that justice is concerned with continually redrawing the boundaries of the legal system, and that society consists of complex relationships between individuals and is therefore too complex to be thought of merely in terms of rules and procedures (Malan and Cilliers (2004) 10-11). In a sense, the more abstract our understanding of rules, principles, and systems becomes, the less we can recognise the complexity and contextuality at play in societies and appreciate societal and individual relationships.

As mentioned, the language of code is technical language, rigid and highly formalised, as opposed to the flexibility and ambiguity of natural language. Hildebrandt (2017 Nanoethics 307) has pointed out that natural language “facilitates shared meaning [...] as well as the ability to disrupt such meaning by means of creative resistance or simply by generating successful misunderstandings that - in turn - lead to subtle or not so subtle shifts in meaning.” As mentioned, this implies that law as code lacks the complex interplay between abstract norms and the specific case at hand that results in each application of the law itself being a “construction of the law” (as above). Shifts in meaning and new legal constructions are how the legal system redraws its own boundaries. Therefore, natural language enables the contestation of received opinion because it builds on what Hildebrandt (2017 Nanoethics 308) has termed “a semantics that is always on the move”, which is not at all evident in the case of computer language; put differently, “[o]ne consequence of the specific hermeneutics of law is a certain flexibility that is at least not inherent to algorithms” (Dankert and Schulz 2016 Internet Policy Review 8). If this view is related back to coded or algorithmic regulation, then from the perspective of complex systems, law is/as code represents even higher forms of abstraction and erases, to a certain extent, the complexity, contextuality, and singularity involved in legal decision-making. Highly abstract coded rules that operate to establish architectures of possible choices and courses of action a priori circumvent, to a large extent, the interpretation of legal rules and norms and their application to specific and singular situations.

Before concluding, I discuss the idea of the “uncontract”. Zuboff’s explanation of this notion captures the complexity and contextuality of societal relationships and the functioning of the legal system, and it serves to ask: what is at stake in algorithmic and coded regulation?

5 The uncontract and the utopia of certainty

In 2014, Hal Varian, Chief Economist for Google Inc. (“Beyond Big Data” 2014 Business Economics 27-31), described what he termed a “new contract form” (Zuboff (2019) 333). Varian referred to the example of someone who stops making their monthly car payments, stating that “nowadays it’s a lot easier just to instruct the vehicular monitoring system not to allow the car to be started and to signal the location where it can be picked up” (Zuboff (2019) 333; Varian 2014 Business Economics 29). To be sure, the use of technology as described in Varian’s example circumvents entirely a number of costly and time-consuming formal legal processes. Zuboff, however, reflects on Varian’s assertion by referring to the institution of the contract as the “making of a promise in the joining of wills” (as above), an idea that has little to do with efficiency. Contracts originated, according to Zuboff (as above), as shared “islands of predictability” intended to mitigate uncertainty for the human community, and contract law supports and shapes the social practice of making and keeping promises and agreements. Further, contract law “reflects a moral ideal of equal respect for persons”, and this is why it can produce genuine legal obligations instead of merely being a system of coercion (Zuboff (2019) 333).

For Zuboff, the new contract form described by Varian is in reality an “uncontract” that abandons the “human world of legally binding promises and substitutes instead the positivist calculations of automated machine processes” (as above). Of course, as Zuboff asserts, the institution of the contract has been abused in every age - from the slave contract to the exploitation of property ownership, “as incumbent power imposes painful inequalities that drain the meaning, and indeed the very possibility, of mutual promising” (as above). However, for Zuboff, the new contract form described by Varian bypasses human promises and social engagement and aims instead for a condition of “contract utopia” - “a state of perfect information known to perfectly rational people who always perform exactly as promised” (Zuboff (2019) 334). For Zuboff (as above), the sociality of the institution of the contract may at times entail conflict, oppression, coercion, or anger, but it also produces human trust, cooperation, cohesion, and adaptation. In this sense, the “uncontract” transforms the human, legal, and economic risks of contracts into plans constructed and maintained for the sake of guaranteed outcomes (as above).

Zuboff relays the story of an elderly married couple in Illinois, US, who owed their credit union $350 for their car, a 1998 Buick (Zuboff (2019) 335). The credit union enlisted the help of a repossession service. The local man charged with repossessing the car was disturbed to find that the elderly couple could not make their payments on the Buick because they had to buy expensive medications for health conditions. The local man, Jim Ford, offered to pay the couple’s debt and started an online fund-raising appeal that led to financial help for the couple. Zuboff poses the question of what would have happened without Jim Ford’s involvement, had the new contract form merely instructed the vehicular system of the car not to start - “[t]he algorithm tasked to eliminate the messy, unpredictable, untrustworthy eruptions of human will would have seized the old Buick”, and the logic behind the uncontract is the “drive toward certainty [that] fills the space once occupied by all human work of building and replenishing social trust, which is now reinterpreted as unnecessary friction in the march toward guaranteed outcomes” (as above). The story ultimately went ‘viral’, and for Zuboff (as above) it served as a reminder of the “most cherished requirements of [human] life: our shared assertion [that finds] expression in the joining of wills in mutual commitment to dialogue, problem solving, and empathy.”

Further, in relation to the “uncontract”, Zuboff ((2019) 339) has stated the following:

Uncertainty is not chaos but rather the necessary habit of the present tense. We choose the fallibility of shared promises and problem solving over the certain tyranny imposed by the dominant plan because this is the price we pay for freedom to will, which founds our right to the future tense. In the absence of this freedom, the future collapses into an infinite present of mere behaviour, in which there can be no subjects and no projects: only objects.

Zuboff’s ((2019) 351-382) concerns above relate to “instrumentarian power” - the power of governments and corporations to use technology and digital infrastructures to shape human behaviour in predictable ways, a power that seeks certainty, expediency, and efficiency. In this way, the uncontract represents a certain instrumentality that erases uncertainty and complexity in human behaviour and, by implication, our ability to make certain decisions. For Zuboff, this instrumentality threatens freedom, or what she terms “the will to will”: “Most simply put, there is no freedom without uncertainty; it is the medium in which human will is expressed in promises - which is the right to the future tense” (333). This sentiment is echoed in complex systems thinking, which resists seeing society as chaos in need of ordering. Zuboff ((2019) 398-399) essentially calls for a reclaiming of the freedom to promise, to will, and to resist “a utopia of certainty” - digital architectures that determine the choices we can make and the ways in which we can live.

It should be noted that the type of instrumentality and drive to certainty that she describes closely relates to the project of datafication - the rampant contemporary phenomenon that seeks to quantify human life through digital information or the wider transformation of human life so that its elements can be a continual source of data (Mejias and Couldry “Datafication” 2019 Internet Policy Review 1-10, 2). Datafication can also be viewed within the framework of neo-liberal logic, which in general involves the quantification and economisation of societies, neutering uncertainty (see Brown Undoing the Demos: Neoliberalism’s Stealth Revolution (2015) 17, 31-32). These frameworks are outside of the scope of this discussion but nonetheless point to broader corporate and governmental tendencies to regulate, classify, and sort individuals and their activities through technological means. Within these broader frameworks and their drive toward certainty, the automation of legal rules appears to be inevitable given the rampant tendency for ‘efficiency’ and for solving complex social, political, and economic problems through technological means (see also Birhane “The Algorithmic Colonisation of Africa” 2020 Scripted 398).

6 Conclusion

Law is/as code is primarily discussed around the desirability of the following broader categories or characteristics: ex ante v ex post facto enforcement; the positivist and highly abstract language of code v the ambiguity and flexibility of natural language; pre-emptive computing v the fallibility of judicial decision-making; and perceived expediency, efficiency, and certainty v the complexity, interplay, and uncertainty that characterise human relationships. Of course, it is more likely that, with technological advancement, automated law or law is/as code and existing (or traditional) legal frameworks will operate and function simultaneously, in a manner that interrelates and interconnects all the characteristics described above. At the very least, law is/as code currently allows us the opportunity for renewed debate regarding the meaning(s) of justice and the means, ends, and purposes of legal systems. This debate concerns, specifically, the positivistic and calculated nature of coded and automated legal rules as opposed to more transcendental conceptions of the relationship between law and justice - conceptions that highlight the affordances of natural language in recasting, redrawing, and reinventing legal norms and rules, as well as the significance of the law’s discursive contestability for our understanding of justice. It also allows us to ask: what is at stake in the reduction of legal rules to an instrument for the achievement of objectives?

To be clear, the idea of different legal realities or different modes of the existence of legal rules such as automated legal rules cannot be dismissed from the outset. Historically, the technology of written text and its subsequent accessibility changed external environments and humans’ cognitive abilities (see Hildebrandt Smart Technologies and the End(s) of Law (2015) 48-49). The written word reconfigured all aspects of civilisation including the law. Hildebrandt (as above) explains that a number of aspects of legality have been inherited from the affordances (the constraints and possibilities associated with a specific technology) of the printing press, including the discursive contestability of law. Similarly, law is/as code and the broader advances in machine learning, algorithmic operation, and artificial intelligence will result in major consequential shifts in the external environment, our cognitive abilities, and the means, ends, and functioning of legal systems. As technologies change, the mode of existence of law will also change.

My contention here is that we should approach with scepticism advances in automating the law that are based on assertions of neutrality, objectivity, and certainty. Just as critical legal studies, critical race theory, feminist legal studies, and other theoretical perspectives have demonstrated the political embeddedness of the natural language of modern law and its advancement of certain hegemonic worldviews, so too have those in the field of critical algorithm studies demonstrated that algorithmic functioning and its source code reflect bias and advance existing architectures of oppression. These technologies are not separate from the culture, politics, and ideologies within which they operate and, what is more, do not produce perfect and definitive outcomes. They are fundamentally human in their operation and design. Further, it is necessary to ask about the broader implications of increased technological regulation, instrumentality, and datafication through digital infrastructure, specifically as they concern our understanding of freedom and our ability to decide on our own courses of action.