Characteristica Universalis Lex: Artificial Intelligence and the Ghosts of LegalTech Past

Friday, October 30, 2020, 13:00 to 14:00
Zoom: https://mcgill.zoom.us/j/81640916943
Price: Free.

For our first AI and Law talk of the university year, we welcome Dr Christopher Markou, Faculty of Law, University of Cambridge.


Abstract

The question 'is law computable?' immediately recalls the classic jurisprudential question, 'what is law?', posed by legal pragmatists and idealists alike. For tough-minded pragmatists, the question 'what is law?' might entail little more than a prediction of whether those in authority will or will not stop a planned action or penalise a completed one. This pragmatic approach appeals to business, including the burgeoning LegalTech industry, because efficiency (read: throughput) is the name of the game. After all, commercial clients don't concern themselves with esoteric legal values, and non-lawyer clients may not even recognise them. Rather, the question is really whether some law enforcement body or judge will stop, penalise, or reward the action. If law is reframed as the task of predicting behaviours and proactively intervening, the skills needed to practise law may become similarly circumscribed, more formulaic, and more readily computable.

But what does computable law really portend about the future of legal regimes premised on due process, equality of arms, and fairness?

Thought leaders in computational legal studies, and those straddling the line between legal academia and entrepreneurship, are quick to tout the ability of their models to best human experts at some narrow game of foretelling the future by doing yesterday's homework. Most often this involves predicting whether the U.S. Supreme Court or the European Court of Human Rights, for instance, will affirm an appealed judgment based on some set of variables about the relevant jurists. For reductionist projects in computational law (particularly those that seek to replace, rather than complement, legal practitioners), traces of the legal process are equivalent to the process itself. If a machine produces a judgment that is in some way persuasive, we should accept it, goes one refrain.

But do we not teach our students that, in law, the process of exercising legal judgement is inseparable from the resulting judgment? Isn't the process the exercise?

For enthusiastic LegalTech developers, the answer is "no". The words in a complaint and an opinion, for instance, are taken to be the essence of the proceeding, and variables gleaned from decisionmakers' past actions and affiliations are taken to determine their subsequent ones. In this behaviouristic rendering, litigants present pages of words to the decisionmaker, some set of pages better matches the decisionmaker's preferences, and the decisionmaker then tries to write a justification of the decision sufficient not to be overturned on appeal. From the perspective of the client, predictions that are 30 per cent more accurate than a coin flip, or 20 per cent more accurate than casually consulted experts, are not just useful; they are seen as the future. But there is more to law and legal process than can be computationally imputed, and there are limits to public trust in, and acceptance of, so-called 'Robot Judges' and the automation of ever more aspects of legal process and judgement. The human and reputational toll of automating human discretion 'out of the loop' has become acutely clear from the Australian 'RoboDebt' fiasco, the UK's use of a proprietary algorithm to award marks for classes curtailed by COVID, and Canada's use of biometrics to assess refugee claims.

This talk will first examine the history of computers and AI in legal contexts, focusing on the hype around Legal Expert Systems (LES) from the 1980s and 1990s through to the current generation of LegalTech applications. Drawing on first-hand accounts from lawyers, developers, and researchers, the talk will survey the technical, practical, and theoretical seeds of that failure and what can be learned from it. It will then turn to an examination of how concurrent developments in neuroscience, physics, biology, and data science are actualising a machinic ontology of the world whereby everything, including law, is computable. The talk will conclude with recommendations for research priorities in computational legal studies and suggestions for where to draw 'red lines' around automating legal process or judgement.

About the speaker

Dr Christopher Markou is a Leverhulme Fellow and Lecturer in the Faculty of Law, University of Cambridge, an Associate at the Cambridge Centre for Business Research (CBR), Director of the AI, Law & Society LLM at King's College London, and a Fellow of the Royal Society of Arts. He writes widely on the policy and governance of emerging technologies, with work featured in outlets such as Scientific American, Newsweek, and Wired. Christopher has been a keynote speaker at the Cheltenham Science Festival, the Cambridge Festival of Ideas, TED Talks, and the World Congress on Information Technology. He is co-editor (with Professor Simon Deakin, Cambridge) of a forthcoming edited volume (Hart 2020) and author of the forthcoming monograph Lex Ex Machina: From Rule of Law to Legal Singularity.

The AI and Law Series

The AI and Law Series is brought to you by the Montreal Cyberjustice Laboratory; the McGill Student Collective on Technology and Law; the Private Justice and the Rule of Law Research Group; and the Autonomy Through Cyberjustice Technologies Project.

This event is eligible for inclusion as 1 hour of continuing legal education as reported by members of the Barreau du Québec.
