JJUA55235U Artificial Intelligence and Legal Disruption
Society stands on the cusp of unprecedented, even unfathomable, change as the maturation of decades of scientific research and technological development promises to unleash waves of brilliant technologies in the near future. Few fields hold the prospect of seismic societal disruption like artificial intelligence and robotic technologies: their impending shift from science fiction to daily reality holds the potential to inundate society with a flood of fundamental challenges. It is important to emphasise that these projected disruptions to almost every sphere of human activity originate from a relatively tight cluster of emerging technologies. That this broad array of challenges emanates from a single source provides unique opportunities and problems for engaging with and addressing AI and its attendant disruptions to society.
Converging with these conventional conceptions of artificial intelligences based upon silicon substrates and the computer sciences are models of artificial intelligence arising from progress in neuroscience that are biologically inspired or which integrate the biological with the artificial. Thus, an expanded conception of artificial intelligence encompasses the spectrum from produced (manufactured) to reproduced (replicated) versions, with hybrid intelligences occupying the space in between. This range creates further challenges for developing appropriate regulatory responses, but also greater opportunity for providing regulatory feedback by testing the consistency and coherence of legal and policy responses which have been framed by unspoken presumptions and implicit characteristics.
As human beings have long been accustomed to being the dominant form of intelligence on earth, there has been little consideration of how to accommodate other intelligent entities into human (and therefore anthropocentric) societies. This sets the stage for artificial intelligences to disrupt forms of human-centred organisation, such as law. Such disruption can occur in particular areas, in varied and unforeseen ways, but can also be systemic. An important subset of anthropocentric modalities of organisation are those concerned with regulation and governance, and in particular the legal systems which are relied upon in these domains. Why might the prospect of artificial intelligence disrupt the law, regulation and governance? Why might particular manifestations and applications of artificial intelligence disrupt discrete legal areas? And why might legal disruptions be more subtle and pervasive than they might appear at first glance?
It is this potential for artificial intelligences to disrupt legal principles, processes and procedures that forms the focal point of evaluation and examination in this course. Legal disruption forms the filter through which the issues embraced in this course percolate: only artificial intelligences, or their manifestations, which are capable of fundamentally displacing legal presumptions or which systemically distort the functioning of the regulatory system will be considered. Thus, artificial intelligences and their manifestations must raise structural or systemic challenges to governance to be included in this course. This is a necessarily high threshold, but in order to test whether an artificial intelligence or its impact passes muster, we will of course also discuss issues which might ultimately fall short of the mark.
Given the emphasis upon legal disruption, this course constantly aims at a dynamic target: as legal and policy responses to challenges posed by artificial intelligences are overcome or otherwise settled, these issues lose their disruptive effect and fall out of the ambit of this course. What loses controversy also loses interest for us. But the vantage point granted by legal disruption offers a mix of horizon-scanning for the next generation of challenges, and a measure of foresight into future issues for which we will be able to prepare law and policy responses. As such, the perspective in this course celebrates the unknown and the incomplete, as a way of formulating more robust and resilient regulatory models as a response to these brilliant technologies.
Finally, the legal disruption approach allows us to deploy AI and robotic technologies as a mirror to the legal and regulatory system. It offers a rare chance to step outside of contemporary legal processes, principles and presumptions and to test their continuing efficacy, validity and tenability. As such, the promise of investigations at the intersection of AI and the law with such a conceptual framework, is that it might ‘illuminate the whole law’. From a separate vantage point, it is also possible to see any flaws or inconsistencies more clearly. It also provides a rare opportunity to improve the law, by updating its doctrine to reflect the contemporary scientific paradigm.
This course is a continuously improved-upon version of previous editions of the ‘AI and Legal Disruption’ course, and a significantly updated version of the ‘RoboLaw’ courses convened over the past several years. This course is heavily research-based, taking its point of departure in cutting-edge work that is taking place here at the Law Faculty through the AI-LeD Research Group, and at other leading institutions. It is also a research-integrated course: the unique approach to a wide and relatively open academic field means that there is much scope for students to engage in work that will push forward thinking in these areas. Previous final written assignments produced for this course have gone on to be published, and several former students are now undertaking their PhDs and beyond. There are ample opportunities for those of you who might be interested in pursuing research in the future to get a taste of what it might be like to write your Master’s Thesis or PhD Dissertation in this area.
This course is likely to be unique and unlike any other course that you have taken, or will take, during your time at university. This is because AI-LeD has a problem-finding orientation (in contradistinction to the more orthodox problem-solving approach). The difference is subtle, yet fundamental: in problem-finding, it is the question that is the central concern; for problem-solving, the main objective is producing answers. In other words, the overarching objective of this course is to open up new research trajectories, and to probe the ‘unknown unknowns’ (what we do not know that we do not know). As a result, this course is run like a true research workshop, operating at the very edge of contemporary research and thinking. It invites students into the process of developing thinking approaches to these challenging contemporary issues as we collectively explore the potential problem and opportunity space for law, regulation, and governance that is opened up by AI and its applications. This is usually very difficult at the start, but it has always gotten easier as the term progresses. What you need to bear in mind for this course is that there are no answers, no solutions: at best we can hope for re-solutions that we will need to continually and iteratively revisit.
The seminars in this course will be divided into four broad sections, after the initial introductory sessions.
The aim of section A is to ask the question: given how law has dealt with new technology in the past, does the rise of artificial intelligence and its applications actually create the need for a new, separate field of regulation? We will critically consider the competing views offered, on both the extent of the challenge posed by AI, and the nature of the necessary and appropriate regulatory responses. What can we learn from history, and why? Is there actually anything new under the sun, and why? This section thus anchors the course in the past trajectories of technology regulation, and in debates that have taken place to date about how best to regulate AI. Thus, this first section explores whether AI and its applications actually introduce new challenges to the law, and illustrates this by exploring the existing debates surrounding the use of autonomous weapons systems in armed conflict.
Having discussed the relevance of reconsidering regulation in the context of AI, we set the stage for section B, which aims to provide a toolbox of sorts for the course. It does so by exploring a range of different approaches to legal disruption, and the prospect of artificial intelligence triggering it. Why are new technologies, including AI, the focus of regulatory attention? Why should we treat technology as a target for regulation? Are there other approaches which are more attuned to the challenges and opportunities presented in an AI-infused world?
This section starts by pivoting away from treating technology as a regulatory target, towards regulating for technologically-driven changes in the sociotechnical landscape. Yet this seems to swing the pendulum too far in the other direction, because taken to its logical conclusion, it suggests that technology and technological artefacts play no role and have no impact in the regulatory sphere. To introduce nuance into these discussions we dip briefly into Science and Technology Studies (STS), with its focus on the intersection between new technologies and societal change.
Yet, why focus on the legal disruption model? We home in on this question by reading the conceptual framing paper, and round it out by analysing a proposal for four levels of AI governance (both of these initiatives are homegrown AI-LeD projects). This section then continues with a different set of approaches, those revolving around ignorance, agnotology and uncertainty, proposing the course’s core orientation of problem-finding approaches in the process. Why do we not know what problems we will be confronted with as a technology is adopted and integrated into society? As this section is somewhat abstract and conceptual, the final two sessions seek to ground what we have covered within the debates on military AI, and autonomous weapons systems in particular, to work out the lessons that we might draw from these various approaches. We should conclude this section with new abilities to depart from the well-worn regulatory debates concerning how to regulate or govern AI, and to launch forays into new ways of framing questions or problems that can then be researched further.
Section C comprises explorations and examinations of artificial intelligence and legal disruption. Again, the aim is to take a range of different approaches from Section B, and to work these through different areas of activity or application to unearth surprising problems or unintuitive conclusions. The unifying threads running through this Section are thus an active problem-finding orientation and attempts to identify alternative, valid framings of the potential impact of AI applications on the sociotechnical landscape. There is clearly a vast potential area to be covered under Section C, so we draw on clusters of related perspectives and problems as exemplars.
It may be worth noting that many of the best final written assessments in past editions of this course essentially identified and carved off areas of Section C and then subjected these areas to intense critical scrutiny with the different approaches that we introduced in Section B.
Section D is geared towards student-driven content and peer-to-peer teaching and learning, and the class is expected to be clustered into groups which will design and lead a particular session. This means that you will self-organise into groups according to common interest or approach, and as a group will be responsible for conducting a full seminar on your topic. This includes finding suitable readings, disseminating these to the rest of the class through Absalon, and actually conducting the class. It should be noted that, since this is designed along the peer-to-peer learning and teaching framework, attention must be paid to mutual respect: continued engagement and participation is therefore mandatory and may affect your final grade.
This course seeks to jolt students out of lazy thinking habits, and does this through de-familiarisation and participatory engagement. This overturns the temptation, when it comes to legal problems, to automatically reach for standard legal solutions and responses: find the relevant area of the law, see what the rules stipulate, and apply those rules to the given facts to solve the problem. In Roger Brownsword’s terminology, this is the ‘coherentist’ legal mindset.
Because the approach of legal disruption maintains our focus on the edges of uncertainty, this type of approach cannot work, almost by definition. While most legal education is aimed at resolving problems, closing gaps and reducing or eliminating uncertainty, the approach taken in this course is the direct opposite. Here, we seek to find the challenge, problem, or anomaly, and take that as a point of departure for a ‘deep dive’ that explores what the identified issue can teach us about the theory and practice of law as a whole. Thus, we exploit legal anomalies rendered perceptible by AI and robotics as a portal to exploring the structure and system of the law itself.
The relevant literature for the course is constantly updated. Students will be provided with a detailed reading list in the syllabus on Absalon.
There is no assigned textbook for the class, and all materials should be available via the library or online.
Willingness to challenge received knowledge, and to consider a range of possible approaches.
The aim, therefore, is to have advanced our collective thinking about a particular issue through the process of participating in the class. Attendance is therefore necessary, but insufficient, for good grades, simply because of the additional requirement of active engagement in the discussions. This does not mean that you must feel compelled to speak out during the sessions, but it does mean that you need to critically reflect upon the discussion in a way that transforms your thinking about a given area. This means that classes are more about knowledge generation and thinking development, which is why traditional modes of teaching are not able to achieve the stated aims of the course.
Note: the course title changed from 'RoboLaw: Law, Robotics and Artificial Intelligence'.
- 15 ECTS
- Type of assessment
- Written assignment
- Assessment: Independent written assignment
The exam consists of submitting a paper which has been produced during the semester. There is a firm deadline for submitting the paper and students should follow the official instructions for submission provided by Faculty administration for the interests of anonymity, fairness and efficiency. Any questions related to the formal (non-substantive) aspects of the assessment process should be directed to the Faculty administration. Students should also note that there is no need to be physically present for the examination submission process, since this is all handled virtually. There are no restrictions as to when you can begin to write the assignment during the semester, and students are encouraged to get an early start to be able to go through several iterations and thus have a more advanced and nuanced analytical discussion.
Given the problem-finding purpose of this course the final written assignment gives you creative latitude to explore, in a sustained and robust fashion, a relevant area of your choice that has direct relevance to the substantive topic areas covered by this course. You may also choose to adopt the problem-finding, legal disruption, approach to other new and emerging technologies to explore their potential impact upon the law, regulation, and governance, but you are encouraged to discuss the gist of such projects with me first.
Students are thus encouraged (to the point of expectation) to formulate their own thesis statement (akin to a hypothesis) or research question in an area significantly intersecting with the content and approach of the course. As this is a course in the Law Faculty, the focus should be upon legal, regulatory or governance issues (with an emphasis upon innovation, interruption and disruption of the relevant principles and processes): interdisciplinary approaches are welcomed provided they are anchored in legally-relevant questions or consequences.
I should emphasise here that the aim is for you to tackle issues that raise (at least the spectre of) legal disruption arising from AI and its applications. This really means asking why the legal system, as it stands now, is unlikely to be able to cope with the challenges that AI and robotics bring. Alternatively, this can also be framed as asking why the law is structurally or systemically unbalanced by the prospect of AI and robotics being integrated into society at large or into a particular sector or activity.
You are expected to conduct the bulk of this research independently, and to write it up in a systematic and comprehensive manner. It would be best to formulate a position or an argument in relation to your topic, and to critically evaluate and defend your position in relation to other authors and commentators. In other words, a thesis statement or research question should be the lodestone of your project, which you can refine as your research progresses. It may also be useful to adopt the ‘through-line’ model for writing – to have a consistent thread that unites your argument from introduction and problem statement all the way through until your conclusion and proposal for future work. Remember that relevance is key (rather than mere interest): constantly think of what it is that you are trying to communicate, why you are writing what you are writing, and how it relates to your thesis statement or research question. It often helps to think about law and policy propositions which might go some way to addressing the problem you have analysed, and the very best papers have involved a self-reflective critique of the proposals made at the end (thus leading towards the next iteration of research questions… and possibly setting you up for a Master’s Thesis).
A good way to push you towards a strong analytical paper is to ask ‘why’ questions (rather than ‘how?’ and ‘what?’ type questions). Why makes you delve deeper, opens up the field and encourages exploration. How and what questions tend to lead you towards answers which are superficial and factual, and which close up the question since the aim is to reach some sort of equilibrium. It should be obvious that analytically sophisticated papers seek to answer iterations of follow-up questions rather than arrive at definitive answers quickly.
- Marking scale
- 7-point grading scale
- Censorship form
- No external censorship
- Exam period
Autumn: submission date: December 16, 2020
Spring: submission date: June 2, 2021
Autumn: submission date: February 5, 2021
Spring: submission date: August 18, 2021