Ugo Pagallo is Professor of Jurisprudence at the Department of Law, University of Turin, Italy. His main fields of interest are artificial intelligence and law, network theory, robotics, and information technology law. In his 30 years of research, he has published books and numerous papers and has participated in European Union projects. In a three-part interview series, we talk with him about what has been accomplished and what lies ahead. In part 2, he speaks about new levels of regulation.
Interview: Jonathan Mehlfeldt
Rechtverblüffend: Professor Pagallo, approaching our main topic of the Law on New Technologies and AI now, I would like to ask you to first define “Artificial Intelligence” and “Law on New Technologies”. What exactly are those?
Ugo Pagallo: Concerning the definitions of AI, the reference book is written by Stuart Russell and Peter Norvig: “Artificial Intelligence: A Modern Approach”. They present a quadrant built on two notions: thinking and acting. Moreover, they differentiate between thinking and acting humanly and thinking and acting rationally. You can map all possible definitions of AI within this framework. If you still have problems with definitions of AI, just check the website of IJCAI, the International Joint Conference on Artificial Intelligence, and you have the state of the art there [laughs].
Rechtverblüffend: How would you explain this in non-technical terms?
Pagallo: I would say that we are dealing with an intelligent machine without any claim to surpass the Turing test.
[The Turing Test is a procedure, named after the British mathematician Alan Turing, that is designed to distinguish an AI from a human being. An AI able to pass the test would present intelligent behaviour equivalent to a human being.]
Whatever that machine says or does will be surprising to you because you would usually expect that behaviour from a human being, not from a machine. On this basis, the definition of the Law on New Technologies risks bringing us back to the old debate in the late 1990s on the “Law of the Horse”.
[The “Law of the Horse” is the subject of an argument brought forth by Frank H. Easterbrook against identifying cyberlaw as a separate area of law: “The best way to learn the law applicable to specialized endeavours is to study general rules. Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on 'The Law of the Horse' is doomed to be shallow and to miss unifying principles.” (Easterbrook, Frank H. (1996): “Cyberspace and the Law of the Horse”, University of Chicago Legal Forum, p. 207) In 1999, Lawrence Lessig then stressed that although general principles should be of central importance, the specific connection between law and cyberspace could have significant implications for law as a regulatory tool.]
Rechtverblüffend: Didn’t you write and publish another “Law of the Horse” with your “The Laws of Robots”?
Pagallo: You could say that. And, of course, I also present myself as an expert on information technology law. But this stresses the fact that, first of all, the target or subject matter of legal regulation is not simply technology in general. Technology can also be the means of legal regulation, such as meta-technology [technological infrastructures that allow institutions to regulate services and their content] and techno-regulation [implementation of values, norms and principles directly in technological devices].
Rechtverblüffend: On the other hand, the legal regulation of emerging technologies raises specific problems of its own.
Pagallo: And that’s the reason why I am convinced that it’s a good idea to organise dedicated classes on Law and Technology and to treat the legal regulation of technology as a field of its own, without running any risk of teaching the “Law of the Horse”. Among the many possible instances of this argument, consider the fact that simple top-down instructions from lawmakers often fall short in dealing with the challenges of technologies. And that’s the reason why the European Commission and the EU institutions are adopting new forms of regulation precisely to take the specificities of technologies into account.
Rechtverblüffend: What is the risk here?
Pagallo: The risk is that, once you have set up your regulation, it may turn out to be unusable overnight. You might recall the E-Money Directive: The EU institutions initially considered e-money a traditional form of payment and then, suddenly, Elon Musk and others came up with PayPal. They had to amend the act. You need to match legal certainty – the rule of law – on the one hand with the flexibility of the law on the other in order to deal with the progress of technology.
Rechtverblüffend: Does this lead to any new forms of regulation?
Pagallo: Yes, and Article 5 GDPR is a good illustration of such a co-regulatory approach. But there are further variants of co-regulatory approaches, for example the latest draft of the European Commission on the Data Governance Act. I would also stress the role of soft law [a set of legal instruments with semi-normative value, lacking binding effect. Examples are technical standards, codes of conduct or guidelines.] in these cases. You cannot establish a technological standard by simple decree. And when it comes to standards, there is not just one problem.
Rechtverblüffend: What are the problems there?
Pagallo: There are three problems at once: First of all, the speed of technological progress is such that technological standards have to be set up as soon as possible. Think about the Internet of Things, for example. Or think about interoperability standards for drones or for autonomous vehicles. At the same time, we have the problem of fluctuating social standards. This is a typical problem in the US regarding the constitutional doctrine of the “reasonable expectation of privacy” [Pursuant to the Fourth Amendment to the US Constitution, an individual may have a reasonable expectation of privacy regarding a situation, location, data etc. if her expectation of privacy conforms with society’s expectation of privacy]: What can you reasonably expect when technology triggers profound changes in a population?
Rechtverblüffend: And the third problem?
Pagallo: Finally, you have the problem of legal standards: Burdens of proof and the like. So, yes, legal regulation of technology is a world of its own because of the specificities of the new technologies. And remember that, at the beginning of this interview, I stressed that all of this didn’t exist twenty years ago! We had to create what we are discussing now.
"We need different models for different situations."
Rechtverblüffend: You have already mentioned co-regulation, soft law and technological approaches. Is there one specific model of regulation that is most promising or would you say that we need different models of regulation for different situations?
Pagallo: Yes, of course we need different models for different situations. Another very interesting aspect of our problem is the context-dependency of the law: when putting together a legal overview of the issues we are dealing with, we have to take the specificity of every legal field into account.
Rechtverblüffend: Could you give us an example?
Pagallo: For starters, consider the difference between criminal law and civil law. In criminal law, we have Article 7 of the European Convention on Human Rights – the principle of legality and a pillar of the rule of law: no crime, no punishment without a specific criminal norm.
We already had a new generation of computer crimes in the early 1990s. Sooner or later, we are also going to have a new generation of – let’s call them – robotic crimes. In these cases, the legal regulation of technology leaves only one option, which is a top-down regulatory approach: setting up new criminal norms. There is no question about that.
On the other hand, you have many other civil law cases in which you need co-regulatory approaches, which – mind you! – are co-regulatory, not self-regulatory. They do not include, for example, the ethical charter that Microsoft gave itself. I’m rather talking about lawmakers setting up or establishing certain principles and rules, such as Article 5(1) GDPR, and then leaving it up to the stakeholders to decide how to organise themselves in order to be compliant. This is the GDPR in 30 seconds, more or less. In other cases, you have softer co-regulatory options, for example when dealing with the development of technological standards, as I suggested before.
Rechtverblüffend: You are also currently working on health law. Is there a connection to that field as well?
Pagallo: Yes, this is another example. When talking about the safety of a vaccine, we tend to follow a strict top-down regulatory approach. However, when dealing with the plethora of further legal issues in health law, for example the underuse of technology in hospitals – which is a tragedy if you think about it – there is no way to solve problems of trust by decree. We have to rely – and this is the magical formula for co-regulatory approaches – on the power of coordination mechanisms. Therefore, when dealing with the underuse of technology in hospitals, what we need is either to set up or to further develop such coordination mechanisms. And apparently, this is a good argument: The World Health Organization backed this point, but, as you know, they have had other problems on their hands since the COVID-19 outbreak began.