La Croix: Are advances in robotics giving rise to a form of free will in robots?
Jean-Michel Besnier: Roboticists think so, on the grounds that these advances allow their creations to make binary decisions, for example on the basis of a facial recognition process. A warrior robot, for instance, would supposedly have the free will to distinguish friend from enemy. Let’s be clear: this has nothing to do with free will. If you ask me whether robots are likely to make decisions that resemble those humans make, I say no. And to claim the opposite seems to me to attribute human faculties to machines.
Alain Bensoussan: It’s all a matter of definition. As a lawyer specializing in this field, I am faced with concrete situations. And I see a form of autonomy: the robot makes a decision thanks to a set of algorithms and the databases it draws upon. But the decision itself is made by the robot, and in this decision-making, what matters greatly is the experience the robot has accumulated.
Take the case of self-driving cars. Two identical cars driving in Cairo, London or Paris will not exhibit the same behavior or make the same decisions when faced with the same situations. These cars forge an experience that is the logical continuation of their artificial life. For my part, I do not turn to the concept of free will, because it does not interest me. Nor will I venture into the field of consciousness. I am simply observing that these robots have decision-making autonomy.
Doesn’t that lead you to say that robots bear a form of responsibility?
J.-M. B.: What do we mean by “responsibility”? For the human individuals we are, the matter is more or less clear: it is the possibility of being the author of what we do, of being able to say “I” and therefore of having a subjectivity. Responsibility involves having the power to take ownership of what one does. Subjectivity and intentionality are thus the two minimal ingredients. And these two ingredients, I do not find in robots.
A. B.: If we stick to the law, the schema of human responsibility rests on two concepts: intention, and the consequence of acts, which is called causality. One can say that there is neither intention nor causality in robots.
J.-M. B.: So you agree that the robot does not intend to do what it does?
A. B.: I did not say that. What I said is that I do not need this concept. My position is that the robot is always responsible. That is why I do not need to probe its soul in search of an intention.
J.-M. B.: There, you are not doing philosophy, but sophistry.
A. B.: No. I apply the principle of the Badinter law of 1985, which establishes that in the event of an accident, the driver of a car is always liable towards a pedestrian, whatever the circumstances of the accident. Likewise, I consider that the robot is always liable towards a human: when an autonomous car is involved in an accident, it is at fault. I favor a mechanism under which humans are not held responsible. It is a no-fault liability regime, and therefore one without morality. This leads us to transfer moral responsibility to insurance. The robot has no intention, but its action has consequences, and the resulting damage must be compensated through a mechanism of insurance pooling. We must therefore be able to summon these robots before a specialized court.
J.-M. B.: You are verging on playing with words. It reminds me of the animal trials of the Middle Ages. The situation you describe is equivalent. At the time, an episcopal judge would summon before his court the weevils against which the peasants had brought suit. These were litigants who only very exceptionally attended the hearing. And for good reason! But those trials rested on the idea that one had to decide between the legitimate interests of the peasant and those, just as legitimate, of the weevil. These animal trials did not necessarily rule in favor of the peasants. All this was possible because, according to a form of natural law, God then held all creatures in his hand. But since then, in modern times, we have discovered subjectivity and the power of intention.
By contemplating trials of robots, are we not returning to those trials of the Middle Ages? Are we prepared to say that the idea of a God standing behind all of this would allow us to bring together two creatures of different natures? Can we say that robots can be better than humans?
A. B.: To me, the robot is neither an “object-plus” nor a “human-minus.” Nor is it an animal: not only is it not endowed with sensitivity, but it understands what is said to it. Unlike an animal, if I summon a robot to court, it will appear.
As a lawyer, you invoke the law a great deal, but do you ground this law in values?
A. B.: Law is only the foam of values. When the law runs contrary to values, it collapses. What we now call robot ethics, which in my view will be the model for building a law of robots, is formalized in 24 charters published around the world. Many of them, despite differences between countries, contain similar rules. This means that values are being driven by technology.
If all humans are persons, not all persons are human. In law, for example, there are legal persons. For my part, I invented the concept of the robot person, to which I attach a certain number of rights. For example, when humans depend on a robotic decision, should humans always retain control as a last resort? The question is legitimate, and the answer is not completely obvious to me. In addition, it seems fundamental to me that a robot cannot arbitrate over human life. Third rule: a robot must be respected and respectable. It must, from my point of view, act with dignity. I also believe that robots must have a nationality: a French robot will not have the same values as a robot of another nationality.
J.-M. B.: Earlier, I appreciated hearing you admit that values are, from your point of view, increasingly dictated by technological formats. This is a serious problem, because to say that is to treat technology as a fact to which values are simply attached. Yet to speak of the ethics of robots is something other than speaking of their aptitude for discriminating between black and white… Ethics is the attempt to answer the question: “How to live well, alone or with others?”
So I appreciate your acknowledging that you stand on legal ground, which means you do not need the concepts of consciousness or intention. You only need mechanisms. You are comfortable with a conception of the law that could be carried out from end to end by automatisms. Basically, you say that robots will change the world for the better, and I say in ways that are, at best, debatable. But then do you think that we are the playthings of developments in which we cannot intervene, that we are subject to a form of fate by which robots will take power?
A. B.: I note that robots are permeating every society. All countries, all cultures are affected, because robots have the capacity to make fewer mistakes than humans, in the medical field and elsewhere. We therefore cannot leave them without rights. And there is urgency.
On the other hand, I would not say that we are doomed merely to watch the world change. For my part, I want to build a world with robots. After 2030, I believe humans will be augmented. That is why I go quite far: I think human-robot fusion will happen soon. Humans have always attempted this, for example by equipping themselves with prostheses. They will become robotized as prostheses grow smaller and more intelligent. To be honest, I do not see what would stop us from moving toward this augmented human.
Jean-Michel Besnier. Professor Emeritus of Philosophy at Sorbonne University, Jean-Michel Besnier is a speaker at the Chair of Philosophy of Information and Communication Technologies.
He headed the Scientific Council of the Graduate Institute for Science and Technology. He is notably the co-author of a book of crossed interviews with Laurent Alexandre, Do robots make love? Transhumanism in 12 questions (Dunod, 2016, 144 p.).
Alain Bensoussan. A lawyer, Alain Bensoussan has specialized in new technologies and information law since the end of the 1970s. Within his firm, he created a department specializing in robot law. He is notably the author of a “charter on the rights and duties of robots.” He spoke in April 2019 at the Plenary Assembly of the Conference of Bishops of France, as part of a reflection on artificial intelligence.