They make decisions automatically, before the person affected has any chance to appeal to a human being who can act in a flexible and accommodating way.
That is why I think the ability to appeal to a person is at the core of accountability.
Anti-discrimination law is also key. Biased data can disfavor and discriminate against many groups. Predictive technologies also paint a frightening future if they can influence people’s livelihoods and opportunities in a black-box fashion. I think this leads to my final point about accountability. We have to build into these algorithms all sorts of protections that realize human rights and recognize key human interests. We need to avoid systems that subject human beings to powerlessness and meaninglessness, two important foundations of alienation. I think that addressing this form of alienation may sometimes lead us to limit the use of black-box AI in several contexts.
For less advanced algorithms, let’s at least make sure they are accurate before they are deployed, and that they do not discriminate against certain groups. For example, a facial recognition system might be unable to recognize the faces of members of certain racial groups. As I argue in New Laws of Robotics, that should disqualify its use in many contexts.
One of the key differences between the ongoing Digital Revolution and previous industrial revolutions is that this one poses a more philosophical dilemma between improving and replacing human capabilities. Your recently published book, New Laws of Robotics: Defending Human Expertise in the Age of AI, explores this dilemma. Where, in your view, can the line be drawn between “improvement” and “replacement” in algorithmic decision making in business, law, health care, and other areas of social life? And what, in your view, should be the role of regulators?
Let me give an example from the first few chapters of the book. In medicine, there is an interesting set of partnerships developing between nurses and robots. One of these involves the Robear, a robot designed to help nurses lift patients, especially heavier patients, out of bed. This is a really important innovation, because many nurses develop orthopaedic problems from lifting patients who are extremely heavy, or who are very vulnerable and need to be lifted very carefully. The robot is designed to enable the transfer of a patient from, say, one bed to another, or from a bed to a chair, without demanding excessive physical exertion from nurses. So this, I think, is a very good example of the first new law of robotics in my book: AI and robotic systems should complement professionals, not substitute for them. The task is relatively narrow and well defined, and the nurse is always present and brings in the robot to assist. I think we are going to see more and more examples of robots taking on such routinized tasks; for example, bringing drugs around hospitals.
Now, there are clearly challenges with this type of AI: is it too costly? Does it get in the way? Can we include more and more tasks in a robot like Robear? These are going to be really interesting questions for the future, and they deserve regulatory attention. Some AI developers want to build far more extensive systems that are not just doing manual tasks but are also taking on roles like care, trying to offer something like empathy, or at least to look empathetic. So you can imagine a robot that tries to look sad when a patient is in pain, or to look happy when a patient, say, takes a few steps beyond what they normally would.
That is the situation where I think the robot has gone beyond complementing a professional to substituting for them and, more importantly from my book’s perspective, counterfeiting humanity: the robot is faking feeling. What I mean by that is that the robot is mimicking human emotions, even though robots cannot actually feel them. That is a disservice to patients and to nurses, who as professionals are trained to connect expressively with patients and to empathize with them. People can do that authentically because they have experienced pain and disappointment in their own lives, as well as joy and a sense of accomplishment. A robot cannot. So there may be a case for regulating such emotional AI. And many other examples in the realm of health and education come to mind.