"Artificial Intelligence poses the challenge of automating the cognitive domain. Where we draw the line between what helpfully extends our capabilities and what simply replaces them is, I think, a fundamental question of philosophy, but one that poses direct challenges for deliberative democracy and the possibility of normative discourse," explains Christopher Markou, Professor of Law at Cambridge University, in a conversation with Lénárd Sándor, researcher at the American Studies Research Institute of the National University of Public Service in Budapest.
The 21st century is defined by the rising influence of “information globalization”, increasingly powered by Artificial Intelligence. What, in your view, are the major societal impacts of this phenomenon?
Well, first of all, I would disagree that it is Artificial Intelligence that is driving the most important changes in law or in society. When you talk about “information globalization”, it is actually the phenomenon that allows information to take the form it now has, which is digital. Artificial Intelligence is just the latest manifestation of what technology at its highest reaches can do. What is really the driving force is the fact that, while throughout history building physical infrastructure was always expensive and time-consuming, with digitalization everything has become much cheaper and faster. Furthermore, digitalization allows non-state actors to become centers of power in an information society.
So how do you see the role of Artificial Intelligence?
The greatest benefit of Artificial Intelligence is the ability to make sense of large amounts of information quickly, efficiently and cheaply. Even though the law is a very data-intensive domain, with its many court cases and statutes, I really think it is in medicine and similar areas concerned with quality-of-life improvements that we can see the biggest benefits of Artificial Intelligence. There are, however, very specific challenges to integrating Artificial Intelligence and algorithmic decision-making into core societal contexts such as the legal system or political decision-making. The use of Artificial Intelligence is therefore more problematic in those areas than on the commercial side.
Chief Justice John Roberts recently warned against the dangers of fake news in an era in which social media increasingly dominates the news market. How, in your view, do social media and algorithmic decision-making affect our cherished fundamental rights, such as freedom of speech and the privacy of citizens?
I was just reading earlier today that Facebook is taking steps to combat so-called “deepfakes” in its News Feed, that is, the use of deep generative networks to create fake videos that make people appear to say things they never actually said. This is a welcome step by the tech company. However, cracking down on the underlying problem of what people put online, and of how you can verify content and tell what is genuine from what is not, is a wicked problem in many respects. These technologies pose real and pernicious challenges to deliberative democracy and to the notion of being able to come together, whether online or in person, to conduct debates on questions of public interest. Even though these companies might have a noble vision of invigorating public debate, I do not think it is working. The phenomenon has become increasingly troubled.
How can governments step up against the potential erosion of these rights?
The companies themselves are making efforts to combat these phenomena by prohibiting political advertising or “deepfakes”. As far as the state level is concerned, it would require cooperation. However, since so much content is now being generated and simply dumped online, taming the Internet has become something of a Sisyphean endeavor. The Internet was born as the network and “global village” where people would connect, but it has obviously turned into something that can be used as a weapon for political purposes and disinformation. This is the biggest challenge we are facing today. The Internet and cyberspace have always been a battleground for state and non-state actors alike. How can we govern it? I am not sure there are good solutions, other than that people have to be really skeptical about what they read and where they get their information from, and probably they should seek out less information. I am very much in favor of whatever makes it possible for people to come together in civilized discourse on the matters of the day without participating in the toxic online environment that everything now seems to become.
What are the possible impacts of the so-called “LegalTech” industry, along with algorithmic decision-making, on the legal profession as well as on the law itself? What are the possible advantages and shortcomings of introducing such technology in legal services as well as in policing or in prosecutorial or judicial decision-making?
I think these are the same questions people asked throughout the 1970s and 1990s, the last time legal technology and legal expert systems pressed the legal profession to change with the times by adapting to the technology of the day. It certainly offers benefits, as it can automate a lot of the grunt work, the boring and uninteresting stuff. It allows lawyers to do their work much better, in the same way Microsoft Word did many years ago with drafting documents. I am less concerned about how law firms improve their processes. These are the same research and analytics tools that benefit a lawyer, an accountant or someone in the insurance industry.
However, I am more interested in algorithmic decision-making itself, and I am concerned about the use of algorithmic systems in the public domain. This is what Estonia is doing with its “Robot Judge” trials to decide civil cases under 7,000 euros. But it also involves insurance claims, school applications and job applications: we are seeing the encroachment of algorithmic systems that provide a sort of layer between individuals and society. They mediate what jobs and opportunities people can get, based on all the data that companies collect and aggregate.
What is your view on the “Robot Judge” experience in Estonia?
I was in Estonia last year when they launched the initiative. My question as a researcher always remains: in what contexts would you want these algorithmic systems, even assuming they work by some metric? In what contexts would you feel uncomfortable? I think most people would agree that they would feel uncomfortable with the use of a “Robot Judge” in a criminal court, a family court or a human rights court. Why do we want a human decision-maker in such contexts? In which contexts do we think, for overarching moral reasons, that algorithmic systems should never be used, or should be restricted in their use? I think these are the questions we have to start asking ourselves.
You are leading a project called the Leverhulme Research Project 2018-2021: Lex Ex Machina: From Rule of Law to Legal Singularity. Can you summarize the mission as well as the major objectives of this project? The preliminary question of the project is: what does it mean for law to be computable?
This is a question about the extent to which we can design computer systems that are able to understand not just language, but language as it is used in legal contexts and in legal argumentation and reasoning. Besides this technical question, there is the overarching political question: even if we assume the technical side is possible, why and in which contexts do we want these decisions to be made by algorithmic systems, and where should they be prohibited? To me this is a question that requires public engagement and really speaking to people where we live, especially here in “Brexitland”. I think there is a lot of cynicism now about decision-making, the role of judges, the courts and politicians. There has always been cynicism about politics in general. What is important is to focus on the role of the courts and judges and on improving human decision-making systems, as opposed to simply assuming that computers will do it for us.
Is there a future threat of algorithmic rule in terms of adjudication and even lawmaking? Where can the line be drawn between “improvement” and “replacement” when it comes to using algorithmic decision-making in the law?
That is a fascinating question. One of the most fundamental questions in philosophy and sociology is where we draw the line between ourselves and something that is adjunctive to ourselves. We might look at something like the calculator and say that it gives us the power to extend our cognition and our abilities. However, something that can replace human cognition is different. And I think this question matters not just philosophically but for very practical purposes. For example, the last, so-called industrial revolution at the beginning of the 19th century, particularly here in the United Kingdom, hit the north of this country exceptionally hard. Once manufacturing labor was replaced by machines, people ended up having their skills automated, because machines could do even intricate work, like the weaving and cloth-making done in the North of England, better, faster and cheaper. The last industrial revolution therefore rendered the physical domain of human labor largely redundant; ever since, machines have been able to calculate, move and build things better than humans. However, Artificial Intelligence, as you suggest, poses a different challenge: the automation of what I call the cognitive domain, or mental work. Where we define the line between what is helpful in extending our capabilities and what is simply replacing them is, I think, a fundamental question of philosophy, but one that poses very direct challenges for deliberative democracy and the possibility of normative discourse.
What would be your advice to policymakers?
I think there are lessons to be learned from previous waves of technological innovation. This is not the first time we have faced new technology; however, Artificial Intelligence makes it possible to transcend the human condition along with all of our wonderful capabilities. Many of the challenges we are facing, especially in the legal realm, are actually quite practical. But when you automate more and more aspects of human decision-making capabilities, which are also at the core of the rule of law, you end up with something different. You might call it the rule of technology. Therefore, understanding the line between “extension” and “replacement” is critical, and this is what I am exploring in my project.