The Society of Algorithms - conversation with Frank Pasquale

16 May 2021, 22:33

I think that the connection between antitrust law and constitutional regime is important; however, it is rarely made. To the extent that these platforms are dominant and thus have complete control over some aspects of digital social life like Twitter for microblogging or Google for search or Facebook for social networking, they start to approximate to a state -- Professor Frank Pasquale pointed out in a conversation with Lénárd Sándor, researcher at the National University of Public Service.

Dr. Sándor Lénárd
Frank PASQUALE is a Professor of Law at Brooklyn Law School and an internationally recognized expert on the law of artificial intelligence (AI), algorithms, and machine learning. Pasquale’s book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015), has been recognized internationally as a landmark study on information asymmetries. His latest book, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020) analyzes the law and policy influencing the adoption of AI in varied professional fields. Pasquale has advised business and government leaders in the healthcare, Internet, and finance sectors. Pasquale’s work on “algorithmic accountability” has helped bring the insights and demands of social justice movements to AI law and policy. In media and communication studies, he has developed a comprehensive legal analysis of barriers to, and opportunities for, regulation of Internet platforms.

 

Karel Čapek, the famous Czech playwright, invented the word “robot” a hundred years ago while he was working on his famous science fiction play entitled “R.U.R.”. During the last century, this science fiction has gradually become an everyday reality, and the Digital Revolution increasingly permeates every walk of life in the 21st century. In your book The Black Box Society: The Secret Algorithms That Control Money and Information, you explored some of the effects of algorithms on our lives. How, in your view, does technology transform our lives and our societies?

Digitalization affects our daily life in so many ways. I think there are a few key ways in which this power shift has occurred and affects us daily. One of them is a trend toward centralization. The irony is that the original boosters of the Internet used to focus on its supposedly enormous decentralizing role. So the default position was to assume disintermediation. But now we see constant re-intermediation. Massive U.S. and Chinese companies have begun to dominate the public sphere and many of the digitalized spheres of commerce.

I think one of the biggest decisions that policymakers in the U.S., the EU, and China must make in the future is whether to fight the centripetal force of centralization and consolidation by breaking these massive firms up, or to regulate them.

The first path entails separating the platforms from commerce, as FTC nominee Lina Khan proposes. In the second, as my colleague Sabeel Rahman has proposed, they could be treated as monopolies, and the government could impose certain public interest obligations and taxes on them.

Due to the growing influence of digital platforms in communications, societies around the world gravitate towards a more customized information age. However, as recent events increasingly show, this development tends to limit our deliberative capabilities. How do you see this dilemma?

The automated public sphere profoundly impairs collective will formation. I think that this impairment happens on a few different levels. One is the extreme personalization, which can lead to a fragmentation of messaging and also a fragmentation of understanding.

This goes beyond the “filter bubble” type of argument, about which I am not terribly concerned.

People can build authentic political community in groups of varied sizes. But it’s much harder to do so when political manipulation is micro-targeted. So two people in the same household might see completely different ads or two people on the same street have utterly different information about the same political figures—some of which is just pure lies or propaganda.

The problems occur on multiple levels, too: internet intermediaries, the media itself, and digital parties that manipulate public understanding and distract from real problem solving. However, I am also cautious not to romanticize the past. I do think there were serious problems in democratic will formation, which I wrote about way back in the 1990s (in my undergraduate and master's theses at Harvard and Oxford), with respect to newspapers and television stations in the U.S. forgoing many of their public interest obligations. It is hard to compare, because in some ways these two public spheres are incommensurable--but in the end I think that we are in a position of greater peril now, given the fragmentation of public understanding.

The large tech companies’ liability is a Catch-22 dilemma. If one tries to hold them responsible as publishers, they will say they are platforms. However, if one demands access to their platforms, they will insist that they are publishers. What role, in your view, should regulators play with regard to these companies?

One of the fundamental errors made in this area of law, in terms of regulating harmful content, hate speech, or fake news, is to think that platforms must be categorized either as publishers or as intermediaries. This distinction made sense from the 1930s until the early 2000s. However, when you have platforms that are specifically prioritizing content, both for their own profit model and for other reasons, that leads me to think that

they are something between a publisher and an intermediary.

That should lead to a new set of obligations and responsibilities that balance free expression values against the enormous risks posed by disinformation about public health, elections, and minority groups.

That is one reason for the emerging popularity of legal regimes in which platforms promoting particularly dangerous content must take it down quickly—anywhere between 2 and 24 hours—and face liability if they fail. Such a legal regime would not be perfect, but it can significantly slow the spread of the worst content. To put it plainly, media regulation should be creative in dealing with these new kinds of challenges.

What are the major advantages and shortcomings of the European versus the American approach to regulation, which may originate in their differing constitutional views of free speech?

The US approach promotes the ability of almost anyone to say almost anything. That avoids bitter debates over the bounds of political discourse. However, if one compares, at the margin, what is permitted in the US but forbidden in the EU, that margin is not very large—and much of what the EU (or member states) are confronting there is deeply troubling. There are certain elements of public discourse that are particularly damaging, and the EU is seriously seeking to stop them. As a result, it does have to make many more difficult judgment calls. The key problem with the US approach is that it could more easily lead to the rise of demagogues who unravel free expression protections.

One other thing I should praise Europe for is the general interest in cultural funding.

That positive support for the press, and for culture in general, is also vital to assessing the health of a public sphere.

The famous American Justice Louis Brandeis warned against the “curse of bigness,” pointing out that dominant trusts are not only economically inefficient but that their concentrated power also poses a menace to rights and to the political system itself. How do you see the role of antitrust regulation in remedying the adverse impacts of Big Tech on constitutional values?

I think that the connection between antitrust law and constitutional regime is important; however, it is rarely made.

To the extent that these platforms are dominant and thus have complete control over some aspects of digital social life like Twitter for microblogging or Google for search or Facebook for social networking, they start to approximate to a state.

I pointed out this phenomenon in an article on “functional sovereignty” that I wrote for the Friedrich Ebert Stiftung. These firms operate within societies and monopolize control over certain functions in the public sphere. I think that what antitrust law ideally would do is ensure an exit option from each of those entities, so that persons can freely depart if the terms of a platform become too oppressive.

Another basic step would be to break up Facebook, WhatsApp, and Instagram, to stop those entities’ 360-degree surveillance of so many users’ lives.

Another governmental dimension involves the deep tension between extreme speech and the preservation of democracy. In the U.S., when Twitter banned Donald Trump, the initial response from a lot of centrist, liberal, and center-right people was to say, well, this is a private company, so it can do whatever it wants. That makes the analysis too easy. In reality,

the company has actually taken up a public role, and therefore its decision is tantamount to a state decision.

But that does not mean it was the wrong state-like judgment call to make, particularly after Trump poisoned the well of American democracy by convincing a majority of Republicans, with an endless stream of baseless lies, that the election was stolen.

Algorithmic decision making influences a wide variety of fundamental rights other than free speech. Depending on their use, algorithms can have quite serious impacts on privacy, fair trial, or equal rights under the law. How can societies and regulators advance “algorithmic accountability” and introduce ethical considerations into the use of AI?

As soon as The Black Box Society came out, there was pushback from scholars who said, “what you are really concerned about is the sociotechnical system, not just the algorithm.” That point is valid, and I think I latently recognized it in the book. It should be foregrounded, though, because what this criticism helps illuminate is that algorithms often come into play when certain individuals or corporations want to diagnose at a distance, assess at a distance, apply force at a distance. Rather than directly deciding individual cases and talking to the persons involved, decision-makers create a formula that reduces persons to a set of data points—and often a small, partial, and inaccurate set of data at that.

This is what many tech companies are doing all the time to those who use their platforms.

They make decisions automatically, before the person affected has any chance to appeal to a human being with the ability to act in a flexible and accommodating way.

That is why I think the ability to appeal to a person is at the core of accountability.

Anti-discrimination law is also key. Biased data can disfavor and discriminate against many groups. Predictive technologies also paint a really frightening future if they can influence persons’ livelihoods and opportunities in a black box fashion. I think this leads to my final point about accountability. We have to build into these algorithms all sorts of protections that are going to realize human rights and recognize key human interests. We need to avoid systems in which human beings are subject to powerlessness and meaninglessness, two important foundations of alienation. I think that addressing this form of alienation may sometimes lead us to limit the use of black box AI in several contexts.

For less advanced algorithms, let’s at least make sure they are accurate before they are deployed, and that they are not discriminating against certain groups. For example, a facial recognition system might be unable to recognize the faces of certain racial groups. As I argue in New Laws of Robotics, that should disqualify its use in many contexts.
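The pre-deployment test Pasquale describes here (check that a system is accurate, and that its accuracy does not differ sharply across groups) can be made concrete. Below is a minimal sketch of such an audit in Python; the function names, the 5% gap threshold, and the toy evaluation data are all hypothetical illustrations, not drawn from any real system or from Pasquale's work.

```python
# Minimal sketch of a pre-deployment fairness audit: compare a model's
# accuracy across demographic groups before it is allowed into production.
# All names, the gap threshold, and the data below are hypothetical.
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Return the model's accuracy separately for each group."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def passes_audit(predictions, labels, groups, max_gap=0.05):
    """Pass only if no two groups' accuracies differ by more than
    max_gap (an assumed policy threshold, here 5 percentage points)."""
    acc = per_group_accuracy(predictions, labels, groups)
    return max(acc.values()) - min(acc.values()) <= max_gap, acc

# Toy evaluation of a hypothetical face-matching model on two groups:
ok, acc = passes_audit(
    predictions=[1, 1, 0, 1, 0, 0, 1, 0],
    labels=     [1, 1, 0, 0, 0, 1, 1, 1],
    groups=     ["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(acc)  # {'A': 0.75, 'B': 0.5}
print("deploy" if ok else "do not deploy: accuracy gap too large")
```

On these toy numbers the audit fails, mirroring the facial recognition example above: a large accuracy gap between groups would disqualify deployment in many contexts.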

One of the key differences between the ongoing Digital Revolution and previous industrial revolutions is that this one poses a more philosophical dilemma between improving and replacing human capabilities. Your recently published book, New Laws of Robotics: Defending Human Expertise in the Age of AI, explores this dilemma. Where, in your view, can the line be drawn between “improvement” and “replacement” in terms of algorithmic decision making in business, law, health care, and other areas of social life? What, in your view, should be the role of regulators?

Let me give an example from the first few chapters of the book. In medicine, there is an interesting set of partnerships developing between nurses and robots. One of these is Robear, a robot designed to help nurses lift patients, especially heavier patients, out of bed. This is a really important innovation, because many nurses develop orthopaedic problems from lifting extremely heavy patients, or patients who are very vulnerable and need to be lifted very carefully. The robot is designed to enable a transfer of the patient from, say, one bed to another, or from a bed to a chair, without demanding excessive physical exertion from nurses. So this, I think, is a very good example of what I call the first new law of robotics in my book: AI and robotic systems should complement professionals, not substitute for them. The task is relatively narrow and well defined, and the nurse is always present and brings in the robot to assist. I think we are going to see more and more examples of robots in routinized tasks; for example, carrying drugs around hospitals.

Now, clearly there are challenges with this type of AI: is it too costly? Does it get in the way? Can we fold more and more tasks into a robot like Robear? These are going to be really interesting questions for the future, and they deserve regulatory attention. Some AI developers want to build very extensive AI systems that do not just perform manual tasks but also take on roles like care, trying to offer something like empathy, or to look empathetic. So you can imagine a robot that tries to look sad when a patient is in pain, or to look happy when a patient, say, takes a few steps beyond what they normally would.

That is the situation where I think the robot has gone beyond complementing a professional to substituting for them and, more importantly from my book’s perspective, counterfeiting humanity: the robot is faking feeling. What I mean by that is that the robot is mimicking human emotions, even though robots cannot actually feel them. That is a disservice to patients and to nurses, who as professionals are trained to connect expressively with patients and to empathize with them. Persons can do that authentically because they have experienced pain and disappointment in their own lives, as well as joy and a sense of accomplishment. A robot cannot. So there may be regulation of such emotional AI. And many other examples in the realm of health and education come to mind.
