The Case for Outsourcing Morality to AI

It all started with an obscure article in an obscure journal, published just as the last AI winter was beginning to thaw. In 2004, Andreas Matthias wrote an article with the enigmatic title, “The responsibility gap: Ascribing responsibility for the actions of learning automata.” In it, he highlighted a new problem with modern AI systems based on machine learning principles. 

Once, it made sense to hold the manufacturer or operator of a machine responsible if the machine caused harm, but with the advent of machines that could learn from their interactions with the world, this practice made less sense. Learning automata (to use Matthias’ terminology) could do things that were neither predictable nor reasonably foreseeable by their human overseers. What’s more, they could do these things without direct human supervision or control. It would no longer be morally fair or legally just to hold humans responsible for the actions of machines. Matthias argued that this left humanity in a dilemma: Prevent the development of learning automata or embrace the responsibility “gaps” that resulted from their deployment.

Fast forward to 2023 and Matthias’ dilemma is no longer of mere academic concern. It is a real, practical issue. AI systems have been, at least causally, responsible for numerous harms, including discrimination in AI-based sentencing and hiring, and fatal crashes in self-driving vehicles. The academic and policy literature on “responsibility gaps” has unsurprisingly ballooned. Matthias’ article has been cited over 650 times (an exceptionally high figure for a philosophy paper), and lawyers and policymakers have been hard at work trying to clarify and close the gap that Matthias identified. 

What is interesting about the responsibility gap debate, however, is the assumption most of its participants share: that human responsibility is a good thing. It is a good thing that people take responsibility for their actions and that they are held responsible when something goes wrong. Contrariwise, it would be a bad thing if AI systems wreaked havoc in the world without anyone taking responsibility or being held responsible for that havoc. We must, therefore, find some way to plug or dissolve responsibility gaps, either by stretching existing legal/moral standards for responsibility, or introducing stricter standards of responsibility for the deployment of AI systems.

But perhaps responsibility is not always a good thing. Perhaps, to follow Matthias’ original suggestion, some responsibility gaps ought to be embraced.

It is worth bearing in mind two features of our world. First, our responsibility practices (as in, our norms and habits of blaming, shaming, and punishing one another) have their dark side. Second, our everyday lives are replete with “tragic choices,” or situations in which we have to choose between two morally equal, or nearly equally weighted, actions. Both features have implications for the responsibility gap debate.

On the dark side of responsibility, an entire school of thought has emerged that is critical of our responsibility practices, particularly as they pertain to criminal justice. Gregg Caruso, a philosophy professor at the State University of New York, is one of the leading lights in this school of thought. In conversation with me, he argued that if you “look closely … you will find that there are lifetimes of trauma, poverty, and social disadvantage that fill the prison system.” Unfortunately, our current responsibility practices, premised on the ideal of free will and retributive justice, do nothing to seriously address this trauma. As Caruso put it, this system “sees criminal behavior as primarily a matter of individual responsibility and ends the investigation at precisely the point it should begin.” If we abandoned our system of retributive justice, we could “adopt more humane and effective practices and policies.” Caruso also pointed out that the emotions associated with responsibility (what philosophers call “reactive attitudes,” such as resentment, anger, indignation, and blame) are “often counterproductive and corrosive to our interpersonal relationships” because they “give rise to defensive or offensive reactions rather than reform and reconciliation.”