Empathy in the Age of AI

If you think your dog loves you, you’re a fool. If you feel a kinship with a tree, you’re a hippie. And if you over-empathize with a wild animal, you must be wearing cheetah prints and a flower crown, because you are Carole Baskin. The imperative to be on guard against anthropomorphism infuses almost every aspect of modern life. Yet many people would struggle to articulate why, exactly, attributing human qualities to nonhuman entities—from gorillas to large language models—is so woefully naive. 

Anti-anthropomorphism has deep roots. In the 20th century, scientists sallied forth on a quixotic quest to see animals objectively, trying to strip away human assumptions about biology, social structure, animal behavior, and more. Eventually, this ideal hardened into a dominant ideology, says ecologist Carl Safina. At one point, anthropomorphism was branded the “worst of ethological sins” and a danger to the animal world. But a later generation of researchers, including Jane Goodall and Frans de Waal, pushed back, infusing their observations with empathy. “I don’t know people anymore who study animals and insist that anthropomorphism is out of bounds,” Safina says.

Still, performing a vigilant anti-anthropomorphism comes off as enlightened in certain circles—in conversations about animals and, increasingly, about artificial intelligence. As machines get better and better at mimicking humans, from the artistry of DALL-E to the lifelike interlocutor ChatGPT, we appear more inclined to see our ghost in every machine. Do existing technologies really “think” or “see”? Did the Amazon Echo really need a human name? According to some scholars, projecting our humanity onto AI could have real consequences, from further obscuring how these systems actually function to reinforcing a dubious notion of the human mind as the sole, or superior, model of intelligence.

But anthropomorphism is a tool like any other—used to better and worse ends in humanity’s endless effort to understand a complicated world. Figuring out when and how to apply such a tool is more urgent than ever, as a mass extinction snuffs out nonhuman intelligence and new artificial systems come online every day. How we interact with these entities, both animal and artificial, is fast becoming one of the defining challenges of this century.

At its most basic, anthropomorphism is a form of metaphorical thinking that enables us to draw comparisons between ourselves and the world around us. It can also be understood as one of countless byproducts of what neuroscientists call theory of mind—the ability to distinguish one’s mind from the minds of others, and then infer what those others are thinking or feeling.

Theory of mind underlies all kinds of human social interaction, from empathy to deception. Even so, it remains an imperfect instrument. “The easiest access we have is to ourselves,” says Heather Roff, a researcher focused on the ethics of emerging technology. “I have a theory of mind because I know me, and you are sufficiently like me.” But an n of 1 is a fragile thing, and anyone can find themselves stumped by an individual they deem “unreadable” or by the shock of a culture very different from their own.

Despite these challenges, humans appear to be driven to see others as minded (or, put another way, to perceive persons). We seem to reflexively believe that other entities have their own thoughts and emotions. At the same time, many people internalize beliefs that cut against this capacity for perceiving personhood, routinely denying the mindedness of children, women, people of color, people with mental illnesses or developmental disabilities, and nonhuman animals.
