As Machines Get Smarter, How Will We Relate to Them?

Bicycling in a hilly, busy city like San Francisco provides a cognitive as well as a physical workout. I survive in traffic by flexing not only my quadriceps but my theory of mind, the capacity to imagine the thoughts and intentions of others: Will the guy riding a Bird scooter swerve to avoid that pothole? Will the UPS driver try to run that yellow light? But self-driving cars stump me.

Last year, when General Motors stepped up testing of its Cruise autonomous vehicles, I began to encounter the sporty white hatchbacks with rooftop sensors once or more each day. At first the cars were overcautious and twitchy, earning angry honks from human drivers for unnecessary braking and hesitant turns. With time, I felt able to read and even exploit these timorous robots. If I strayed from the bike lane, they would hang back, giving me extra room to maneuver. At four-way stops, they tended to dither, allowing me to dart ahead.

Then a couple of Cruise vehicles surprised me one week with displays of more confident driving. Rather than meekly waiting behind bikes, they zipped past. My theory of robot mind was vaporized, replaced by a feeling of unease: As AI grows more capable and assertive, how will we relate to it?

Generally speaking, people adapt well to new technologies. We steer hunks of speeding metal and communicate via tiny icons with élan. But more complex and dynamic AI systems, like robot cars, will challenge us in new ways. Millennia of biological and cultural evolution have given us brains and societies primed to read the behaviors, quirks, and transgressions of other people. With thinking machines, says Iyad Rahwan, director of the Max Planck Institute for Human Development in Berlin, “we’re sort of stumbling in the dark.”

Our tendency is to assume, perhaps without realizing it, that AI systems have minds somewhat like ours. In the 1960s, MIT professor Joseph Weizenbaum created the world’s first chatbot, ELIZA, and programmed it to parody a therapist, responding to typed statements by rephrasing them as questions. To Weizenbaum’s shock, his human subjects sensed humanlike intelligence and emotion in the bot. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” he wrote.

The hazards of not thinking clearly about AI have grown since then; soon, they will become momentous. The perky, feminine-coded personas of virtual assistants like Amazon’s Alexa divert us from considering the risks of allowing large corporations to record us in our most intimate spaces. The way that drivers, cyclists, and pedestrians understand and react to robot vehicles is a matter of life or death.

Even when there’s more than a split second to mull an AI system’s decisions, its behavior may be impossible to fully explain. The machine-learning algorithms behind many recent AI milestones can’t be programmed or reverse-engineered in the same way as conventional software. Experts call these systems black boxes, because even their creators cannot fully explain how they work. You may one day have to make a life-changing medical decision based on advice from a doctor that was in turn based on advice from an AI system built on methods and resources no human or regulatory body could check. Artificial intelligence is alien intelligence, perceiving and processing the world in ways fundamentally different from the way we do.

Misjudging AI systems may lead us to misjudge people. Madeleine Clare Elish, an anthropologist at the research institute Data & Society, has studied accidents involving automation and says that moral blame for system failures often lands unfairly on the humans who didn’t create them. After an Uber self-driving car killed a jaywalking pedestrian in Arizona in 2018, police focused public attention on the safety driver, who appeared to be distracted in video from the car. Federal investigators later found that Uber had disabled the car’s emergency braking system and programmed its algorithms to look for pedestrians only at crosswalks. Uber stepped up safety features and can no longer test in Arizona, but it has been cleared of criminal liability; the safety driver may yet face charges.

People may find it even harder to clearly see the functions and failings of more sophisticated AI systems that continually adapt to their surroundings and experiences. “What does it mean to understand what a system does if it is dynamic and learning and we can’t count on our previous knowledge?” Elish asks. As we interact with more AI systems, perhaps our own remarkable capacity for learning will help us develop a theory of machine mind, letting us intuit their motivations and behavior. Or perhaps the solution lies in the machines, not in us. Engineers of future AI systems might need to spend as much time testing how well their creations play with humans as they spend adding to their electronic IQs.


This article appears in the June issue.



