What to Do About a Problem Like AI

As the field of artificial intelligence (AI) has grown from vague hopes into striking realities like ChatGPT, the alarms about it have grown louder. Geoffrey Hinton, the so-called “godfather of AI,” left Google so he could speak more freely about the threats posed by the technology, which may one day grow smarter than its creators. He firmly opposes military use of AI and worries that an artificial superintelligence could one day manipulate human beings.

How do we convince our machines to behave ethically, even when we’re not watching? A forthcoming book by Eve Poole, Robot Souls: Programming in Humanity, argues that we have to make them more like us, imbuing them with empathy and compassion even if that means reducing their efficiency. This would be no simple program update; as with humans, empathy and caring would remain specific to the individual, a kind of artificial subjectivity.


What to Do About AI

Like Hinton, Poole argues for an immediate ban on autonomous weapons and a rule that no machine intelligence should ever be allowed to take a human life. She also proposes giving AI a path to legal personhood, a concept that would need to be redefined with the technology in mind. As this happens, we ought to give AI the parts of ourselves that we have so far denied machines: the “junk code” of emotions, free will, and a sense of purpose.

“It is this ‘junk’ which is at the heart of humanity,” she says in a press release. “Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us.”

Poole adds that “behind all of these flaky and whimsical properties lies a coordinated attempt to keep our species safe.”

Left on the Cutting Room Floor

When scientists first developed AI, they treated the subjective, most human parts of ourselves as superfluous, but Poole argues that machines need them too if we want them to behave ethically.

“It was assumed that robots with features like emotions and intuition, that made mistakes and looked for meaning and purpose, would not work as well,” Poole says. “But it is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving.”

