Since the tech industry began its love affair with machine learning about a decade ago, US lawmakers have chattered about the potential need for regulation to rein in the technology. No proposal to regulate corporate AI projects has come close to becoming law—but OpenAI’s release of ChatGPT in November has convinced some senators there is now an urgent need to do something to protect people’s rights against the potential harms of AI technology.
At a hearing held by a Senate Judiciary subcommittee yesterday, attendees heard a terrifying laundry list of ways artificial intelligence can harm people and democracy. Senators from both parties spoke in support of the idea of creating a new arm of the US government dedicated to regulating AI. The idea even got the backing of Sam Altman, CEO of OpenAI.
“My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman said. He also endorsed the idea of AI companies submitting their AI models to testing by outsiders and said a US AI regulator should have the power to grant or revoke licenses for creating AI above a certain threshold of capability.
A number of US federal agencies, including the Federal Trade Commission and the Food and Drug Administration, already regulate how companies use AI today. But Senator Peter Welch said his time in Congress has convinced him that the legislature can’t keep up with the pace of technological change.
“Unless we have an agency that is going to address these questions from social media and AI, we really don’t have much of a defense against the bad stuff, and the bad stuff will come,” said Welch, a Democrat. “We absolutely have to have an agency.”
Richard Blumenthal, a fellow Democrat who chaired the hearing, said that a new AI regulator may be necessary because Congress has shown it often fails to keep pace with new technology. US lawmakers’ spotty track record on digital privacy and social media was mentioned frequently during the hearing.
But Blumenthal also expressed concern that a new federal AI agency could struggle to match the tech industry’s speed and power. “Without proper funding you’ll run circles around those regulators,” he told Altman and his fellow witness from the industry, Christina Montgomery, IBM’s chief privacy and trust officer. Altman and Montgomery were joined by psychology professor turned AI commentator Gary Marcus, who advocated for the creation of an international body to monitor AI progress and encourage safe development of the technology.
Blumenthal opened the hearing with an AI voice clone of himself reciting text written by ChatGPT to highlight that AI can produce convincing results.
The senators did not suggest a name for the prospective agency or map out its possible functions in detail. They also discussed less radical regulatory responses to recent progress in AI.
Those included endorsing the idea of requiring public documentation of AI systems’ limitations or the datasets used to create them, akin to an AI nutrition label. Such ideas were introduced years ago by researchers like former Google Ethical AI team lead Timnit Gebru, who was ousted from the company after a dispute about a prescient research paper warning of the limitations and dangers of large language models.