A Bipartisan Plan for AI and National Security

US representatives Will Hurd and Robin Kelly are from opposite sides of the ever-widening aisle, but they share a concern that the US may lose its grip on artificial intelligence, threatening the American economy and the balance of world power.

Thursday, Hurd (R-Texas) and Kelly (D-Illinois) offered suggestions to prevent the US from falling behind China in particular on applications of AI to defense and national security. They want to cut off China’s access to AI-specific silicon chips and push Congress and federal agencies to devote more resources to advancing and safely deploying AI technology.

Although Capitol Hill is increasingly divided, the bipartisan duo say they see an emerging consensus that China poses a serious threat and that supporting US tech development is a vital remedy.

“American leadership and advanced technology has been critical to our success since World War II, and we are in a race with the government of China,” Hurd says. “It’s time for Congress to play its role.” Kelly, a member of the Congressional Black Caucus, says she has found many Republicans beyond Hurd, the only Black Republican in the House, open to working together on tech issues. “I think people in Congress now understand that we need to do more than we have been doing,” she says.

The Pentagon’s National Defense Strategy, updated in 2018, says AI will be key to staying ahead of rivals such as China and Russia. Thursday’s report lays out recommendations on how Congress and the Pentagon should support and direct use of the technology in areas such as autonomous military vehicles. It was written in collaboration with the Bipartisan Policy Center and Georgetown’s Center for Security and Emerging Technology, which consulted experts from government, industry, and academia.

The report says the US should work more closely with allies on AI development and standards, while restricting exports to China of technology such as new computer chips to power machine learning. Such hardware has enabled many recent advances by leading corporate labs, such as at Google. The report also urges federal agencies to hand out more money and computing power to support AI development across government, industry, and academia. It asks the Pentagon to consider how courts-martial will handle questions of liability when autonomous systems are used in war, and to talk more about its commitment to ethical uses of AI.

Hurd and Kelly say military AI is so potentially powerful that America should engage in a kind of AI diplomacy to prevent dangerous misunderstandings. One of the report’s 25 recommendations is that the US establish AI-specific communication procedures with China and Russia to allow human-to-human dialog to defuse any accidental escalation caused by algorithms. The suggestion has echoes of the Moscow-Washington hotline installed in 1963 during the Cold War. “Imagine in a high stakes issue: What does a Cuban missile crisis look like with the use of AI?” asks Hurd, who is retiring from Congress at the end of the year.

Beyond such worst-case scenarios, the report includes more sober ideas that could help dismantle some hype around military AI and killer robots. It urges the Pentagon to do more to test the robustness of technologies such as machine learning, which can fail in unpredictable ways in fast-changing situations such as a battlefield. Intelligence agencies and the military should focus AI deployment on back-office and noncritical uses until reliability improves, the report says. That could presage fat new contracts to leading computing companies such as Amazon, Microsoft, and Google.

Helen Toner, director of strategy at the Georgetown center, says although the Pentagon and intelligence community are trying to build AI systems that are reliable and responsible, “there’s a question of whether they will have the ability or institutional support.” Congressional funding and oversight would help them get it right, she says.

