Joe Biden Wants Hackers’ Help to Keep AI Chatbots In Check

ChatGPT has stoked new hopes about the potential of artificial intelligence—but also new fears. Today the White House joined the chorus of concern, announcing it will support a mass hacking exercise at the Defcon security conference this summer to probe generative AI systems from companies including Google.

The White House Office of Science and Technology Policy also said that $140 million will be diverted towards launching seven new National AI Research Institutes focused on developing ethical, transformative AI for the public good, bringing the total number to 25 nationwide.

The announcement came hours before a meeting on the opportunities and risks presented by AI between Vice President Kamala Harris and executives from Google and Microsoft as well as the startups Anthropic and OpenAI, which created ChatGPT.

The White House AI intervention comes as appetite for regulating the technology is growing around the world, fueled by the hype and investment sparked by ChatGPT. In the parliament of the European Union, lawmakers are negotiating final updates to a sweeping AI Act that will restrict and even ban some uses of AI, including adding coverage of generative AI. Brazilian lawmakers are also considering regulation geared toward protecting human rights in the age of AI. Draft generative AI regulation was announced by China’s government last month.

In Washington, DC, last week, Democratic senator Michael Bennet introduced a bill that would create an AI task force focused on protecting citizens’ privacy and civil rights. Also last week, four US regulatory agencies including the Federal Trade Commission and Department of Justice jointly pledged to use existing laws to protect the rights of American citizens in the age of AI. This week, the office of Democratic senator Ron Wyden confirmed plans to try again to pass a law called the Algorithmic Accountability Act, which would require companies to assess their algorithms and disclose when an automated system is in use.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, said in March at an event hosted by Axios that government scrutiny of AI was necessary if the technology was to be beneficial. “If we are going to seize these opportunities we have to start by wrestling with the risks,” Prabhakar said.

The White House-backed hacking exercise designed to expose weaknesses in generative AI systems will take place this summer at the Defcon security conference. Thousands of participants, including hackers and policy experts, will be asked to explore how generative models from companies including Google, Nvidia, and Stability AI align with the Biden administration’s AI Bill of Rights, announced in 2022, and a National Institute of Standards and Technology risk management framework released earlier this year.

Points will be awarded under a “Capture the Flag” format to encourage participants to test for a wide range of bugs or unsavory behavior from the AI systems. The event will be carried out in consultation with Microsoft, nonprofit SeedAI, the AI Vulnerability Database, and Humane Intelligence, a nonprofit created by data and social scientist Rumman Chowdhury. She previously led a group at Twitter working on ethics and machine learning, and hosted a bias bounty that uncovered skewed behavior in the social network’s automatic photo cropping.
