The Global Battle to Regulate AI Is Just Beginning

By the end of April, the European Parliament had zeroed in on a list of practices to be prohibited: social scoring, predictive policing, algorithms that indiscriminately scrape the internet for photographs, and real-time biometric recognition in public spaces. However, on Thursday, parliament members from the conservative European People’s Party were still questioning whether the biometric ban should be taken out. “It’s a strongly divisive political issue, because some political forces and groups see it as a crime-fighting force and others, like the progressives, we see that as a system of social control,” says Brando Benifei, co-rapporteur and an Italian MEP from the Socialists and Democrats political group.

Next came talks about the types of AI that should be flagged as high-risk, such as algorithms used to manage a company’s workforce or by a government to manage migration. These are not banned. “But because of their potential implications—and I underline the word potential—on our rights and interests, they are to go through some compliance requirements, to make sure those risks are properly mitigated,” says the Romanian MEP and co-rapporteur Dragoș Tudorache, adding that most of these requirements have to do with transparency. Developers have to show what data they’ve used to train their AI, and they must demonstrate how they have proactively tried to eliminate bias. There would also be a new AI body set up to create a central hub for enforcement.

Companies deploying generative AI tools such as ChatGPT would have to disclose if their models have been trained on copyrighted material—making lawsuits more likely. And text or image generators, such as MidJourney, would also be required to identify themselves as machines and mark their content in a way that shows it’s artificially generated. They should also ensure that their tools do not produce child abuse, terrorism, or hate speech, or any other type of content that violates EU law.

One person, who asked to remain anonymous because they did not want to attract negative attention from lobbying groups, said some of the rules for general-purpose AI systems were watered down at the start of May following lobbying by tech giants. Requirements for foundation models—which form the basis of tools like ChatGPT—to be audited by independent experts were taken out.

However, the parliament did agree that foundation models should be registered in a database before being released to the market, so companies would have to inform the EU of what they have started selling. “That’s a good start,” says Nicolas Moës, director of European AI governance at the Future Society, a think tank.

The lobbying by Big Tech companies, including Alphabet and Microsoft, is something that lawmakers worldwide will need to be wary of, says Sarah Myers West, managing director of the AI Now Institute, another think tank. “I think we’re seeing an emerging playbook for how they’re trying to tilt the policy environment in their favor,” she says.

What the European Parliament has ended up with is an agreement that tries to please everyone. “It’s a true compromise,” says a parliament official, who asked not to be named because they are not authorized to speak publicly. “Everybody’s equally unhappy.”
