Artificial intelligence’s newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.
The chatbot dazzled the internet in past months with its rapid-fire production of human-like prose. It declared its love for a New York Times journalist. It wrote a haiku about monkeys breaking free from a laboratory. It even got to the floor of the European Parliament, where two German members gave speeches drafted by ChatGPT to highlight the need to rein in AI technology.
But after months of internet lolz — and doomsaying from critics — the technology is now confronting European Union regulators with a puzzling question: How do we bring this thing under control?
The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc’s draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight.
The catch? ChatGPT can serve both the benign and the malignant.
This type of AI, called a large language model, has no single intended use: People can prompt it to write songs, novels and poems, but also computer code, policy briefs, fake news reports or, as a Colombian judge has admitted, court rulings. Other models trained on images rather than text can generate everything from cartoons to false pictures of politicians, sparking disinformation fears.
In one case, the new Bing search engine powered by ChatGPT’s technology threatened a researcher with “hack[ing]” and “ruin.” In another, an AI-powered app to transform pictures into cartoons called Lensa hypersexualized photos of Asian women.
“These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable,” said Gary Marcus, an AI expert and vocal critic.
These AIs “are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose,” said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.
Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December; that text would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.
The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.
The idea was met with skepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache’s own Liberal group. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament’s position, said that the amendment “would make numerous activities high-risk, that are not risky at all.”
In contrast, activists and observers feel that the proposal was just scratching the surface of the general-purpose AI conundrum. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy.
The two lead Parliament lawmakers are also working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the technology's risks and being transparent about how it works. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.
Professionals in sectors like education, employment, banking and law enforcement have to be aware “of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei said.
If Parliament has trouble wrapping its head around ChatGPT regulation, Brussels is bracing itself for the negotiations that will come after.
The European Commission, EU Council and Parliament will hash out the details of a final AI Act in three-way negotiations, expected to start in April at the earliest. There, ChatGPT could well cause negotiators to hit a deadlock, as the three parties work out a common solution to the shiny new technology.
On the sidelines, Big Tech firms — especially those with skin in the game, like Microsoft and Google — are closely watching.
The EU’s AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton, suggesting that general-purpose AI systems such as ChatGPT are hardly being used for risky activities, and instead are used mostly for drafting documents and helping with writing code.
“We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton said. (ChatGPT, created by U.S. research group OpenAI, has Microsoft as an investor and is now seen as a core element in its strategy to revive its search engine Bing. OpenAI did not respond to a request for comment.)
A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.
Could the bot itself come to EU rulemakers’ rescue, perhaps?
ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.
“The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms,” it said.
The EU, however, has follow-up questions.