OpenAI could pull out of Europe due to new AI rules, warns CEO Altman

OpenAI CEO Sam Altman said on Wednesday that his company could “stop operating” in the European Union if it fails to comply with provisions of new artificial intelligence legislation the bloc is preparing.

“We’re going to try to deliver,” Altman said on the sidelines of a panel discussion at University College London, part of an ongoing tour of European countries. He said he had met with EU regulators to discuss the AI law as part of his tour, and added that OpenAI has “a lot” of criticisms of the way the law is currently worded.

Altman said OpenAI’s criticism centered on the EU law’s designation of “high risk” systems as it is currently drafted. The law is still undergoing revisions, but as it stands it could require large AI models such as OpenAI’s ChatGPT and GPT-4 to be designated as “high risk”, forcing the companies behind them to comply with additional safety requirements. OpenAI has previously argued that its general-purpose systems are not inherently high risk.

“Either we will be able to meet these requirements or we will not,” Altman said of the EU AI Act’s provisions for high-risk systems. “If we can comply, we will; if we cannot, we will stop operating… We will try. But there are technical limits to what is possible.”

The law, Altman said, “wasn’t inherently flawed,” but he went on to say that “the subtle details here really matter.” During an onstage interview earlier in the day, Altman said his preference for regulation was “somewhere between the traditional European approach and the traditional US approach.”

Altman also said onstage that he was concerned about the risks posed by artificial intelligence, highlighting the possibility of AI-generated misinformation designed to appeal to an individual’s personal biases. AI-generated disinformation could, for example, have an impact on the 2024 US election, he said. But he suggested that social media platforms were more important drivers of misinformation than AI language models. “You can generate all the disinformation you want with GPT-4, but if it’s not being spread, it’s not going to do much,” he said.

Overall, Altman presented the London audience with an optimistic vision of a potential future in which the benefits of the technology far outweigh its risks. “I’m an optimist,” he said.

In a foray into socioeconomic policy, Altman raised the prospect of the need for wealth redistribution in an AI-driven future. “We’re going to have to think about the distribution of wealth differently than we do today, and that’s okay,” Altman said onstage. “We think about it a little differently after every technology revolution.”

Altman told TIME after the keynote that OpenAI was preparing to begin making public interventions on the issue of wealth redistribution in 2024, much as it is currently doing on AI regulatory policy. “We will try,” he said. “This is a project for next year for us.” OpenAI is currently conducting a five-year study on universal basic income, he said, which will conclude next year. “It will be a good time to do that,” Altman said.

Altman’s appearance at University College London attracted some negative attention. Outside the packed auditorium, a handful of protesters huddled, talking to people who were unable to get in. One protester carried a sign reading: “Stop AGI Suicide Run”. (AGI stands for “Artificial General Intelligence,” a hypothetical superintelligent AI that OpenAI has said it intends to one day build.) Protesters passed out fliers urging people to “oppose Sam Altman’s dangerous vision for the future.”


Protesters gather outside a UCL lecture hall in London on May 24, 2023, during a visit by OpenAI CEO Sam Altman. (Billy Perrigo / TIME)

“It’s time for the public to step up and say: it’s our future and we should have a choice about it,” said Gideon Futterman, 20, one of the protesters, who said he was a student studying solar geoengineering and existential risk at the University of Oxford. “We shouldn’t let Silicon Valley multimillionaires with messiah complexes decide what we want.”

“What we’re doing is a deal with the devil,” Futterman said. “Many of the people who think these systems are on the path to AGI also think that a bad future is more likely than a good one.”

Futterman told TIME that Altman showed up and had a brief conversation with him and the other protesters after his panel appearance.

“He said he understands our concerns, but feels that security and capabilities cannot be separated from one another,” Futterman said. “He said that OpenAI is not a player in the [AI] race, despite the fact that this is so clearly what they are doing. He basically said he doesn’t think this development can be stopped and said he has confidence in their safety.”

Write to Billy Perrigo at billy.perrigo@time.com.
