
OpenAI Warns It May Leave Europe Over Regulation


Sam Altman speaking to reporters during a visit to London this week / FT.
OpenAI chief Sam Altman has warned that Brussels’ efforts to regulate artificial intelligence could lead the maker of ChatGPT to pull its services from the EU, in the starkest sign yet of a growing transatlantic rift over how to control the technology.

Speaking to reporters during a visit to London this week, Altman said he had “many concerns” about the EU’s planned AI Act, which is due to be finalised next year. In particular, he pointed to the European parliament’s move this month to expand its proposed regulations to include the latest wave of general purpose AI technology, including large language models such as OpenAI’s GPT-4.

“The details really matter,” Altman said. “We will try to comply, but if we can’t comply we will cease operating.” Altman’s warning comes as US tech companies prepare for what some predict will be a drawn-out battle with European regulators over a technology that has shaken up the industry this year. Google’s chief executive Sundar Pichai has also toured European capitals this week, seeking to influence policymakers as they develop “guardrails” to regulate AI.

The EU’s AI Act was initially designed to deal with specific, high-risk uses of artificial intelligence, including its use in regulated products such as medical equipment, or when companies use it to make important decisions such as granting loans or hiring.

However, the sensation caused by the launch of ChatGPT late last year has caused a rethink, with the European parliament this month setting out extra rules for widely used systems that have general applications beyond the cases previously targeted. The proposal still needs to be negotiated with member states and the European Commission before the law comes into force by 2025.

The latest plan would require makers of “foundation models” — the large systems that stand behind services such as ChatGPT — to identify and try to reduce risks that their technology could pose in a wide range of settings. The new requirement would make the companies that develop the models, including OpenAI and Google, partly responsible for how their AI systems are used, even if they have no control over the particular applications the technology has been embedded in.

The latest rules would also force tech companies to publish summaries of copyrighted data that had been used to train their AI models, opening the way for artists and others to try to claim compensation for the use of their material.

The attempt to regulate generative AI while the technology is still in its infancy showed a “fear on the part of lawmakers, who are reading the headlines like everyone else”, said Christian Borggreen, European head of the Washington-based Computer and Communications Industry Association. US tech companies had supported the EU’s earlier plan to regulate AI before the “knee-jerk” reaction to ChatGPT, he added.

US tech companies have urged Brussels to move more cautiously when it comes to regulating the latest AI, arguing Europe should take longer to study the technology and work out how to balance the opportunities and risks. 

Pichai met officials in Brussels on Wednesday to discuss AI policy, including Brando Benifei and Dragoş Tudorache, the leading MEPs in charge of the AI Act. Google’s CEO emphasised the need for appropriate regulation for the technology that did not stifle innovation, said three people present at these meetings.

Pichai also met Thierry Breton, the EU’s digital chief overseeing the AI Act. Breton told the Financial Times they discussed introducing an “AI pact” — an informal set of guidelines for AI companies to adhere to before formal rules come into effect — because there was “no time to lose in the AI race to build a safe online environment”.

US critics claim the EU’s AI Act will impose broad new responsibilities to control risks from the latest AI systems without at the same time laying down specific standards they are expected to meet. While it is too early to predict the practical effects, the open-ended nature of the law could lead some US tech companies to rethink their involvement in Europe, said Peter Schwartz, senior vice-president of strategic planning at software company Salesforce.

He added Brussels “will act without reference to reality, as it has before” and that, without any European companies leading the charge in advanced AI, the bloc’s politicians have little incentive to support the growth of the industry. “It will basically be European regulators regulating American companies, as it has been throughout the IT era.”

The European proposals would prove workable if they led to “continuing requirements on companies to keep up with the latest research [on AI safety] and the need to continually identify and reduce risks”, said Alex Engler, a fellow at the Brookings Institution in Washington. “Some of the vagueness could be filled in by the [commission] and by standards bodies later.”

While the law appeared to be targeted at only large systems such as ChatGPT and Google’s Bard chatbot, there was a risk that it “will hit open-source models and non-profit use” of the latest AI, Engler said. Executives from OpenAI and Google have said in recent days that they back eventual regulation of AI, though they have called for further investigation and debate.

Kent Walker, Google’s president of global affairs, said in a blog post last week that the company supported efforts to set standards and reach broad policy agreement on AI, like those under way in the US, UK and Singapore — while pointedly declining to comment on the EU, which is the furthest along in adopting specific rules.

The political timetable means Brussels may choose to move ahead with its current proposal rather than try to hammer out more specific rules as generative AI develops, said Engler. Taking longer to refine the AI Act would risk delaying it beyond the term of the current EU presidency, something that could return the whole plan to the drawing board, he added.