AI safety: How close is global regulation of artificial intelligence really?

As more than 100 attendees from civil society, the world’s leading tech companies, and governments gathered in an English stately home, there was some tension.

This was the world’s first AI Safety Summit convened by a national government, and it was intended to help chart the future direction of the technology. And it was held at Bletchley Park, where more than 80 years ago a group of British codebreakers cracked the German Enigma code.

The term “artificial intelligence” has been around since 1956, and the idea of thinking machines goes back centuries before that – albeit more in theory than in practice. In November, Collins Dictionary named “AI” its word of the year, as the technology had become the talking point of 2023. (See this simple BBC guide to AI to learn more about how it works.)

AI already dictates large parts of our lives without us knowing it, and has for years. But the October 2022 release of ChatGPT by OpenAI, a company still less than a decade old, changed the paradigm.

Generative AI – in which an artificial intelligence model draws on its training data and an input, often a question or conversation starter, to produce new text, images or other content – has changed the game. And the speed of its development has alarmed both the public and the politicians who legislate on their behalf.

Consensus has been reached on the need for regulation: 28 countries including the US, UK and China, alongside the European Union, said as much by signing the Bletchley Declaration, a world-first global agreement, at the UK’s AI Safety Summit.


But what should be done, and by whom, is still up for debate.

The UK government announced on the first day of the summit that it was launching a UK AI Safety Institute. But that was immediately followed by the US unveiling its own version.

Gina Raimondo, the US commerce secretary, acknowledged the competition. “Even as nations compete vigorously, we can and must search for global solutions to global problems,” she said in a speech at the UK summit.

The day before the Bletchley Park summit began, the US unveiled an executive order outlining how it planned to regulate AI. Experts watching the developments play out believe it was a deliberate decision by the US to grab control.

And around the same time in Japan, the G7 group of industrialised countries issued a joint statement on the importance of regulating AI that seemed timed to remind the world that they, too, had a stake in the debate.

Plenty of processes are running in parallel around the world to rein in AI companies and stop them developing their tools unsafely. But they’re not always working together.

I think there’s pretty divergent views across various countries around what exactly to do – Deb Raji

That much was clear at the UK summit itself, according to Deb Raji, a fellow at the Mozilla Foundation, who attended many of the meetings at Bletchley Park. “There was a global consensus on the thought that AI was something to reflect on and to regulate,” she says. “But I think there’s pretty divergent views across various countries around what exactly to do.”

Raji points out that in some of the sessions she attended, even member states of the European Union were advocating different, competing and contradictory approaches to regulating AI – approaches that were also at odds with the act the countries within the trading bloc agreed earlier this year. “Even within that more coordinated discourse around AI, there was a shocking amount of diversity in the perspectives involved,” she says.

Such diversity and such competition is natural, says Margaret Mitchell, a researcher and chief ethics scientist at Hugging Face, an AI company lobbying for a more thoughtful, humanity-focused approach to AI. “Governments will seek to protect their national interests, and many of them will seek to establish themselves as leaders,” she says.

“That kind of competition to each be the leader falls out, I think, from the tendencies of humankind and the kinds of personalities that are promoted and empowered.”

Mitchell says that much of the jostling to dictate global AI rules comes down to “alpha manoeuvres” that are a function of who is in power in different countries worldwide. However, Mitchell, Raji and Mike Katell, an ethics fellow at the Alan Turing Institute, all highlighted that voices from the Global North and more economically developed countries were given greater prominence at the table this week than those from elsewhere in the world.

“You can look at maps where the AI regulation is happening around the world,” says Katell. “There are big gaps in the Global South. There’s very little happening in Africa.”

Katell says that “if there’s a single word that correctly describes the AI landscape, it’s competition”. That brings challenges with duplication and contradiction, believes Raji. “I think it’s going to be really hard to come up with a consensus,” she says.

Some forward steps are happening despite that competition. “The actions across countries this week are important steps in what is a difficult regulatory process on a global level,” says David Haber, co-founder and chief executive of Lakera, an AI safety firm, who advised the European Union as it drew up its own AI Act.

I am a little bit more optimistic than I was before this summit and the move by the Americans – Mike Katell

However, Haber acknowledges that this is progress rather than action – and there’s plenty of crosstalk and duplication (and occasional contradiction) depending on the jurisdiction. Until some baseline global consensus is reached, companies will have to police themselves, he says.

“While governments triangulate their policies, the real responsibility will continue to remain with industrial players,” he says. “AI risks will continue to evolve quickly. We cannot wait for the technology to fully mature before applying standard security and safety practices.”

As for who will win out, and what it means, the experts aren’t sure. “I’m not really holding my breath that it will amount to much, but I am a little bit more optimistic than I was before this summit and the move by the Americans,” says Katell.

Raji believes the Americans are leading the way. The US move this week “was a much more meaty regulatory intervention, and I think them putting that out was an important signal,” she says. However, she hopes that the US alone doesn’t have the final say. “I hope there is an opportunity for various countries to enter the conversation and influence each other,” she says.


Credit: bbc.com
