
The Bletchley Park artificial intelligence summit: Good optics, less substance

A jumbled collection of commonplace platitudes does not a remarkable outcome make.

The AI Safety Summit at Bletchley Park, 2 November (Simon Walker/No 10 Downing Street/Flickr)

The Lowy Institute’s Lydia Khalil gave us a guide in these pages to UK Prime Minister Rishi Sunak’s two-day artificial intelligence summit, held this month at the appropriately historic Bletchley Park.

The United Kingdom should have some regulatory clout: it holds a top-tier position in frontier AI research, and still has a promising ecosystem of AI-related companies and some leading think tanks. But since the European Union published its proposed AI Act in April 2021, the United Kingdom has found itself betwixt and between, issuing repeated ambitious-sounding declarations undermined by limited delivery on AI governance.

Broader factors contribute to the United Kingdom’s difficult position. The UK government does not want to align itself with the European Union, and has proved unable, unsurprisingly, to shape EU norms. Nor is it seen as a relevant partner by the United States: President Joe Biden effectively pre-empted the summit by signing an Executive Order on AI, containing significant detail and some specific actions, only a couple of days before. Together, the United States and the European Union constitute two “like-minded blocs”, which are investing heavily in making their regulatory ecosystems compatible across the Atlantic. The United Kingdom appears to play no real role in these efforts.


To make things worse, UK relevance in the East is dwindling. Singapore, a traditionally close ally and an active AI policymaker, is exploring its own path in cooperation with the Organisation for Economic Co-operation and Development and the European Union, while paying limited (if any) attention to what comes from London. Any UK regulatory vision for AI is irrelevant in the other regional powerhouses: Japan, South Korea and China.

Australia is almost alone in still paying some deference to London’s views. The Department of Industry, Science and Resources’ discussion paper on Safe and responsible AI in Australia describes the UK’s AI governance initiatives in some detail: as much as it gives Canada (which has been far more active in regulatory terms). Even so, Australian discussions have devoted far more attention to developments in the European Union and the United States.

Within this geopolitical context, the Bletchley Park Summit was a potential breakthrough move, designed to give the United Kingdom momentum, and a seat at the global table. Some of the reporting led people to believe it worked – but the impression does not hold up to deeper scrutiny.

First, the optics. Many important commercial and government players were in the room. Having US Commerce Secretary Gina Raimondo take to the stage alongside China’s Vice Minister of Science and Technology Wu Zhaohui was a good omen. But even this is hardly new. Global, high-level discussions and even mutual agreement had already occurred at the UN level, with the November 2021 UNESCO “Recommendation on the Ethics of Artificial Intelligence” adopted by all 193 member states. More than high-level expressions of goodwill was needed to make the summit a success.

The more high-profile of the two documents released following the summit was the Bletchley Declaration, signed by 28 states (including China) plus the European Union. This document is, at best, a rather jumbled collection of commonplace platitudes about AI. The only “commitment” is that states “resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all”, and even then the declaration merely points repeatedly to existing international initiatives as the means to achieve this goal. This is no advance on what we already have in the UNESCO Recommendation or the OECD AI Principles. It adds no new global mechanism, nor anything else concrete, to the rather fractured global picture on AI governance.

The second document is the “landmark” deal on AI testing agreed by a subgroup of attendees: Australia, Canada, France, Germany, Italy, Japan, Korea, Singapore, the United States, the United Kingdom and the European Union. It is an agreement to cooperate on testing leading companies’ AI models, and it could have more substance than the Declaration. In public statements, the big AI firms have accepted the need to give public sector AI safety institutes and researchers deeper access to test the safety of their most advanced frontier models. Biden’s Executive Order outlines active steps in that direction in the United States; this statement may be a move to internationalise that activity.

While detail is yet to be provided, it seems Sunak’s plan is to develop, with the United States, an institute dedicated to this purpose and based in the United Kingdom. That would give the United Kingdom a win for its relationship with the United States. The real test will be making this happen in an operational timeframe, and securing genuine cooperation from the companies.

For now, we can expect more direct regulatory and governance consequences from the US Executive Order, or from the European Union’s legislative package for AI. Longer term, genuine cooperation on testing would be something. Here too, however, the question will remain: how global will such efforts be – and what about China and Chinese companies?



