The Institute for Policy, Advocacy, and Governance

U.K. summit shows why the U.N. should take lead on AI issues

Prof. Syed Munir Khasru

Nikkei Asia (Japan)
November 16, 2023

Link: https://asia.nikkei.com/Opinion/U.K.-summit-shows-why-the-U.N.-should-take-lead-on-AI-issues#

__________________

The AI Safety Summit convened two weeks ago by British Prime Minister Rishi Sunak was meant to align international stakeholders on ground rules for the development of artificial intelligence.

But while Sunak hailed the summit in Milton Keynes and its resulting declaration as “world-first” breakthroughs, and the gathering drew participants from 28 nations, it delivered little in the way of concrete outcomes and fell clearly short on diverse representation.

A major shortcoming of the summit declaration was its lack of specificity. The summit produced no binding standards, regulations, policy changes, ethical guidelines or specific targets for reducing AI bias, nor any action plan to implement the principles discussed.

Despite the hype, the U.K. summit was not even really a first. The 2023 World Artificial Intelligence Conference held in Shanghai in July actually had more tangible outcomes. But given the challenges involved, an ad hoc event cannot be expected to show the way on AI governance; this is really an area where the U.N. should be moving matters forward.

While the EU has proposed requiring high-risk AI systems to meet transparency obligations, no such concrete commitments emerged from the U.K. summit. With advanced AI already deployed in high-impact sectors, the lack of concrete guardrails is worrying. Rhetoric needs to shift to action, with clear mechanisms for ongoing governance and accountability.

The unregulated proliferation of deepfakes, disinformation, autonomous cyber weapons, and other risks demands the establishment of global standards and policies. Otherwise, ethical AI will remain an aspiration rather than a reality.

Given AI's profound global impact, it is concerning that an event like the U.K. summit was limited to a narrow range of technology companies and governments. Civil society groups, marginalized communities and developing countries were all underrepresented at the summit.

This is a serious problem. Such perspectives are indispensable for inclusive governance as those at risk of harm can help expose biased data and algorithms that could amplify injustice if deployed uncritically.

The U.K. event was dominated by speakers from Western economies including the U.S., the EU, Canada, Japan and Australia. The summit declaration was signed by 28 countries and the EU, with Western nations predominant, although China, India, Indonesia, Japan, Singapore, the Philippines and South Korea were among the signatories.

AI is a global challenge and requires global input. A Western-centric approach to AI governance could lead to the development of AI standards not relevant to other parts of the world, due in part to differences in values.

AI presents a universal challenge, but diverse nations have differing regulatory approaches. Progress necessitates incorporating non-Western viewpoints into cooperative initiatives, not ignoring them. Proposals from academia and civil society at Milton Keynes went largely unheard, undermining holistic governance.

Geopolitical tensions, particularly those between the U.S. and China, are a major obstacle to global cooperation on AI governance. This is increasing the risk of incompatible AI standards taking root in different countries.

Disappointingly, the U.K. summit avoided many of the contentious issues central to global collaboration on AI governance and did little to build trust between competing nations.

[Photo caption: Workers prepare for the World AI Conference in Shanghai in July. The conference agreed on the development of open-source AI platforms, common data sets and joint laboratories. © AP]

These shortcomings were compounded by the absence of any coherent plan for AI governance. Strategies to curb AI misuse for oppression, mass surveillance and autonomous weapons were sorely lacking.

Control of dual-use AI technologies is already challenging because of the difficulty in distinguishing between legitimate and malicious uses. These dangers require urgent multilateral attention before humanitarian crises emerge.

Tangible outcomes from the Shanghai AI conference included agreement on the development of open-source AI platforms, common data sets and joint laboratories. Participants demonstrated a practical, results-oriented approach and the conference boasted diverse, global representation, including Nobel laureates and international academics.

The event also showcased AI models and their industrial impact, presenting a practical, solutions-focused agenda. Interactive sessions, livestreamed globally, made the Shanghai conference accessible and engaging, and its forward-looking approach toward emerging AI technologies marked a significant advancement in achieving concrete AI development and global cooperation objectives.

AI governance will inevitably be complex, multifaceted and evolving. Collective global ambitions for responsible AI development require transparency, inclusion, cooperation and grappling with AI’s dark dual-use potential.

While the U.K. summit represents a starting point of sorts, much work remains to build an effective framework for AI governance that delivers concrete outcomes through diverse cooperation and sustained coordination between nations.

This is why establishing a U.N. body for AI governance, akin to the U.N. Framework Convention on Climate Change, would be appropriate. Such a body would help ensure an equitable and globally diverse perspective and the incorporation of views from developing nations, emerging economies and non-Western cultures.

A U.N.-led approach would also offer a more neutral and balanced platform, mitigating geopolitical biases and facilitating the development and enforcement of globally applicable AI guidelines and standards. Such an initiative would promote equitable resource and knowledge sharing and could maintain long-term sustainability and oversight.

This kind of structure should result in higher public trust and legitimacy and facilitate capacity building. It could also significantly increase public awareness and engagement in AI issues, ensuring a comprehensive, structured approach to global coordination and policymaking.