Prof. Syed Munir Khasru
South China Morning Post (Hong Kong)
August 2, 2023
____________________
At a time when the artificial intelligence industry is expected to grow 37.3 per cent a year to reach US$1.8 trillion by 2030, the Biden administration has secured voluntary commitments from seven leading companies – Google, OpenAI, Amazon, Meta, Microsoft, Anthropic and Inflection – to address the risks posed by AI.
The companies have committed to testing their AI systems for capability and safety, and to sharing the results, along with information on managing AI risks, before releasing the systems to the public. As governments across the world invest heavily in AI research and development, the ethical challenges of AI – including bias, privacy and data protection – are of great concern.
Cybercrime is an issue affecting businesses around the world, and academic integrity is under threat as students increasingly turn in AI-generated papers.
Bias within AI systems poses additional ethical challenges, as unconscious biases can be introduced through human involvement. Examples include the skewed outcomes of facial recognition for people of colour and the discriminatory consequences of automated systems such as Amazon’s recruiting tool, which reflected bias against women.
Meanwhile, the geopolitical battle for AI supremacy is on, with the United States and China investing massively in research and development in the fight for technological dominance. While many countries are taking policy action on AI, the scope of their rules differs, with some nations focusing on specific AI systems while others adopt more comprehensive measures. An emphasis on security, justice, openness and human oversight is crucial to avoid bias and govern these systems properly.
The AI for Good Global Summit, held in Geneva by the UN’s International Telecommunication Union (ITU), and the World Artificial Intelligence Conference in Shanghai have produced recommendations underscoring the need for responsible AI development and use. Doreen Bogdan-Martin, secretary general of the ITU, has called for swift collaboration, warning that unrestrained AI risks “spiralling out of control”, leading to “social unrest, geopolitical instability and economic disparity” not experienced before.
Those geopolitical divisions were evident at the two events: speakers at the Geneva summit came mostly from the West, while the Shanghai conference featured mostly speakers from China and its allies. The UK plans to host a global AI summit this autumn.
Geopolitics is a threat to international cooperation and standardisation on AI legislation, resulting in patchwork strategies, competing goals and disjointed international standards. The ongoing chip war between the US and China could make things worse, leading to problems with interoperability, uneven enforcement and unequal protection.
Inadequate protections could deepen inequality and cause moral dilemmas. ITU deputy secretary general Tomas Lamanauskas said, “It is up to us to make sure the good prevails over the risky, and that we leverage AI to help rescue the sustainable development agenda and save our planet.”
Cultivating trust and establishing an environment for exchanging best practices and discussing ethical issues are essential if the world is to start converging on global rules of engagement and ensure AI upholds justice, respects human rights and minimises risk.
Stronger support is needed to bring nations together to craft unified legislation to regulate AI, promote cooperation, and share information and resources through multilateral agreements, treaties and global standards. Ethical and responsible management of AI requires a global governance structure with the participation of stakeholders from across the world.
There are successful international collaborations in other fields. The Montreal Protocol, a worldwide accord to phase out ozone-depleting compounds, illustrates what strong international leadership and government cooperation can achieve. The European Union’s General Data Protection Regulation has been an exemplar for regulation in the digital sphere.
The world faces a stark choice: either we let tech giants control our fate, or the global community comes together to harness the power of AI to improve everyone’s lives. Responsible navigation of AI’s rapid growth requires addressing ethical concerns, ensuring accountability, and promoting collaboration and cooperation.
Getting the most out of AI in a way that benefits everyone is not mutually exclusive with developing unified international legislation, multilateral agreements and global governance standards. As Bogdan-Martin warned, “We’re running out of time”. The stakes could not be higher for the global community to act quickly, collectively and decisively on AI.