June 21, 2023
JAKARTA – Generative artificial intelligence (AI) is proving to be a transformative force in our economies and societies. Its rapid development in recent months underscores AI’s vital role as the foundational infrastructure upon which the digital landscape will evolve – and reinforces the need to ensure its responsible, ethical use.
Generative AI rapidly synthesizes vast amounts of information to produce diverse content across many media, merging human-like creativity with technology. This emerging field could revolutionize work across industries, automate tasks, foster creativity, and improve decision-making, equipping humans with tools to advance progress.
But it also raises challenges, ranging from ethical considerations and notions of fairness to more profound debates about intelligence and fears about reaching the technological singularity, a hypothetical future in which technology becomes uncontrollable. It poses potential risks, including job displacement, malicious exploitation, and misalignment with human values. It also raises questions about ownership and intellectual property rights, open versus closed models, the balance between innovation and safety, the tension between multilingual and sovereign national language models, and more. Perhaps the most pressing concern is that it could exacerbate the digital divide by distributing opportunities and resources unequally.
These uncertainties highlight the urgency of creating robust AI governance frameworks to ensure responsible and beneficial outcomes for all. AI governance strategies must weigh the opportunities against the risks and propose ways forward that strike a delicate balance between harnessing generative AI’s potential and upholding ethical safeguards. As the technology continues to advance and expand its reach, it is imperative to ensure that its development and deployment remain firmly aligned with human-centric principles, driving societal progress at every turn.
International institutions and initiatives – including the United Nations, the OECD, the G7 Hiroshima AI Process, the G20 AI portal, and the joint EU-US Trade and Technology Council – have put forward proposals for ethical principles, transparency, and safety in the development and deployment of AI technologies. Individual countries are also exploring regulatory approaches, from industry-led self-regulation to formal governance models. These initiatives are promising – but alone they are not enough. Addressing the rapid pace and expanding reach of generative AI development requires global public-private collaboration.
In response, the World Economic Forum has launched the AI Governance Alliance to unite industry leaders, governments, academic institutions, and civil society organizations, and to champion responsible global design and release of transparent and inclusive AI systems.
This initiative builds upon existing frameworks and incorporates preliminary recommendations from Responsible AI Leadership: A Global Summit on Generative AI, hosted by the World Economic Forum in April.
The AI Governance Alliance draws on the World Economic Forum’s more than 50 years of experience in building multi-stakeholder partnerships. It brings together private-sector knowledge, public-sector governance mechanisms, and civil-society objectives to address the transformative nature of generative AI systems.
With the support of the World Economic Forum’s Centre for the Fourth Industrial Revolution (C4IR), the alliance actively engages with various regions while contributing to a global approach to governing these systems.
The alliance’s mission rests on three fundamental actions that must be taken to ensure responsible and safe AI development and deployment.
First, we must prioritize safe systems and technologies, investing resources in robust and secure AI systems to ensure user safety and mitigate risk. This involves technical work such as establishing shared terminology, benchmarks, and traceability, as well as implementing responsible practices throughout development and deployment and ensuring robust evaluation methods.
Second, we must ensure sustainable applications and transformation, aligning generative AI with long-term societal goals, addressing biases, and promoting transparency. It is essential to equip business and government leaders with the knowledge and foresight necessary to harness the power of generative AI effectively and responsibly within their respective organizations, while mitigating the risks that come with it.
Third, resilient governance and regulation are key. The alliance will collaborate with policymakers and stakeholders to establish ethical frameworks and regulatory measures specific to generative AI. These efforts, which aim to anticipate potential risks, guide development, and foster a globally harmonized understanding of responsible AI practices, will involve evaluating societal impact, championing equity and inclusion, and co-designing essential governance blueprints.
We must come together to ensure innovation thrives and the benefits of AI are realized while minimizing risks. Generative AI can be a force for good, empowering individuals and advancing societies – if we bring together diverse insights and sound strategies to set it on the right path today.
***
Cathy Li is Head of AI, Data and Metaverse at the World Economic Forum, where Jeremy Jurgens is a managing director.