Since 2023, priorities, conversations and discourses on global AI governance have been increasingly shaped by the AI Summit series, an emerging series of global convenings widely attended by national leaders, industry bodies and technology companies. From the Bletchley AI Safety Summit to the latest edition, the India AI Impact Summit of 2026 in New Delhi (‘the Impact Summit’), there have now been four such large-scale global convenings. These have produced a series of outcomes, including a set of voluntary commitments agreed upon by tech companies on the development of Frontier AI models and, more recently, the multilaterally endorsed AI Impact Summit Declaration around shared priorities in future AI governance.
Notably, with the emergence of the series, there has also been a growing body of criticism highlighting the prior Summits’ limited engagement with civil society and academic organisations. For instance, the Paris AI Action Summit of 2025 was criticised for inviting very few civil society and academic organisations to the official event, and even fewer from the Global Majority — highlighting a critical gap in achieving the multistakeholder dialogue needed to advance a shared vision of equitable AI governance worldwide.
This is further compounded by the fact that, in their current framing, these critical conversations are often led by Global North priorities, despite the immediate impact of various AI technologies being primarily felt in the multilingual, culturally diverse and high-inequality contexts of the Global Majority. As such, it is vital that civil society and academic voices — including stronger regional networks from the Global Majority — form an essential stakeholder group in influencing priorities, agendas and outcomes in future AI governance processes like the AI Summit series. Notably, the Impact Summit was the first time that such a process has been hosted by a Global Majority country.
In recognition of these challenges, in 2025, the Centre for Communication Governance (CCG) and the Global Network Initiative (GNI) launched the Multistakeholder Approaches to Participation in AI Governance (MAP-AI) initiative, designed specifically to foster and support multistakeholder, under-represented voices in the lead-up to and during AI governance processes, and to demonstrate how inclusive participation can enhance AI governance. The launch of the MAP-AI initiative was preceded by a series of multistakeholder, pre-Summit convenings across four continents, where we continued to surface the aforementioned challenges, priorities and discourses. Our curated Insights Document further consolidates evidence-based insights and learnings emerging from these dialogues.
Finally, in February 2026, during the week of the Impact Summit, CCG and GNI organised a two-day convening to facilitate multistakeholder engagement. On 16 February, we organised the Shared Learning Forum on AI, a closed-door event for academics, think tanks, civil society and researchers. This was followed on 17 February by Reinforcements & Learning: Multistakeholder Convening on AI Governance, a multistakeholder event featuring civil society, academic, and researcher-led roundtables, workshops, and panels, with high-level participation from industry, civil society, academia, foundations, government, and multilateral organisation leaders. The event saw a turnout of five hundred participants, inviting deliberations, thoughts and insights from approximately one hundred and twenty speakers.
Through these two days and over twenty-one sessions, we explored key questions around global processes associated with AI governance — for instance, on evolving notions of multistakeholder approaches to these processes — and drilled down on specific sectoral issues, such as AI and Digital Public Infrastructure and privacy harms in AI models, which might otherwise fall outside the ambit of discussions at the Impact Summit and other AI governance processes. Deliberations also included situating AI safety within broader digital rights and public-interest frameworks, as well as exploring how mechanisms such as transparency reports, assurance frameworks, risk assessments, and self-certification can enable more robust accountability practices for AI systems.
These sessions were ultimately tied to three critical themes that framed our overall programming — global AI governance through collective responsibility, safe and trusted AI, and building context-driven AI infrastructure — as well as the cross-cutting theme of Global South leadership in AI.
Building on these, we are now excited to launch a series of blog posts that speak to key insights, themes and learnings across these deliberations in New Delhi, with the aim of informing forward-looking international AI governance efforts and feeding into future work across the various verticals of the MAP-AI Initiative.