During the week of the India AI Impact Summit in February 2026, scattered pockets within and around the summit venue in New Delhi became sites of sustained, sometimes uncomfortable, and consequential conversation about the future of AI governance. As part of the two-day programming hosted by the Centre for Communication Governance (CCG) and the Global Network Initiative (GNI), and as part of our MAP-AI initiative, we brought together a wide range of policymakers, regulators, diplomats, industry leaders, technologists, civil society organisations, and academics from across the globe, with a strong and intentional emphasis on amplifying voices from the Global South. What emerged across both days was not consensus, but something arguably more valuable: a clearer map of the fault lines, and a sharper sense of what it would actually take to address them.
In this piece, we explore four recurring themes that cut across the plenary sessions on both days: the legitimacy of multistakeholder processes, the uneven distribution of agenda-setting power, the gap between governance principles and their operational reality, and broader structural questions about what AI is actually being built to do, and for whom.

Our plenary sessions highlighted that the foundational tension running beneath almost every AI governance conversation today is that genuine inclusivity is, by design, uncomfortable. Discomfort, however, it was argued, is not a failure of process but evidence that the process is working: it forces attention onto contested issues and confers legitimacy on outcomes that might otherwise be dismissed. As one panellist argued, the risk of the smoother alternative is governance that looks participatory without being consequential.
This concern about ‘form’ versus ‘substance’ in AI governance processes was drawn sharply from the outset. Processes can be consultative without being decisive, filling seats without shifting power, one panellist stressed. Panellists also warned against the risk of civil society organisations being cast as merely friction in an otherwise efficient system, rather than as holders of knowledge that policymakers cannot access any other way. The Global Network Initiative was invoked as a counterexample: a case where major technology companies eventually recognised they could not resolve global questions around privacy and free expression without the sustained, institutionalised involvement of civil society and academia. That lesson, however, does not yet seem to have been fully absorbed into AI governance architecture.
The question of bargaining power ran as a persistent thread across these sessions. Discussions examined what it would take for Global South actors to move from participation to agenda-setting in international AI governance spaces. This was seen as a significant distinction given how standards, norms, and regulatory templates tend to travel.
Interestingly, the term “Global South” itself was also interrogated. One framing described it as a position, not a place. Panellists argued that it is a category imposed externally on a heterogeneous set of countries to make them legible on others’ terms. Another pushed back on the North-South binary altogether, noting that no country has yet fully solved the problem of designing trustworthy AI systems, and that common ground may be more available than the framing suggests.
The trajectory of AI technologies has also exceeded early expectations, leaving many Global South governments exposed to risks of technological “colonisation”. Panellists noted that in South Africa and beyond, structural power imbalances between government, corporate, and civil society actors persist within these systems. African universities, for instance, were flagged as lagging in governance focus, demanding renewed geopolitical and structural research at a fundamental level.
A recurring diagnosis across sessions concerned the distance between high-level governance principles and their operational implementation. Concepts like fairness, accountability, and human rights carry real weight in policy documents but require translation into technical requirements that engineers can actually build to. Panellists argued that this translation gap is itself unevenly distributed: some jurisdictions have policymakers with genuine technical fluency, but most do not.
Copy-pasting foreign laws was also identified as a regulatory model likely to fail: contextual factors, including digital divides, political instability, capacity gaps, and population scale, demand tailored models rather than transposed frameworks. Regionally, the discussion scoped Brazil’s advances in networked governance and legislation and China’s layered, flexible model (guidelines atop frequent updates), while the EU’s AI Act was described as leading and the US approach as offering guidelines. None of these, panellists argued, translated cleanly into the realities facing governments in, for instance, Nairobi or Dhaka.
Compliance costs added another layer of concern. According to industry representatives, regulatory burdens in some jurisdictions are estimated at roughly 30 to 40 percent for digital sectors, a level that risks entrenching incumbents and foreclosing the ability of smaller actors, particularly those in the Global South, to scale globally. The case for international standards harmonisation was made on precisely these grounds: a company in Nairobi should not be trapped by checkbox compliance across fragmented frameworks.
The sharpest structural critique to arise out of our programming reframed AI governance as inseparable from broader questions of economic and environmental policy. It was noted, for instance, that while data centres anchor the AI economy, they also shatter the myth of a climate-friendly digital transition. The centralisation of both data and control looms large: Big Tech’s data centre boom intensifies extraction while inverting accountability.
The provocation that perhaps landed hardest was a call to move from generative AI toward regenerative AI: AI that centres social care, environmental sustainability, and contextual models that treat skills as societal rather than individual assets. This also means, as panellists emphasised, nurturing pluralistic approaches to data governance, recognising collective and societal data rights, and limiting the commodification of public data.
Whether or not this framing gains traction in formal governance spaces, it captured something the two days of programming kept circling: that the question of who governs AI cannot be separated from the question of what AI is being built to do, and for whom.
The dialogue did not resolve that question. But in making it harder to avoid, it did exactly what the best governance conversations should: it pointed toward a different set of priorities than those currently dominating the mainstream.