Contextualising AI governance frameworks for the Global Majority: Learning and Reinforcements (Part 3)

April 21, 2026  |  Policy

By Vidya Subramanian, Angelina Dash and Ankit Yadav

In the run-up to the India AI Impact Summit, CCG and GNI organised the Strategic Multistakeholder Dialogue as part of our broader Multistakeholder Approaches to Participation in AI Governance (MAP-AI) initiative. Through multiple sessions on common themes in AI governance, the event sought to enhance multistakeholder engagement in these conversations.

This blog draws on insights from two sessions to identify common themes, conclusions and considerations. The first session centred on developing a rights-centred AI risk management framework, while the second delved into the significance of contextualising the field of AI safety. Speakers across both sessions emphasised the need for safety and risk management frameworks to be anchored in a rights-focused paradigm, tailored to address specific concerns arising from unique local realities.

This blog is structured as follows: first, it explores conceptions of AI safety, and how the needs, considerations and lived realities of the Global Majority may call for more nuanced AI safety efforts. It then examines the approaches that different jurisdictions are taking through policies and regulations to embed safety within AI governance, including rights-based and risk-based regulatory frameworks. Finally, the blog considers pathways towards contextualised AI governance.

Conceptions and considerations: What does AI safety entail? 

AI safety as a discipline has progressively gained traction as a mechanism to offset the negative impacts that flow from the large-scale adoption of AI systems across applications. An evolving field open to multiple interpretations depending on the nature of the risks it attempts to manage, the topic has been widely discussed on national platforms and in international forums across the globe. AI safety efforts have ranged from preventing existential risks from AI and resolving technical alignment issues to addressing the socio-technical issues presented by AI deployment.

The discussions across two days of our programming highlighted a need to rise above a narrow interpretation of safety by including not just physical and structural harms but also moral harms that can emanate from the rapid permeation of AI across sectors. Emphasising the inadequacy of exclusive reliance on model-based harm assessments, the discussions also highlighted the significance of the context of AI deployment and its legal and governance implications. Finally, speakers highlighted that for AI safety mechanisms to be effective, transparency and accountability interventions should be proposed not just at the model and interface levels but also at the ecosystem level.

Risk-based v. Rights-based Frameworks

A common theme that developed in the discussions was the dichotomy of risk-based versus rights-based AI regulation. Speakers articulated that the rights-based approach is rooted in constitutional and human rights law, focusing on legality and proportionality restrictions and minimum, non-negotiable safeguards. Conversely, risk-based regulation operates through harm-benefit calculation, evaluating the probability and severity of harm.

As a result, speakers warned that risk-based approaches often prioritise technical evaluations, pivoting the discussion away from human rights questions. Under this framework, AI regulation then heavily emphasises impact assessments, licensing regimes and disclosure obligations, but it may insufficiently address deeper structural rights-based concerns.

While a risk-based approach can be effective in laying down a framework suited to a particular regional context and enabling proportionate oversight based on the gravity of the risk posed, combining risk management with human rights considerations is the need of the hour. Taking the example of safety benchmarks, speakers illustrated how benchmarks are frequently developed in isolation and imposed universally, resulting in uneven protection when applied to Global Majority countries. To be truly effective, speakers reiterated, standards must be grounded in real-world realities and regional contexts.

Contextualising Approaches

This is further tied to the discussion about the competing approaches of risk-based and rights-based governance. There is often a gap between regulatory frameworks and lived realities. This is because risk frameworks often assume mature institutions, stable regulatory capacity and administrative readiness, which rarely hold true in many Global Majority contexts.

Speakers therefore recommended that risk models adapt to the local realities, needs and aspirations of individuals, rather than countries adapting themselves to imported regulatory models. Beyond this, speakers emphasised important principles such as structurally embedding rights protection in regulation. It was also recommended that governance frameworks derive pathways to manage power asymmetries amongst different stakeholders. Country-specific AI assessments, transparency and procedural governance mechanisms, and the creation of expert oversight bodies were some of the mechanisms speakers suggested for building strong safety measures at the national level.

While emphasising the significance of locally tailored frameworks, speakers also acknowledged that the transnational nature of AI requires shared norms across jurisdictions to promote AI safety. Elaborating on how asserting sovereignty over AI technology can undermine cooperation between jurisdictions, speakers recommended mechanisms including cross-border incident reporting, harmonisation of safety thresholds, enforcement of oversight mechanisms and operationalisation of shared governance systems to promote international coordination on red lines. Speakers also addressed the conundrum of attributing liability for AI safety risks and the need to account for geopolitical considerations while framing accountability measures.

Way Forward

The sessions highlighted the need for multistakeholder dialogue, while raising associated challenges. For instance, existing power asymmetries can show up both between the Global North and the Global Majority, and between technical and non-technical communities in multistakeholder conversations. Speakers highlighted the power imbalances rooted in AI design and development processes that contribute to harms, while advocating for a framework that can accommodate labour, environmental and privacy concerns. Speakers also discussed how stakeholders such as engineers use technical, risk-based language, while human rights advocates typically employ normative and legal language, necessitating a bridging of the communication gap between these communities. As such, there may be benefits in expanding the stakeholder groups involved in AI governance discussions and processes, by developing movement-based tools for identifying harm and collaborating with journalists, lawyers and activists.

Pathways towards meaningful AI governance that collaboratively leverage different stakeholder perspectives would then depend on trust and a shared language. The safe implementation of AI necessitates embedding respect for diverse social, civil and cultural rights, and crafting solutions contextualised to lived realities.

Copyright Global Network Initiative