GNI was honored to participate in the AI Standards Summit, hosted by ISO, IEC, ITU, and KATS on 4 and 5 December in Seoul. GNI’s engagement at the Summit builds on the work of GNI’s AI Working Group, including our recent Policy Brief on Government Interventions on AI, which highlights the important role that rights-based approaches, including standards, can play in AI governance. The Summit also provided input and connections that we will leverage as we build opportunities for enhanced multistakeholder approaches to AI governance at February’s India AI Impact Summit in New Delhi.
As artificial intelligence (AI) becomes increasingly embedded in critical systems that affect billions of people worldwide, the need to maximise its benefits and minimise its risks to humanity grows more urgent every day. Technical standards can determine how AI systems are designed, assessed, deployed, and monitored, and they can become de facto regulation when referenced in legislation. In this way, technical standards inform the architecture and direction of AI governance.
Traditionally, technical standards have centred on engineering processes and their safety, with limited attention to broader socio-technical impacts. When human rights are not meaningfully embedded in technical standards, gaps can emerge: weak cryptography and privacy safeguards, biometric standards without bias safeguards, surveillance-enabling interoperability or interception standards, or AI standards that defer to varying “applicable legal requirements” rather than international human rights norms. Even when human rights do appear in standards, the focus is often limited to a narrow set of rights, or the language is replaced with adjacent terms such as “ethics,” “safety,” or “trust.” The result is a global standards ecosystem that may be implementable across jurisdictions but insufficient to guarantee everyone the same human rights protections.
Given AI’s anticipated broad impacts on humanity, incorporating socio-technical considerations (and human rights in particular) has become increasingly prominent within AI standardization. The 2025 Seoul Statement, released on 2 December 2025 at the AI Standards Summit, commits global standards bodies to developing AI standards that are human-centred, inclusive, and rights-aware.
At a Summit fireside chat titled “The Interplay Between Standards and Human Rights” and in multiple discussions, GNI stressed a core principle: behind every technical decision lies a human consequence. For AI to maximise benefit and minimise harm to humanity throughout its rapid evolution, standards must be grounded in stable and interoperable international human rights principles.
GNI further highlighted the importance of enabling multistakeholder participation in standards bodies (including holistic support for such participation), of embedding human rights directly into technical specifications, and of adopting a “human rights by design” approach, including risk-based human rights due diligence (HRDD) across governments, companies, and standards-setting organizations. These efforts would enable protection, respect, and remedy at scale, in line with the UN Guiding Principles on Business and Human Rights and GNI’s own framework.
The Seoul Statement is an important first step. The task now, as GNI emphasised at the Summit, is to translate its commitments into concrete, rights-aligned AI standards that will ultimately underpin accountable, trustworthy, and human-centred AI governance worldwide.