Event Summary: GNI ALF 2024 on Global Dynamics in Govt Demands of Tech


December 20, 2024  |  Events, Learning

By Ashwin Prabu and Hilary Ross

The Global Network Initiative (GNI) hosted its Annual Learning Forum, “Global Dynamics in Government Demands of Tech: Respecting Digital Rights Amidst Changing Regulatory, Technical, and Political Landscapes,” on 19 November 2024, exploring new dimensions and trends in government demands and mandates, particularly as they relate to generative AI products and services and to the technical infrastructure of the Internet. 

The digital landscape is rapidly evolving both nationally and internationally, driven by emerging and proposed regulatory frameworks for corporate accountability, the rise of authoritarianism, increasing demands for digital sovereignty, shrinking civic spaces, the swift deployment of often-opaque AI technologies, and the growing importance of the technical infrastructure that underpins the Internet. 

The event featured welcome remarks from Vicky Bowman, GNI’s Independent Chair, and a keynote address from Dafna Rand, Assistant Secretary of State for Democracy, Human Rights, and Labor (DRL), followed by three panel sessions in which GNI members spoke on government demands, AI, and surveillance, respectively. Below are summaries of each session.

Keynote address from Dafna Rand, Assistant Secretary of State for Democracy, Human Rights, and Labor (DRL)

Assistant Secretary Rand warned that autocrats are incredibly adaptive, innovative, and nimble when it comes to surveillance, collecting data, and buying technology. She stressed the need for export controls to maximize human rights due diligence and ensure that US technology does not fall into the wrong hands. 

She also warned about the challenges that authoritarian regimes like the People’s Republic of China (PRC) and Russia pose to multilateralism and global governance. As the PRC and Russia continue to push multilateral and global organizations to lower standards on human rights and AI, we need to create a “club of adherents” among rights-respecting regimes to have high standards. 

Finally, Assistant Secretary Rand commented on the proliferation of cybercrime laws worldwide that are intended to crack down on journalists and censor content. Rand emphasized the need to call attention to these laws and their negative impacts on human rights. GNI has consistently commented on these laws as they have emerged, for example, recently in Malaysia and previously in Myanmar.

Session 1: New Dynamics in Government Demands Amidst Changing Regulatory and Political Landscapes 

This session featured Catalina Moreno (Karisma), Thobekile Matimbe (Paradigm Initiative), Shashank Mohan (Center for Communication Governance), and Alex Walden (Google) as panelists, and was moderated by Allie Funk (Freedom House).

Allie Funk introduced the session by reminding the audience how much the landscape has changed over the last fifteen years: GNI was founded in 2008, and Freedom House’s Freedom on the Net project was launched one year later, in 2009. Since then, a growing number of governments have genuinely sought to strengthen corporate responsibility and foreground human rights. Yet democracies globally have been challenged over that time. Freedom online has declined every year since the report launched, and many governments are using the language of corporate responsibility to go after people’s data and repress dissent.

Shashank Mohan discussed how India’s government has been attempting to expand its powers regarding regulating AI and social media content. For example, he referenced India’s misinformation executive rule, which required social media services to remove “misinformation”, and was eventually struck down by the courts. This shows how regulatory concepts can be adapted – and abused – globally. 

Thobekile Matimbe highlighted troubling trends in Africa, including the deployment of surveillance technologies, internet shutdowns, telecom interception, and restrictive laws, often targeting the media. She also discussed Tanzania’s ban on VPNs, Zimbabwe’s Interception of Communications law that grants the responsible Minister unfettered discretion which may be used to limit internet access, and the incidence of internet shutdowns in Mauritius and Mozambique as causes for concern. She stressed how some States are openly defiant, acting against their human rights obligations, citing her recent article on Internet shutdowns. She argued that private sector–civil society collaboration is vital to resist these types of rights-restricting actions. 

Catalina Moreno noted an uptick in proposed technology legislation in Latin America; her organization is currently monitoring around 60 bills. She believes this shows that policymakers are not yet diagnosing the problems well or identifying what is needed from companies to reduce harms and protect rights. Instead, the region is seeing problematic laws, such as an anti-gambling law that enabled the gambling authority to block the URLs of any platform hosting gambling activity that was not registered and taxed. This inadvertently resulted in Reddit, Tumblr, and other online platforms being blocked. Meanwhile, real problems, like the “micro” Internet shutdowns happening across Latin America, go unaddressed.

Alex Walden presented the challenges Google faces when states fail to protect the human rights of their citizens. She described how GNI’s principles have been helpful and how Google’s lawyers and staff apply them when assessing requests from governments. Google’s responses involve striking a balance between following the GNI Principles and the Universal Declaration of Human Rights, and complying with state mandates in order to maintain operations, as continuing to offer services is a critical component of the company’s commitment to freedom of expression. When Google does comply with government requests, it is as transparent with the public as possible. 

The panel also pointed out how many groups are still missing in the broad coalitions on digital rights, like civil society, companies in the Global South, and South Asian organizations beyond India. 

Session 2: Exploring the Human Rights Implications of Government Influence over AI

This session featured Miranda Bogen (Center for Democracy and Technology or CDT), J. Carlos Lara (Derechos Digitales), Ellie McDonald (Global Partners Digital or GPD), and Alex Warofka (Meta) as panelists and was moderated by Jason Pielemeier (GNI). 

Ellie McDonald shared GPD’s research on mapping international AI governance initiatives. She discussed how research of almost fifty initiatives shows increasing consolidation in AI governance policies, citing examples including the work of the OECD and the US Executive Order on AI. She presented the following key insights: 

  • Compared to other areas of Internet policymaking, particularly the cyber domain, AI governance regulation has moved much faster in a shorter period, most notably through two cross-border regulatory initiatives: the EU AI Act and the Council of Europe AI Convention. 
  • There has been some shift away from existential-risk narratives, but “AI safety” and “AI for good” narratives, rather than human rights framings, continue to dominate the discourse on AI governance. 
  • The majority of AI governance initiatives have been spearheaded by Global North actors; notable exceptions include the New Delhi Leaders’ Declaration announced as part of the G20 and the Shanghai Declaration. China has also worked actively to reinforce its domestic approach in technical standard-setting bodies.

Miranda Bogen explored the implications of “AI safety” as a framing for AI governance versus human rights framings. She noted that there’s been a shift in AI technologies – now that there are general purpose tools, downstream use cases and risks are becoming clearer although they are still not fully defined or understood. It is also not yet clear where responsibility for those risks should sit within supply chains. Additionally, there has been a shift in understanding the ways governments influence AI and AI uses. For example, governments can influence areas like foundation model “safety” benchmarks and evaluations. These criteria and processes are quite opaque, making it difficult to identify and push back on restrictive government influence. There’s still a need for more clarity around what people mean by “safety” and much more work to do to enable useful transparency, so the tradeoffs are clear and can be debated.

Carlos Lara discussed how, in the global rush to regulate AI, too many AI governance bills in Latin America have been just “for show” and/or watered down, especially in Brazil and Chile, notwithstanding the insistence in both countries on introducing new bills or advancing those already in place. He also described tension between Latin American states and global tech companies, citing how Brazil’s Data Protection Authority temporarily barred Meta from using personal information to train its AI systems, in contrast with the expectation that the country join the global data economy. He stressed that Global Majority countries are eager to have some impact on how these massive companies behave, yet they need to find ways to engage that take into account their economic leverage – or lack thereof. 

Alex Warofka conveyed how Meta, similarly to Google, attempts to respect local laws while adhering to international human rights standards, consistent with the GNI Principles. He noted that Meta will continue with the same philosophy regarding AI regulation. Alex also advocated for regulation that enables the release of open models, arguing that open models lower barriers to entry for developers, enabling more development for underserved communities, advancing non-discrimination, and mitigating risks through the increased scrutiny that comes with openness. Alex highlighted how adhering to government demands will look different in the age of AI. For example, governments will likely demand that models stop producing outputs critical of the government. Defending against this kind of overreach will require additional thinking, as the technology works differently from social media, where it is clear that content is user-generated. 

Jason Pielemeier noted that many of the concerns GNI has with generative AI technologies are consistent with our broader approach to promoting freedom of expression and privacy in the tech ecosystem: ensuring inclusive stakeholder participation, enhancing meaningful transparency, and focusing on risk identification and mitigation. While AI hype has driven calls for distinct regulations, issues related to AI actually reflect a continuity in core challenges, not a shift in their nature. GNI has worked to clarify how the GNI Principles – including freedom of expression, privacy, responsible company decision-making, multi-stakeholder collaboration, governance, transparency, and accountability – apply to AI services within the GNI assessment framework and will continue to convene stakeholders to help understand and guide AI governance and regulation going forward.

Session 3: New Developments in Censorship and Surveillance at the Infrastructure-Level 

This session featured Usama Khilji (Bolo Bhi & GNI), Heloisa Massaro (InternetLab), Lillian Nalwoga (CIPESA), and Luke Valenta (Cloudflare) as panelists and was moderated by Ramsha Jahangir (GNI). 

Luke Valenta, an engineer on Cloudflare’s research team, highlighted the company’s ability to detect global network “connection tampering” by identifying patterns of anomalous resets and timeouts. Cloudflare’s global infrastructure captures and analyzes Internet traffic anomalies, enabling the detection of third-party interference in real-time. This process uses identifiable signatures, validated by comparing patterns observed on Cloudflare’s network with known cases of interference reported by active measurement studies. These insights are now visualized on Cloudflare Radar, enhancing transparency around internet censorship and filtering practices. 
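The detection approach described above can be illustrated with a small sketch. This is not Cloudflare’s actual pipeline; the record fields and signature names below are hypothetical, intended only to show the general idea of classifying passively observed TCP connection terminations by where an anomalous reset or timeout occurs:

```python
# Illustrative sketch (hypothetical, not Cloudflare's implementation):
# classify passively observed TCP connection terminations into coarse
# tampering signatures based on the stage at which the connection ended.

from dataclasses import dataclass


@dataclass
class ConnectionRecord:
    handshake_done: bool  # three-way handshake completed
    data_packets: int     # data packets observed before termination
    ended_with_rst: bool  # connection ended with a TCP RST
    timed_out: bool       # connection ended with a silent timeout


def classify_termination(rec: ConnectionRecord) -> str:
    """Map a connection record to a coarse termination label.

    Real systems validate such heuristics against known interference
    cases reported by active measurement studies."""
    if rec.ended_with_rst and not rec.handshake_done:
        return "rst_during_handshake"   # e.g. an injected RST racing the SYN-ACK
    if rec.ended_with_rst and rec.data_packets <= 1:
        return "rst_after_first_data"   # reset once the first request is visible
    if rec.timed_out and rec.data_packets == 0:
        return "post_handshake_timeout" # packets silently dropped after setup
    if rec.ended_with_rst:
        return "late_rst"               # may be benign, e.g. abrupt client close
    return "normal_close"
```

In a real deployment, per-connection labels like these would be aggregated by network and region and compared against baseline rates; it is the anomalous concentration of a signature, not any single connection, that suggests third-party interference.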

Usama Khilji shed light on the rise in censorship in countries such as Pakistan where democracy has eroded, noting how governments are increasingly investing in Deep Packet Inspection (DPI) technology to block websites and VPNs, in addition to other sophisticated tech to disable certain functionalities in communication apps. 

Heloisa Massaro discussed challenges in implementing the court-ordered blocking of X in Brazil, stemming from a secret inquiry into anti-democratic acts in 2023. The order involved blocking X via ISPs and app stores and threatening fines for VPN use, though enforcement faced loopholes due to uneven ISP compliance and continued VPN usage. Despite these gaps, the blocking was largely effective, demonstrating that complete technical enforcement was not necessary to achieve the order’s goal. (GNI put out a statement on the blocking order expressing concern and calling for it to be consistent with international human rights principles.)

Lillian Nalwoga highlighted that surveillance tactics used by African governments mirror global trends, with tools sourced mainly from China, Europe, Israel, and the US. Despite rapid digitalization, many African countries lack data protection laws, allowing governments to deploy surveillance and censorship technologies at will, leading to the undermining of human rights.

What’s next: continued need to protect rights online

The GNI Annual Learning Forum underscored the complexity and urgency of addressing government regulation and overreach across the tech ecosystem. The discussions highlighted how challenges to democracy, assertions of digital sovereignty, surveillance, and censorship are global issues manifesting in distinct regional contexts—from Africa’s restrictive laws and internet shutdowns, to India’s evolving regulatory mandates, and Brazil’s banning of X. The insights from this Forum demonstrate that while technology is evolving rapidly, the core necessities of inclusive stakeholder engagement, protecting human rights, and fostering accountability remain constant. GNI looks forward to continued engagement on these themes.

 

Copyright Global Network Initiative