On 22 October, the Global Network Initiative (GNI) is hosting our flagship Annual Learning Forum in Buenos Aires, Argentina, alongside our Annual Members Meeting and "Towards a Free Internet," the 13th annual workshop of GNI's academic member, the Center for Studies on Freedom of Expression and Access to Information (CELE) at the University of Palermo.
This year's Annual Learning Forum will explore how global paradigm shifts are affecting the digital rights landscape and how practitioners can adapt. An ongoing wave of geopolitical, regulatory, and technological change has the potential to significantly affect people's online experiences and rights. These changes also have far-reaching implications for the practitioners who work at the intersection of technology and human rights, including the digital rights, tech policy, and trust and safety communities. The Forum will explore these paradigm shifts and their implications across stakeholder groups and jurisdictions, with a geographic emphasis on Latin America and a practical emphasis on human rights due diligence as a key approach to protecting user rights.
The Forum is anchored by two panel discussions. This blog post highlights the first session, which centers on protecting human rights amid changing regulatory and geopolitical paradigms. The second session, to be covered in the next blog post in this series, will focus on the human rights implications of government interventions in AI technologies, set against the broader technological paradigm shifts that AI is driving. Both panels will be recorded and posted on our website following the event.
Globally, we are witnessing a continuing and evolving trend of intermediary liability regimes shifting in ways that can adversely affect freedom of expression online. Many jurisdictions are considering reducing, or have already reduced, intermediary liability protections while raising content takedown expectations. For instance, jurisdictions are moving towards regulatory models that rely on notice-and-takedown procedures, proactive monitoring, and the removal of user-generated content by intermediaries. However, the intent behind these regulatory regimes differs, and it is not yet clear how these newly envisioned frameworks will work in practice or how their human rights implications will play out.
This opening session will examine a few relevant jurisdictions, covering the intent behind each regulatory regime; what has changed and what has stayed the same; actual or potential impacts on human rights; how companies manage risks and moderate content online; how companies can use human rights due diligence to ensure greater respect for user rights; and what these regulatory shifts might mean for human rights more generally.
The panel will feature:
These distinguished panelists will draw connections between jurisdictions to illustrate possible trends, identify open questions that need to be addressed, and discuss how practitioners can strategically navigate these changes.
For example, in Brazil, the Supreme Court recently ruled that the longstanding intermediary liability regime (specifically Article 19 of the Marco Civil da Internet) was unconstitutional, moving the country towards a "notice and takedown" regulatory model. However, the specifics of the emerging model that the Court is creating through judicial interpretation remain unclear. The Court has introduced the doctrine of "serious illegal content" and held that "systemic failure" will be characterized by a lack of appropriate preventive or remedial measures, considering the state of the art, the provider's activity, and its technological capacity. Given ongoing uncertainties over interpretation and implementation, legal scholars and digital rights experts have room to help shape the limits of interpretation of these concepts in the most rights-respecting ways possible.
In Taiwan, the Tobacco Hazards Prevention Act – a relatively novel application of public health law to online content regulation – has been implemented in ways that adversely affect freedom of expression. While the Act's primary objective is to protect public health, its online provisions raise serious concerns, particularly the lack of distinction between commercial advertising and user-generated content, the absence of a "safe harbor" for intermediaries, and the introduction of proactive monitoring and a 24-hour content removal window for legal compliance. Despite rounds of deliberation and pushback from different stakeholder groups, these concerns persist, as the Taiwanese government continues to push for amendments that could impose substantial obligations on internet service providers (ISPs) and other online platforms in ways that adversely impact user rights.
In Sri Lanka, the government has opened a review of the Online Safety Act (OSA), which came into effect in February 2024 and has since undergone several amendments following significant pushback from companies, civil society, digital rights advocates, and international stakeholders. The Act criminalizes overly broad categories of content such as "false statements" and "religious insult," setting dangerous precedents through vague and subjective terms like "hurt feelings" and "public disorder." These provisions apply not only to content created and disseminated within Sri Lanka but also to content hosted or authored abroad, extending the law's reach extraterritorially and raising serious questions about its compatibility with international human rights standards. The OSA also introduces a strict 24-hour notice-and-takedown regime, granting the Online Safety Commission – a centrally appointed body with sweeping investigatory powers and legal immunity – unilateral authority to direct internet intermediaries to remove content without judicial oversight. The Commission can impose compliance obligations on intermediaries and recommend criminal prosecution of individuals for alleged violations.
Meanwhile, numerous other online safety regulations – including in the UK, Australia, and the European Union – establish risk management, due process, and transparency obligations for certain intermediaries. These have been described as a "second generation" of global internet rules, focused on creating accountability for managing risks and harms through a system of checks and balances, user empowerment, and input from across the ecosystem. Yet these approaches are also contested, including over their possible impacts on rights and their cross-border implications.
The Annual Learning Forum's opening discussion will feature insights from experts and practitioners focused on these and other approaches to content regulation, illustrating similarities and differences and unpacking possible impacts on human rights. We look forward to exploring how GNI constituencies – companies, civil society, academics, and investors – can strategically navigate these and broader paradigm changes in the service of best protecting the human rights of users.
To get the recording after it is published, sign up for GNI’s newsletters and follow us online on LinkedIn, X, and Bluesky.