Addressing New Trends in Platform Governance and Information Integrity in GNI Assessments


December 5, 2024  |  Accountability, Confluence Blog

By Min Aung, GNI Assessment and Accountability Manager

In early August 2024, GNI published a blog about five fundamental changes made to the fifth assessment cycle to meet our membership’s need for a dynamic, adaptable, meaningful, and efficient assessment process, one that addresses the evolution of GNI, its members, and the external environment. This blog is the fourth in a series exploring each of these changes in detail. Please also see our other blogs on service-related adaptations and regulatory adaptations.

When GNI launched in 2008, the internet as we know it today was just taking shape. Two years earlier, Facebook had opened to the public, Twitter had launched, and YouTube had been acquired by Google, with all three gaining rapid popularity by 2008. The first iPhone had launched just one year earlier, giving users a glimpse of the smartphone-enabled internet, changing access patterns and kicking off exponential demand for mobile data. As these trends developed, concerns over censorship, surveillance, and data security began to surface, prompting new dialogues and regulations.

Fast forward 16 years: internet and user behaviors have substantially evolved, and so has the way governments articulate restrictions and demands that may negatively impact users’ freedom of expression and privacy rights. While the sorts of overbroad government demands on tech companies that precipitated the establishment of GNI continue, and in fact have grown, governments have simultaneously implemented a wide range of additional approaches to achieve information control and surveillance, many of which also have negative human rights impacts. Examples include but are not limited to broadly defined legislation around platform governance aimed at controlling “illegal” or “harmful” content, more technical surveillance legislation, and various modalities of state-sponsored actions that compromise information integrity.

As set out in GNI’s public Assessment Toolkit, during our fifth assessment cycle, we will address how GNI company members’ implementation of the GNI Principles on Freedom of Expression & Privacy and corresponding Implementation Guidelines (together, the “GNI framework”) has evolved in tandem with this shift toward more nuanced and sophisticated forms of government restrictions and demands. Below are five examples of the types of issues that we anticipate will be covered in assessment case studies.

  1. From safe harbor provisions to mandated and automated content filtering? 

    Safe harbor provisions are essential for enabling the free flow of information online while balancing the need to address illegal content. However, even in democratic societies, governments are increasingly enforcing stricter regulations demanding that platforms remove not only illegal content but also “lawful but awful” content, with short deadlines and fines for non-compliance. Through our policy advocacy activities, GNI has consistently expressed concerns that such approaches could lead to over-censorship and self-censorship, negatively impacting freedom of expression and disrupting the role of intermediaries.

    In a further erosion of safe harbor provisions, governments are increasingly mandating automated content removal on platforms, with compliance enforced through significant fines and even criminal actions. In 2023, according to Freedom House, 22 countries required social media companies to use automated moderation systems. This practice risks over-moderation and biased moderation, both of which negatively impact freedom of opinion and expression. These mandates can also infringe on privacy, especially when moderation is required for encrypted communications. We have encouraged assessors to seek evidence of companies advocating for rights-respecting measures, proportional approaches to implementing content regulation laws, transparency in the use, audits, and outcomes of automated content moderation tools, and effective user grievance mechanisms.

  2. Extra-judicial mechanisms for information control 

    Some governments have created mechanisms to request content removal or deprioritization through ICT companies’ Terms of Service (TOS) policies, bypassing legal processes. These include anonymous requests via user-facing tools, government “Internet referral units” (such as the well-documented cases in the UK and US), and the unjustified harassment and arrest of participants in the development of public digital resources. At the same time, some governments have begun mandating and regulating the role that private “trusted flaggers” play in content moderation, creating the risk that state sanctioning and review of such activities could lead to intentional or unintentional bias. We have encouraged assessors to find evidence that companies lobby for government transparency on TOS referrals, include these referrals in transparency reports, and provide remedies for users whose content or accounts were wrongly removed due to these referrals.

  3. Mandatory age verification 

    Some countries, notably Australia and the UK, as well as some US states, are considering or enacting laws mandating age verification for access to social media and pornographic content. Methods being considered and deployed for age verification include the use of credit cards, government IDs, third-party intermediaries, facial recognition technology, and browsing history analysis. These measures risk infringing on the freedom of expression and privacy rights of both young people and adults, as well as creating entry barriers for new services. Platforms may self-censor to avoid the complexities of verification. Minorities who lack documentation, or who are misclassified by biased AI systems, may be excluded. Collecting and storing biometric data poses privacy risks and surveillance concerns. We have encouraged assessors to look for evidence of companies performing due diligence on the impact of such systems on users’ rights (including meaningful stakeholder engagement with potentially impacted user groups), implementation of the least intrusive method of verification, transparency on age verification methods, transparency on algorithm audits (if AI is used for such verification), and user grievance mechanisms.

  4. Government use of generative AI for content manipulation at scale 

    Governments have in the past directly ordered telecommunication providers to disseminate information, purchased political advertising on social media platforms, and conducted search engine optimization (SEO), all of which can create negative freedom of expression impacts when not done with appropriate authorization, transparency, and accountability. As AI-enabled technologies have advanced, governments have increasingly used state and state-sponsored actors to create and spread disinformation on social media platforms. Freedom House reports that in 2023, at least 47 nations used pro-government commentators employing deceptive or clandestine strategies to manipulate online content, double the number observed a decade earlier. Generative AI gives non-democratic states and their allies the means to produce persuasive text and multimedia disinformation at scale, including deepfakes, making it harder for users to identify unbiased information. We have guided assessors to seek evidence of responsible AI practices, investment in AI detection, transparent content moderation policies, AI watermarking, fact-checking partnerships, and user education.

  5. Government control of AI services 

    Generative AI tools like ChatGPT and Bard/Gemini can help users in digitally repressed regions access uncensored information from the global internet. This has led authoritarian states to include these services in their censorship frameworks, similar to their past approaches to limiting and controlling the impact of social media. Governments are increasingly regulating all aspects of AI services, ranging from the types of questions and queries that are acceptable, to the datasets and sources used to train AI models, to the outputs that models can provide (for instance, requiring that outputs conform to local “sensitivities”). Assessors may evaluate how companies handle censorship demands that impact AI-enabled services, as well as how their responsible AI policies align with the GNI Principles and Implementation Guidelines, especially regarding government interactions and national AI policies. GNI will explore the application of the GNI Principles to AI in more detail in a separate article.

In conclusion, incorporating a broad range of case studies addressing nuanced government restrictions and demands into our assessment process will help illustrate the continued relevance of the GNI Framework in an increasingly dynamic and challenging online environment, while allowing GNI’s Board and Accountability Committee to provide contemporary and relevant feedback to our company members.
