Following the earlier contributions to this blog series, which provided an overview of the six GNI-Internews fellows, in this collection, fellows document their work over the last few months. These blogs do not represent the viewpoints of GNI or any of its members.
Setting the stage: VIP lists on social media?
In 2019, the Brazilian football player Neymar posted on his Facebook and Instagram accounts nude images of a woman, taken from a private conversation, without her consent. The posts were part of the athlete’s strategy to publicly respond to a rape accusation. Although Meta’s policies forbid the publication of nonconsensual intimate imagery, the content remained on the platform for over 24 hours and was viewed by around 56 million people.
Neymar’s episode exemplifies a modus operandi that would be confirmed two years later. In September 2021, the Wall Street Journal published a story revealing the existence of a system developed by Meta that adds an extra layer to the content moderation process on its platforms. The mechanism, which the company calls the Cross-check program, subjects specific users to different scrutiny based on criteria such as holding elected office, being a significant business partner, or having a large number of followers. In practice, when profiles on the list post content flagged as potentially infringing, their posts are routed to a separate queue, reviewed by humans, instead of the regular moderation queue.
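To make this two-track routing concrete, below is a minimal sketch, in Python, of how such a layered queue might work. Every name in it (ReviewQueue, CROSS_CHECK_LIST, route_flagged_post) is an illustrative assumption on our part, not Meta’s actual implementation; the only point it encodes is the reported behavior that flagged posts from listed profiles wait for human review instead of entering the regular pipeline.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """A named queue of post IDs awaiting a moderation decision."""
    name: str
    items: deque = field(default_factory=deque)

    def enqueue(self, post_id: str) -> None:
        self.items.append(post_id)

# Regular flagged posts go to the largely automated pipeline; posts from
# listed profiles wait in a slower, human-reviewed queue instead.
AUTOMATED_QUEUE = ReviewQueue("automated")
HUMAN_REVIEW_QUEUE = ReviewQueue("human_review")

# Hypothetical allowlist. Reported inclusion criteria were holding elected
# office, being a business partner, or having a large follower count.
CROSS_CHECK_LIST = {"user_123", "user_456"}

def route_flagged_post(user_id: str, post_id: str) -> ReviewQueue:
    """Route a flagged post to the layered queue if its author is listed."""
    queue = HUMAN_REVIEW_QUEUE if user_id in CROSS_CHECK_LIST else AUTOMATED_QUEUE
    queue.enqueue(post_id)
    return queue

# A flagged post by a listed user skips the regular queue entirely.
assert route_flagged_post("user_123", "post_1").name == "human_review"
assert route_flagged_post("user_999", "post_2").name == "automated"
```

The sketch makes the core design choice visible: membership in a list, not the content itself, determines which procedure a post faces.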
A helpful analogy is the boarding line at the airport. Everyone agrees that the elderly and people with babies should board first. But what if, in practice, the priority line mainly served “premium customers”?
The disclosure of Meta’s Cross-check raised several questions regarding the justification and legitimacy of such systems, which raise concerns about transparency, equal treatment, and risks to fundamental rights. Should layered moderation based on user lists exist? Would such systems distort or promote fairness and transparency in platforms’ operations? If they produce any positive effects, what would be the best parameters for deploying them?
How is InternetLab researching the topic? The Global Network Initiative (GNI)-Internews Fellowship
In 2022, InternetLab’s executive director Francisco Brito Cruz was awarded a one-year fellowship by GNI to conduct research on layered moderation systems, questioning whether they should exist and, if so, what their best design and features would be for mitigating the distortions created by regular content moderation on Internet platforms. This article presents the research and some preliminary findings of the study.
Freedom of Expression and Content Moderation Systems
Before delving into the features of a layered moderation system such as Meta’s Cross-check, it is important to rewind and set out a common ground of definitions.
As an operating definition used by InternetLab in its approach to the topic, content moderation refers to a key activity for a digital platform: elaborating and applying rules, procedures, and systems to remove content, limit its reach, label it, and suspend or remove accounts (1, 2 and 3). This exercise is, at the same time, both the management of an individual user’s expression and part of the product and value that platforms offer to other users.
The activity of moderating content poses a logistical challenge to platforms: it deals with an immense amount of content and multifaceted contexts. This is well established in the literature on its key challenges, as argued by scholars from different perspectives (1, 2 and 3). Thus, layered moderation systems could be one strategy companies employ to mitigate risks to human rights, since they prioritize review of the few types of users or content that warrant careful scrutiny in order to protect speech or other rights.
It makes sense, for example, for activists or journalists to have their speech more protected than that of regular users: their words have a more significant impact and reach a broader audience, and their accounts and discourse may be under constant, targeted attack from those who oppose their views. This targeted harassment can be a means of silencing their voices and may also pose a significant risk to their safety and well-being. In other words, layered moderation can ideally be a tool for creating fairness inside a deeply automated process, functioning as an attempt to mitigate distortions created by platforms’ regular, industrial-scale moderation.
But what if layered moderation serves only to preserve business partners and commercial interests? What if the rules of the system are unclear and its workings end up promoting more inequality rather than protecting human rights?
Meta’s Oversight Board’s take: what is on the table?
The existence of systems that offer different treatment to some users is certainly not unique to Meta, but the scoop published by the Wall Street Journal in 2021 revealed important details of this program, as well as the industry-wide gap in transparency around such systems.
Following the disclosure, in October 2021, Meta’s Oversight Board (OSB) accepted a request from the company to review Cross-check and make recommendations for its improvement. One year later, the body released a policy advisory opinion presenting key findings and guidance to improve the system. In general terms, the OSB concluded that, by providing unequal treatment to some users, Cross-check (i) delayed the removal of violating content posted by listed users; (ii) failed to track and disclose the metrics employed by the system; and (iii) lacked transparency around its functioning. According to the Board, “while there are clear criteria for including business partners and government leaders, users whose content is likely to be important from a human rights perspective, such as journalists and civil society organizations, have less clear paths to access the program.”
Among other recommendations, the Board suggested that the company prioritize expression that is fundamental to human rights, increase transparency around Cross-check’s operation, and adopt measures to reduce the harm caused by content left up during the layered review process, which tends to be slow.
Beyond Meta’s systems: our method and research
In this context, in early 2022, InternetLab started to carry out research looking at layered systems in content moderation, seeking to create frameworks to help assess whether such a system is necessary and what its limits, mechanisms, guarantees, and human rights safeguards should be. If the tool is important for tackling moderation’s logistical challenges and even other politically sensitive issues, how should it be designed so as not to pose significant risks to fundamental rights?
Furthermore, our research had a particular interest. Besides understanding the necessity of such systems and discussing transparency parameters, we wanted to use a regional lens to examine the advantages and disadvantages of their application in specific social, political, economic, and cultural contexts, such as Latin American countries.
We then conducted a series of focus groups with Latin American stakeholders who study and act in the fields of digital rights, election integrity, disinformation, and journalism. Participants were selected across sectors that influence the online environment, taking into account markers of class, gender, race, and LGBTQIA+ identity. After the sessions, all participants were invited to submit written contributions about their perceptions of these kinds of systems, their risks, and their legitimacy. Our main goal was to identify the main issues posed by layered moderation systems from diverse perspectives, and to discuss possible regulatory paths toward healthy guidelines. The material was compiled and served as background for the research, whose findings are presented below.
Our findings
We break down our findings into two perspectives: the optimist’s view and the pessimist’s view, or the glass-half-full approach and the glass-half-empty approach.
The glass half full
The research led us to consider layered moderation systems based on users and/or content necessary as a means of pursuing fairness, as opposed to formal equality. It is important to treat unequal individuals in accordance with their inequalities, and such systems offer an alternative to at-scale, automated moderation, which is prone to misinterpretation and mistakes in sensitive cases, especially when seeking to promote human rights by protecting political and minority discourses, public-interest journalism, and activism.
The glass half empty
In theory, layered moderation should not change the rules applied, only the enforcement procedures. However, in practice, as shown by the Cross-check case, the “special” enforcement can alter the nature of decisions around content, since it ends up implementing different rules for some privileged individuals. Thus, it can distort principled and consistent content moderation across the whole range of users and contexts.
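The distinction can be made concrete with a small, purely illustrative sketch (again in Python, with a hypothetical violation-score threshold, not anything drawn from Meta’s systems): a faithful layered system changes only when and by whom the rule is applied, while a distorted one quietly changes the rule itself.

```python
POLICY_THRESHOLD = 0.8  # hypothetical score above which content violates policy

def decide(violation_score: float, threshold: float = POLICY_THRESHOLD) -> str:
    """Apply the written rule: the same threshold for every user."""
    return "remove" if violation_score >= threshold else "keep"

# Layered enforcement done right: the same rule, a slower procedure.
print(decide(0.85))  # "remove" for a regular user, decided automatically
print(decide(0.85))  # "remove" for a listed user too, after human review

# Distorted enforcement: a laxer threshold on the privileged path changes
# the substance of the rule, not just how it is enforced.
print(decide(0.85, threshold=0.95))  # "keep" -- a de facto different rule
```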
Although the concept of implementing a mechanism such as the Cross-check program to protect speech plurality on online platforms is welcome, its application can pose risks to human rights and potentially shield unfair business practices. On one hand, such a tool can be essential in safeguarding diverse opinions and ideas; on the other, it can be abused by companies to avoid accountability and neglect their responsibility to uphold human rights. Additionally, companies may use these mechanisms for public relations purposes, such as shielding their reputation from content moderation scandals.
Moreover, the research shows that little attention has been paid to the impact of layered content moderation at a regional level. We noticed a lack of literature and awareness around using layered content moderation systems to counter violence against historically marginalized groups across different protected categories and social markers, which makes it challenging to have constructive conversations with industry players, particularly in regions like Latin America. Due to this data scarcity, there are few studies considering the system’s effects on the political, cultural, and social features of particular countries, or its performance in different languages. Data and transparency resources are concentrated in some regions to the detriment of others, and the regions left aside are precisely those where marginalized groups struggle the most to access a basic set of rights and guarantees.
From VIP lists to fair protection for speech
As mentioned, we believe that layered moderation systems should exist, given the need for greater protection of certain speech and figures, in pursuit of equity rather than mere formal equality. In this case, we should advocate for clearer rules and parameters, as well as consistent application worldwide. Global companies should have the will and capability to enforce their policies globally. Thinking about how to reform and improve a layered moderation system, we propose the inclusion of settings such as: clear and public criteria for entering and leaving the list; prioritization of expression that is fundamental to human rights, such as that of journalists, activists, and civil society organizations; transparency around the system’s operation and metrics; measures to reduce the harm caused by content left up while review is pending; and attention to regional contexts and languages.
Next steps
The research presented shows that there is much room to develop content moderation systems so they can contribute to fostering equality in the circulation of speech online. We seek to keep the discussion going, including different platforms and regions, in order to craft formal recommendations that consider different groups and regional nuances and help improve these systems that shape online expression.
About InternetLab
InternetLab is an independent, interdisciplinary research center that produces knowledge and promotes debate in different areas involving technology, rights, and public policy. We are a non-profit entity based in São Paulo that acts as a nexus of expertise among stakeholders such as researchers, civil society, and the public and private sectors.