On 27 October, the letter and analysis below were shared with members of the Chilean Senate Commission on Challenges for the Future, Science, Technology, and Innovation. Please find the letter in Spanish here.
Senador Guido Girardi Lavín
Senadora Carolina Goic Boroevic
Senador Francisco Chahuán Chahuán
Senador Juan Antonio Coloma Correa
Senador Alfonso De Urresti Longton
Comisión de Desafíos del Futuro, Ciencia, Tecnología e Innovación
We are writing on behalf of the Global Network Initiative (GNI) to express our concerns about the recently introduced draft Bill Nº 14.561-19 to Regulate Digital Platforms in Chile.
GNI appreciates the Commission’s legitimate concerns about harms related to user-generated content online — as well as the need for more transparency and due process in how major platforms address various forms of online content and activity. We also praise the stated commitment in the bill to preserving freedom of expression, including via digital communications platforms. However, we are concerned with newfound liability risks for digital platforms for user-generated content and sweeping and vaguely worded obligations for content moderation, which are paired with significant penalties for noncompliance. These competing pressures on a broad range of digital platforms are likely to distort the online information environment and pose risks for the rights to freedom of expression and privacy in Chile, as we outline in the brief analysis below.
GNI is the preeminent international multistakeholder organization working in support of freedom of expression and privacy in the information and communications technology (ICT) sector, bringing together academics, civil society, ICT companies, and investors from around the world. Our positions on the proposed bill are informed by the policy brief we released last fall. That brief was the result of a series of multistakeholder consultations around the world and examined over two dozen recent government initiatives that aim to address online harms. It identifies common pitfalls as well as innovative approaches, and concludes with a series of recommendations.
We also highlight what appears to be a rushed process to introduce this bill, one that overlooked the expert consultation led by this very Commission. We call on the Commission to reconsider the current proposal and consult more broadly with Chilean and international experts on digital rights and platform regulation, including other relevant Commissions of the Senate. As we note in our policy brief, processes for legislative deliberation that are open and non-adversarial, drawing on broad expertise, offer the best chance to ensure results are well thought out and evidence based. We stand ready to engage in such efforts.
GNI Analysis and Key Concerns
Overly Broad and Vague Obligations
In the policy brief, we emphasize the need for clear guidance for those tasked with enforcing the law, as well as the need to refrain from overly stringent enforcement and penalties. Such approaches allow for regulation to accommodate a diverse range of business models and capacities among covered businesses, as well as to foster innovative approaches to content moderation and guard against over-removal. These outcomes are critical to preserving the principles of legality, legitimate purpose, and necessity and proportionality that must, under international human rights law, underpin any potential restrictions on the right to freedom of expression that emerge.
Unfortunately, the intermediary liability regime presented in the current proposal, together with the convoluted moderation requirements detailed below, places competing pressures on digital platforms and fails to provide necessary guidance for companies tasked with enforcement. On one hand, digital platforms would be expected to invasively monitor their users' accounts for potentially illegal content to avoid significant legal penalties; on the other, the law places restrictions on digital platforms' moderation of harmful but otherwise legal content. Article 6 of the law states that digital platforms will not be liable for content transmitted on their platform, provided they have acted diligently to block or remove content when they have effective knowledge it is unlawful — an important safe harbor that can encourage platforms to moderate harmful content. However, the same article states that platforms can be liable if they act in a way "contrary to the act" or "exceed its described services," undermining that very same safe harbor. Furthermore, Article 15 states "the digital platform provider will be objectively liable for all damages, whether financial or moral, caused to users," with penalties up to double the cost, or even suspension of services for "systematic" violations.
This legal ambiguity, and the risks it creates for companies, is reinforced by a wide array of content moderation obligations. These include requirements for digital platforms to implement "bias control mechanisms" under Article 9 to prevent discriminatory impacts, and requirements under Article 8 to "protect the image and integrity of people who are considered vulnerable by law, whether due to their age, condition or other analogous circumstances," including warnings for addictive or sensitive content and age verification mechanisms. While preventing discrimination and protecting children's rights are essential goals of content moderation, as drafted, the vaguely worded provisions of the bill will be extremely difficult for companies to enforce consistently and in ways that provide necessary clarity for digital consumers on acceptable speech.
Furthermore, these obligations apply to an overly broad set of digital service platforms, defined as "any digital infrastructure whose purpose is to create, organize and control, through algorithms and people, a space for interaction where natural or legal persons can exchange information, goods or services." This could cover internet infrastructure services or search engines, which are equipped to restrict access only to entire websites or services, as opposed to specific pieces of content. The obligations could also apply to independent publishers, bloggers, or even e-commerce platforms. Imposing such a broad range of obligations on actors without the resources or business model to moderate at scale can pose risks for their operations, including outlets for online speech. This legal uncertainty and regulatory inconsistency is also likely to pose significant economic risks. It could limit opportunities for innovation and growth, particularly for small and medium-sized companies, and hurt efforts to attract foreign investment in Chile's digital economy.
“Digital Freedom of Expression”
As the Human Rights Council has clarified, the same human rights that people enjoy offline must also be protected online. Unfortunately, the concept of "digital freedom of expression" articulated in the law fails to meet its stated aim of "equivalence of physical and digital spaces," as articulated in Article 4(a). It instead creates a narrower application of the principle specifically for the online space, including by limiting platforms' ability to moderate harmful content.
Article 6, which outlines the “digital freedom of expression” concept, states that digital platforms are only able to restrict content that “may be considered civilly injurious, libelous, or they constitute threats or constitute crimes established by other legal bodies or that incite to commit a crime.” This may not cover some of the most harmful forms of speech, from online harassment to targeted disinformation, that, if left unchecked, can drown out and further marginalize disadvantaged or minority voices. In addition, by narrowing the range of expression that is acceptable in the digital space, but otherwise remains legal in analog forms, the law risks potential discriminatory impacts, undermining the stated goals of Article 9. As we note in the policy brief, the right to freedom of expression is broad in its scope, encompassing “even expression that may be regarded as deeply offensive.” And even with these limits on digital platforms’ moderation practices, platforms will still face significant legal pressure to remove the content that they are in fact authorized to restrict, contributing to uncertainty for platforms and digital consumers alike.
These concerns are reinforced by various provisions that might allow public authorities or other powerful figures to influence the range of content that is allowed on digital platforms. These include vague "neutrality" provisions outlined in Article 5 — which could limit the ability of digital platforms to downrank or otherwise curate content that may be harmful but is not illegal — and the opportunity to submit "correction notices" for "manifestly false information" in Article 6. GNI has written extensively on our concerns about the tradeoffs between the rights to privacy and expression that must be considered in "right to be forgotten" provisions in other jurisdictions, and the proposal in Article 7 lacks the kinds of safeguards present in jurisdictions where this right exists. The bill further limits platforms' moderation capacity by establishing, in Article 13, an excessively high standard for determining the legal value of adherence to a platform's Terms of Service, making them binding on the user only when they relate to "essential and more evident conditions."
Transparency, Digital Due Process, and Data Protection
In the policy brief, we emphasize how laws articulating appropriate standards for content moderation, based on traditional rule-of-law concepts such as transparency regarding decision making, due process around content determinations, and remedy for impacted users, can help meet goals of more effective, fit-for-purpose, and flexible regulation. While the proposal shows a laudable commitment to these principles, the current design of Articles 10, 11, and 12 should be further fleshed out and clarified to truly meet these aims. In particular, we are concerned that consumers' "right to deactivate programs" in Article 10 is vague and unrealistic for platforms of all sizes to implement. Similarly, the call in Article 12 for "fast and transparent" digital dispute resolution mechanisms to process user complaints, subject to appeal to courts, warrants further detail and clarity. As we note in the policy brief, and as set out in the UN Guiding Principles on Business and Human Rights, "effective remedial mechanisms must be legitimate, accessible, predictable, equitable, transparent, rights-compatible, [and] a source of continuous learning." Finally, the emphasis on the right to privacy, including the data protection provisions in Articles 7 and 14, while laudable, further illustrates the overly ambitious scope of this project. Data protection and privacy regulation are complex issues that are already covered in other elements of Chilean law, and should continue to be subject to deliberation in the context of a potential data protection bill in Chile.
As we have noted throughout this submission and in our policy brief, fit-for-purpose and tailored regulation can be a solution for legitimate concerns about harms that result from user-generated content. Any such regulation should be rooted in the international human rights framework and centered on articulating appropriate standards for companies' systems and practices grounded in transparency, due process, and remedy. While the current proposal alludes to these principles, our experience has shown that complex issues require much more rigorous and detailed consideration than what is currently presented. Furthermore, the competing pressures on a broad set of digital platforms — via a stringent liability regime and vague yet significant obligations for content moderation — are more likely to undermine than protect the "digital freedom of expression" at the core of the proposal. Moving forward, we hope the Commission will reconsider the current proposal as drafted and undertake extensive consultation with Chilean and international experts and relevant Senate bodies. We stand ready to support such an effort.