By Heloisa Massaro, Director of Research and Operations, and Danyelle Reis, Researcher, at InternetLab
At the end of June 2025, the Brazilian Supreme Federal Court (STF) delivered a landmark decision on the constitutionality of Brazil’s intermediary liability regime established by Article 19 of the Civil Rights Framework for the Internet (Marco Civil da Internet, Law No. 12,965/2014). Since its enactment in 2014, the provision had shielded internet application providers from civil liability for user-generated content unless they failed to comply with a specific court order to remove it. Two exceptions already existed: one for the non-consensual dissemination of intimate images, which is subject to a notice-and-takedown regime under Article 21, and another for copyright violations, which should have been regulated by specific legislation but remained subject to notice-and-takedown under case law. The ruling declared Article 19 partially unconstitutional and established new liability rules for platforms. In the Court’s view, this shield, devised in 2014, no longer protected fundamental rights or democracy as the digital ecosystem evolved.
Background
The leading case on the constitutionality of Article 19 was RE 1.037.396, which arose in São Paulo shortly after the Marco Civil came into force in June 2014. The case concerned a woman who discovered a fake Facebook profile impersonating her and insulting her friends. When the platform failed to remove the account upon her report, she filed a lawsuit seeking the profile’s takedown and compensatory damages. Facebook removed the profile following an order issued by the judge overseeing the case. The plaintiff then appealed, seeking damages on the grounds that Article 19 was unconstitutional and incompatible with Brazilian consumer protection law. Because constitutional review in Brazil is decentralized, the Court of Appeals could entertain the constitutional argument and ruled in her favor. The case then advanced through successive appeals until it reached the Supreme Court, after Facebook petitioned for recognition of Article 19’s constitutionality. It remained pending for almost a decade before the Court issued its final decision on 26 June 2025.
Core legal changes
The notice-and-takedown regime became the new “general” regime for intermediary liability in Brazil. The Court stated that it applies to claims for damages arising from third-party content in cases involving criminal offenses or unlawful acts.
In these two situations, liability will be presumed even without notice. However, if the provider proves that it acted diligently and within a reasonable time to remove the content, it may be exempted from liability.
The Court also established a new liability regime triggered by systemic failure to promptly remove content involving “serious criminal offenses”, i.e., those relating to a list of crimes that includes anti-democratic acts, terrorism, incitement to self-harm, hate crimes, gender-based violence, child sexual abuse, and human trafficking.
The Court defined “systemic failure” as “lacking adequate preventive or remedial measures, considering technology and provider capacity.” It also clarified that individual pieces of content do not trigger the duty of care; for those, notice-and-takedown will continue to apply.
The glass is half empty: gaps, grey zones, and the limitations of a judicial opinion
Brazil’s regime under Article 19 was internationally recognized for striking a balance between liability and freedom of speech. Over time, however, it became insufficient to address the complexity of today’s online ecosystem, highlighting the need for broader regulation. The Court’s shift to a general notice-and-takedown model is unlikely to meet this demand by itself. Instead of strengthening governance systems, it pushes platforms toward the proactive policing of speech: they are encouraged to manage judicial risk rather than develop rights-based moderation processes. In a country as vast as Brazil, where thousands of judges can reach different conclusions, this creates strong incentives for over-removal, with the heaviest impact on voices already facing barriers to visibility and protection.
This risk is compounded by the vague category of “unlawful acts.” While the incentives for swift removal that this type of regime creates may be justified for severe content, the same cannot be said of the open-ended range of disputes that will fall under “unlawful acts.” From policy debates on drug regulation to reproductive rights, the scope is so broad that legitimate speech can easily be silenced.
The decision also introduced several new terms whose meaning and application remain unclear. One example is “identical content”: material that replicates content a court has already declared unlawful. Under the new rules, platforms must remove such material upon notice, without a new court order, and this applies even to crimes against honor, which otherwise remain under Article 19. The lack of clear exceptions for journalism or humor makes this rule especially concerning.
The need for further clarity is even more acute under the new “duty of care” regime. Key concepts such as systemic failure and adequate measures remain vague, and there is no definition of the procedures for assessing such a failure, nor of who has standing to claim a breach of the duty of care. The Court said that a single post cannot trigger this liability, but gave no parameters for what does. In practice, one judge could treat ten posts as systemic failure while another could demand hundreds. Nor is there any guarantee that what will be analyzed is in fact the adequacy of the provider’s moderation systems, rather than the number of posts that managed to bypass them. Could a single plaintiff, moreover, be compensated for a systemic failure that affects millions of users?
Finally, the decision applies to virtually all “internet application providers” under the Marco Civil, meaning that major platforms are governed by the same principles as blogs, forums, comment sections, collaborative encyclopedias, and community-run services. While large companies may be able to absorb compliance costs, smaller and nonprofit projects could be disproportionately affected. By overlooking this diversity, the Court risks consolidating power in the hands of a few actors and narrowing Brazil’s digital ecosystem.
The glass is half full: a middle ground outcome and the case of crimes against honor
When Congress shelved platform regulation bill 2.630/2020 in 2024, it was already clear that the Supreme Court would revisit the constitutionality of Article 19. At that point, there was little chance the provision would survive intact. The concern grew when the rapporteur, Justice Dias Toffoli, issued an opinion proposing to strike down Article 19 entirely and replace it with strict liability across a wide range of content, which would have been devastating for freedom of speech. Against this backdrop, the final ruling represents a middle ground: it avoids the most extreme path and preserves essential safeguards for speech.
A significant achievement was keeping crimes against honor, such as defamation and slander, under the Article 19 regime rather than shifting them to the new notice-and-takedown system. These cases make up a large share of content-removal lawsuits and are frequently used to intimidate journalists: data from the Monitor of Judicial Harassment Against Journalists shows that most lawsuits identified as harassment rely on honor claims. While the new “identical content” rule still raises concerns, maintaining Article 19 for these cases is an important safeguard for press freedom and democratic debate.
The STF’s ruling is final, but it allows motions for clarification to address ambiguities. Ruling on those motions will be an important opportunity for the Court to refine and better delineate its decision. Nevertheless, even the most detailed judicial decision cannot substitute for broader legislation establishing adequate regulatory measures with proper oversight procedures, and in its decision the Court accordingly urged Congress to legislate. Following this call, the government announced its intention to send a new bill, but no draft has yet been made public. This process will only succeed if it is inclusive, transparent, and participatory, involving civil society, experts, and affected communities. It must also move beyond content-specific liability to account for platform architectures and moderation systems more broadly, taking into consideration providers’ different sizes and models. The stakes for Brazil’s digital ecosystem are high: the way forward must protect democratic debate while addressing real harms. InternetLab will keep working with partners in Brazil and abroad to push for a framework that is inclusive, balanced, and grounded in fundamental rights.