
The discussion on the civil liability regime for internet application providers in Brazil took on a new dimension following the Federal Supreme Court's recognition of the partial and progressive unconstitutionality of Article 19 of the Brazilian Civil Rights Framework for the Internet (Law No. 12,965/2014). The provision, originally conceived as a mechanism to protect freedom of expression and legal certainty, conditioned the civil liability of platforms on the existence of a specific court order determining the removal of illegal content. Although innovative at the time, this model became insufficient in the face of profound changes in the digital ecosystem, marked by the large-scale dissemination of illegal content, the coordinated action of artificial distribution networks, and the intensive use of platform infrastructure for the commission of serious crimes that affect both individual rights and the integrity of the democratic regime.
The Brazilian Civil Rights Framework for the Internet (Law No. 12,965/2014) established a specific model of civil liability for application providers in Brazil. Its Article 19 enshrined the so-called "specific court order rule," under which platforms can only be held civilly liable for damages resulting from content generated by third parties if, after being notified by court order, they fail to remove the content identified as illegal. This was a deliberate legislative choice: by ruling out private censorship mechanisms and automatic removal, the legislator sought to protect freedom of expression, preventing platforms from acting as "arbiters of legality" of speech. The provision therefore established a judicial safe harbor, distinct both from the broader US model and from the more restrictive European model of the time.
The US model, enshrined in Section 230 of the Communications Decency Act (CDA), enacted in 1996, is internationally recognized as the broadest platform immunity regime. In summary, the provision establishes that no provider or user of interactive computer services may be treated as a publisher or held responsible for content published by third parties. This means that platforms such as forums generally enjoy broad civil immunity and cannot be held liable for the maintenance of illegal content published by users, even if reported extrajudicially. The logic behind Section 230 is to encourage innovation, reduce legal risks, and prevent platforms from excessively removing content to avoid litigation—a central concern of US lawmakers in the 1990s, when the digital environment was still in its infancy.
In contrast, the original European model, enshrined in Directive 2000/31/EC (Electronic Commerce Directive), adopted a more cautious and restrictive stance. The directive established that hosting providers would only be held liable if, having "actual knowledge," they did not act promptly to remove illegal content. Unlike the US rule, the European Union adopted a mechanism known as notice and takedown, according to which liability arises when the provider is notified and remains inactive. The directive did not require a court order to characterize this "actual knowledge" — it was sufficient for a sufficiently clear notification to be sent to the provider. The European model sought to balance protection of freedom of expression and protection of fundamental rights such as honor, intellectual property, privacy, and security.
Brazil, in drafting the Civil Rights Framework for the Internet, made a distinctive choice. Instead of replicating the broad US immunity, the legislature sought to reduce the risk of abusive private removals while rejecting the European idea that a platform could be held liable simply for having "actual knowledge." A hybrid model was then created: a conditional safe harbor, which makes liability depend on a specific court order. This option is reflected in the core of Article 19, whose heading establishes that the provider can only be held liable if, after a court order, it fails to make the content identified as illegal unavailable. The central idea was to prevent platforms, for fear of litigation, from preventively removing legitimate content, thus reinforcing freedom of expression.
The core of the rule lies in the heading of Article 19: "The internet application provider may only be held civilly liable for damages resulting from content generated by third parties if, after a specific court order, it fails to take steps, within the scope and technical limits of its service, to make the content identified as infringing unavailable." This design, however, has become insufficient in the current digital landscape, where massive damage occurs in seconds and the requirement for a court order makes timely responses unfeasible.
Article 21 of the Civil Rights Framework, in turn, establishes a relevant exception to the rule in Article 19 by providing for liability regardless of a court order in situations involving intimate content, such as non-consensual pornography, leaked nudes, and sexual material disclosed without the victim's authorization. In these cases, the provider must remove the content after an extrajudicial notification containing the minimum elements needed to identify the illegality. If this duty is not fulfilled, the provider becomes liable for damages. The objective is to protect extremely sensitive personality rights, the damage to which spreads irreversibly and instantly. Article 21, therefore, introduced the logic of liability for omission following notification by the victim, serving as the embryo of a more diligent and active role for platforms.
The Supreme Court, in reinterpreting Article 19, recognized that the American model is excessively permissive and that the European model of the time, although more protective, also failed to address systemic risks adequately, which is why the European Union approved the Digital Services Act in 2022, introducing duties of diligence, transparency, and risk mitigation. The Brazilian decision is closer to this new paradigm.
It was in light of these two provisions that the Federal Supreme Court was called upon to rule on the constitutionality of Article 19 in the set of lawsuits that comprised the so-called "Direct Claims of Unconstitutionality of the Brazilian Civil Rights Framework for the Internet." The central issue in the judgment was whether the specific court order rule would be compatible with the Constitution in the current information environment, marked by coordinated disinformation, digital political violence, hate speech, attacks on vulnerable groups, algorithmic manipulation, and massive practices of violation of fundamental rights.
The Court opted for a model of progressive unconstitutionality, establishing that Article 19 remains in force but can no longer be applied in a comprehensive and unrestricted manner, requiring an interpretation in conformity with the Constitution until the legislature produces rules suited to the new technological realities. Under this new interpretation, the liability regime for providers now includes significant nuances and exceptions. First, until a specific law is enacted, platforms may be held civilly liable under Article 21 of the Civil Rights Framework itself whenever they are faced with content that constitutes serious crimes or highly reprehensible illegal acts, without prejudice to their duty to remove it.
The Supreme Court established, in detail, an exhaustive list of conduct that requires an immediate and diligent response, covering anti-democratic crimes provided for in the Penal Code, terrorist acts or preparatory acts, inducement or incitement to suicide and self-mutilation, crimes of racial or gender-identity discrimination, crimes against women, sexual crimes against vulnerable persons, child pornography, and human trafficking. In these cases, a platform's failure to address the massive circulation of such content constitutes a systemic failure and gives rise to liability, provided that the preventive or corrective measures adopted are proven to be insufficient.
In addition, the decision clarifies that isolated incidents do not automatically give rise to civil liability; in such cases, the provisions of Article 21 apply. When there is massive circulation of serious illegal content, however, the provider must take immediate action commensurate with the state of the art, under a qualified duty of care. This duty does not imply strict liability, which remains excluded, but it does establish a presumption of liability in cases involving paid advertisements and boosted content or artificial distribution networks, such as bots or automated systems, used to amplify illegal content. In such cases, the platform is only exempt if it proves that it acted diligently and within a reasonable time to make the content unavailable.
The decision also brought important refinements regarding recurring practices on platforms. For crimes against honor, the original regime of Article 19 remains in place, requiring a court order, although removal upon extrajudicial notification is permitted whenever the provider chooses to act. In cases of successive replications of content already declared illegal by court decision, all platforms must remove identical publications, without the need for new court orders. This is a "notice and stay down" mechanism, which seeks to prevent the continuous reintroduction of the same offensive content, a problem that, in practice, undermines the effectiveness of traditional court decisions.
Furthermore, the Supreme Court has defined the cases in which Article 19 continues to apply in full, safeguarding the differentiated regime for services whose functionalities justify greater protection of confidentiality and private communication. Thus, email providers, closed video or voice meeting platforms, and instant messaging services remain subject to the original rules, exclusively with regard to interpersonal communications protected by Article 5, XII, of the Constitution.
As part of the reinterpretation of the regime, the Court established structural duties for platforms, aligning the Brazilian system with international trends such as the European Digital Services Act. Among these duties, the following stand out: (i) the adoption of notification and due process mechanisms; (ii) the publication of periodic transparency reports; (iii) the creation of accessible and widely publicized service channels; and (iv) the maintenance of a legal representative in Brazil with full powers to respond administratively and judicially, comply with determinations, provide information, and bear penalties. Finally, the Supreme Court modulated the effects of the decision, establishing that the new guidelines apply only prospectively, in order to protect legal certainty and avoid retroactive impact on consolidated situations.
The decision represents a milestone in the redesign of platform liability in Brazil. By recognizing the inadequacy of the original model and establishing provisional guidelines for its correction, the Supreme Court has struck a careful balance between the protection of fundamental rights, freedom of expression, and democratic integrity. The challenge now falls to the National Congress, which must draft comprehensive, technically sound legislation compatible with the systemic risks of digital communication, ensuring that the online environment is simultaneously safe, democratic, and protective of rights.
The Brazilian Supreme Court has redefined the civil liability regime for internet application providers by recognizing the partial and progressive unconstitutionality of Article 19 of the Brazilian Civil Rights Framework for the Internet. The former “specific court order” safe harbor remains formally in force, but no longer applies as a general, unrestricted rule. Platforms now face expanded duties of care, diligence, and risk mitigation, particularly in cases involving serious crimes, systemic harms, and artificial amplification of illegal content.
• Why it matters (not business as usual)
This is not a marginal doctrinal adjustment. The decision marks a structural shift from a reactive, judicially mediated liability model to a risk-based governance framework, closer to the logic of the EU Digital Services Act. The ruling challenges the premise that platforms can remain neutral intermediaries until judicial intervention and introduces liability exposure tied to scale, recurrence, monetization, and algorithmic amplification.
• Executive impact
Legal, compliance, trust & safety, and public-policy teams must reassess:
• Exposure arising from massive or systemic circulation of illegal content;
• Liability linked to paid advertisements, boosted content, and automated distribution networks;
• Evidentiary burdens to demonstrate timely and adequate diligence;
• Operational readiness to comply with transparency, due process, and local representation duties.
• Where to go deep
• The emergence of a qualified duty of care without adopting strict liability.
• Notice-and-stay-down obligations and their operational implications.
• Alignment with international trends (EU Digital Services Act).
• Decisions to make
• Whether to recalibrate moderation, escalation, and takedown protocols beyond court-order dependency.
• How to quantify and communicate litigation and regulatory exposure to boards and investors.
• Whether to proactively align internal governance with anticipated legislative reform.
• Next steps
• Conduct a platform-specific risk and liability mapping, focusing on amplification mechanisms.
• Stress-test internal policies against scenarios of mass harm and coordinated illegal activity.

This section gives quick answers to the most common questions about this insight: what changed, why it matters, and the practical next steps. If your situation needs tailored advice, contact the RNA Law team.
Q: What changed, and why is this not “business as usual”?
A: The Supreme Court effectively dismantled the idea that a court order is always a prerequisite for platform liability. While Article 19 formally remains, it no longer shields platforms in cases involving serious crimes, systemic dissemination, or artificial amplification. Liability now hinges on risk, scale, and diligence, not mere inaction after judicial notice.
Q: Does this mean platforms are now strictly liable?
A: No. The Court explicitly rejected strict liability. However, it introduced a presumption of liability in scenarios involving paid promotion, boosting, or automated distribution, which platforms can only rebut by proving timely, proportionate, and effective action.
Q: Which areas of organizations will feel the impact first?
A: Trust & safety, legal, compliance, advertising, and product governance teams will be immediately affected—especially those responsible for content moderation, ad review, algorithmic amplification, and crisis response.
Q: What is now considered a high-risk content category?
A: The STF set out an exhaustive list, which covers:
• Anti-democratic crimes;
• Terrorism and preparatory acts;
• Incitement to suicide or self-harm;
• Racism and gender-identity discrimination;
• Crimes against women;
• Sexual crimes against children and vulnerable persons;
• Human trafficking.
Mass circulation of such content triggers heightened duties and potential liability.
Q: How does this affect crimes against honor?
A: For defamation, slander, and insult, Article 19 remains applicable, requiring a court order. Platforms may remove content voluntarily after extrajudicial notice, but are not compelled to do so.
Q: What is the significance of “notice and stay down”?
A: Once content is judicially declared illegal, platforms must remove all identical replications, even without new court orders. This mechanism addresses the systemic failure of traditional takedown models in the face of rapid content replication.
Q: What structural obligations were imposed on platforms?
A: The Court imposed duties including:
• Clear notification and due process mechanisms;
• Periodic transparency reporting;
• Accessible service and complaint channels;
• A fully empowered legal representative in Brazil.
Q: What should be done in the next 30–90 days?
A:
• Review moderation and escalation protocols for systemic-risk scenarios.
• Audit advertising and boosting practices for liability exposure.
• Align internal governance with risk-based diligence standards, anticipating legislative reform.