Artificial Intelligence and Deepfakes in the Brazilian Civil Code Reform

2025 · 7 min read

The emergence of deepfakes and other forms of synthetic image manipulation by artificial intelligence (AI) has produced one of the most complex contemporary challenges to civil law and the protection of personality rights. The ease of creating hyper-realistic representations of individuals — whether living or deceased — strains traditional principles of consent, truthfulness, and human dignity, exposing the inadequacy of classical rules on image and honor in the face of new generative technologies. In this context, Bill No. 4/2025, which proposes an update to the Brazilian Civil Code, represents a paradigm shift by expressly incorporating artificial intelligence as a normative category within Digital Civil Law. Among the bill's most relevant innovations is Article 2.027-AN, which, for the first time, regulates the creation and use of images of living or deceased persons through AI, establishing clear criteria of lawfulness and protection mechanisms against abuse. The provision structures a tripartite system of lawfulness — based on informed consent, respect for personality rights, and mandatory labeling of synthetic content — and provides a direct civil basis for liability in cases of improper manipulation, such as pornographic, political, or defamatory deepfakes. This article analyzes the new legal regime along three main axes: (i) the content and normative scope of Article 2.027-AN; (ii) its articulation with other legislative initiatives aimed at regulating deepfakes and artificial intelligence; and (iii) the risks of normative overlap and legislative inflation arising from the proliferation of bills addressing the same issue from civil, criminal, and administrative perspectives. The aim is to understand how the reformed Civil Code could function as an axis of unity and coherence for a fragmented yet rapidly evolving legal system, in which the balance between creative freedom, personality protection, and technological transparency becomes a central requirement of the new Brazilian civil law.

CONTENT AND SCOPE OF THE CIVIL CODE BILL

Article 2.027-AN of Bill No. 4/2025 regulates, for the first time, the creation of images of living or deceased persons through artificial intelligence, establishing legal parameters for the lawful use of this technology and safeguards against its misuse. The provision authorizes the use of AI in legitimate activities — such as artistic, audiovisual, educational, or journalistic productions — provided that strict conditions are observed to preserve the dignity and personality of the person portrayed.

Item I establishes, as a central requirement, that prior, express, and informed consent be obtained from the living natural person or, in the case of death, from their legal heirs or representatives, ensuring that the protection of image and honor continues after death. Item II complements this duty by requiring respect for the person's dignity, reputation, and legacy, including their cultural, religious, and political dimensions, thereby barring uses that distort their memory, beliefs, or expressions of thought.

Item III conditions the commercial use of images of deceased persons on authorization by their spouses, heirs, or representatives, or on an express testamentary disposition, recognizing the patrimonial value of image rights and their controlled transmissibility. Item IV reinforces compliance with mandatory and public-order rules, safeguarding ethical and legal limits that prevent the instrumentalization of human personality for illicit or offensive purposes.

The text also provides for complementary rules: §1 prohibits commercial exploitation without consent, except in legally authorized cases (such as journalistic or public interest uses); §2 extends copyright and image protection to AI-generated images, granting heirs or representatives of the deceased ownership of these rights; §3 makes labeling mandatory for all artificially created images, requiring clear and precise information that the content was produced by artificial intelligence; and §4 extends the rule’s application, “where applicable,” to avatars and digital representations of legal entities, recognizing that corporate identity and reputation can also be harmed by the misuse of AI.

Taken together, the article builds a tripod of lawfulness for AI-generated images: (a) valid and informed consent, expressing the individual's self-determination over their own representation; (b) respect for personality rights, preventing degrading, decontextualized, or manipulative uses; and (c) mandatory labeling, ensuring transparency and avoiding confusion between real and synthetic content. These three pillars provide objective normative criteria for assessing the lawfulness of AI-generated productions and serve as a civil basis for liability in cases that violate the image, honor, or memory of natural or legal persons.
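
For readers who find a schematic view helpful, the short Python sketch below models the three pillars as a simple compliance checklist. It is purely illustrative: the SyntheticImageUse record, its field names, and the pass/fail logic are assumptions made for exposition and do not reproduce the wording of Article 2.027-AN or of any real compliance tool.

    from dataclasses import dataclass
    from typing import List

    # Illustrative model only: names and logic are expository assumptions,
    # not a transcription of Article 2.027-AN.
    @dataclass
    class SyntheticImageUse:
        has_informed_consent: bool      # pillar (a): prior, express, informed consent
        respects_personality: bool      # pillar (b): dignity, reputation, legacy preserved
        is_labeled_as_synthetic: bool   # pillar (c): clear notice of AI-generated content

    def lawfulness_issues(use: SyntheticImageUse) -> List[str]:
        """Return the pillars of the 'tripod of lawfulness' that the use fails to meet."""
        issues = []
        if not use.has_informed_consent:
            issues.append("missing prior, express, and informed consent")
        if not use.respects_personality:
            issues.append("use offends dignity, reputation, or legacy")
        if not use.is_labeled_as_synthetic:
            issues.append("content is not labeled as AI-generated")
        return issues

    # A deepfake published without consent and without a synthetic-content label.
    example = SyntheticImageUse(
        has_informed_consent=False,
        respects_personality=True,
        is_labeled_as_synthetic=False,
    )
    print(lawfulness_issues(example))

In this toy example, the hypothetical use fails two of the three pillars, which, under the logic of the provision, would expose its author to civil liability regardless of whether any single pillar alone had been satisfied.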

In practice, the provision establishes a clear legal basis for takedown requests and compensation claims in cases of abusive deepfakes, which are proliferating in digital environments. Situations such as manipulation of intimate images, unauthorized posthumous reconstructions, false political representations, or improper commercial exploitation now receive express treatment in the Civil Code, without depending solely on the broad interpretation of existing image and honor rules.

Article 2.027-AN therefore expands the scope of personality rights into the domain of artificial intelligence, while integrating the duty of transparency and the principle of human dignity as structuring elements of Digital Civil Law. From a comparative perspective, the provision aligns the Brazilian legal framework with international initiatives — such as the European AI Act and UNESCO's recommendations on AI ethics — which likewise recognize the need for consent and for mandatory labeling of synthetic creations, seeking to balance technological innovation, freedom of expression, and the protection of human identity.

The discipline established in Article 2.027-AN of Bill No. 4/2025 sits within a national normative framework still taking shape, marked by the proliferation of legislative initiatives aimed at curbing and preventing the abusive use of AI technologies to create false images or deepfakes. In recent years, the National Congress has discussed several bills addressing the issue from complementary perspectives — civil, criminal, electoral, and personality rights — demonstrating the topic's complexity and the need for systemic coordination among legal spheres.

OTHER LEGISLATIVE INITIATIVES REGULATING DEEPFAKES

Currently, more than twenty legislative proposals address the use of AI to create false pornographic or defamatory content. Among them, Bill 9930/2018, introduced by Representative Erika Kokay (PT-DF), criminalizes the non-consensual dissemination of intimate images and videos of women, proposing the inclusion of a specific criminal offense in the Penal Code (Art. 216-H), with penalties of detention and fines. Similarly, Bill 3902/2023 proposes amending the Brazilian Internet Civil Framework (Law No. 12.965/2014) to prohibit the creation, distribution, and commercialization of applications and programs designed to produce pornographic deepfakes, targeting technology providers and developers.

Along the same lines, Bill 5695/2023 seeks to criminalize the alteration of photos, videos, and audio using AI for the purpose of committing violence against women, treating deepfakes as a form of digital harassment and psychological violence. Other proposals, such as Bills 5721/2023 and 5722/2023, aim to increase penalties for creating or distributing manipulated nude or sexual content without consent, strengthening the protection of victims' image, intimacy, and honor.

Another significant axis of legislative discussion concerns the use of images and voices of deceased individuals. Bills 3592/2023, 3608/2023, and 3614/2023 propose guidelines to protect dignity, privacy, and post-mortem rights, delimiting the use of biometric data, images, and voices of deceased persons by AI. Bill 4025/2023 follows a similar path, proposing amendments to the Civil Code and the Copyright Law to regulate the use of images and sounds of living or deceased persons, as well as the ownership of copyright over works generated by artificial intelligence systems.

Against this backdrop, Bill 4/2025, by introducing Article 2.027-AN into the Civil Code, seeks to establish a general civil basis of lawfulness and liability applicable across these contexts. In this sense, Article 2.027-AN would function as a structuring norm for the civil system of protection against deepfakes, providing the foundation for civil liability and for remedies such as content takedown and compensation for moral and material damage.

However, the set of AI and deepfake-related bills currently under consideration reveals a trend toward normative inflation, with initiatives addressing identical issues from different legal angles. Although this multiplicity demonstrates legislative sensitivity to the topic’s urgency, it also creates overlap and potential normative conflict. This redundancy undermines legal coherence and generates overlapping punitive regimes.

CONCLUSION

The advance of artificial intelligence — particularly generative technologies — opens a frontier where personality protection is challenged by new forms of representation, reproduction, and manipulation of human identity. Deepfakes epitomize this dilemma: while they expand expressive and creative possibilities, they also become instruments for violating image, honor, and memory, demanding a coherent, proportionate, and technically informed legal response.

In this context, Article 2.027-AN of Bill No. 4/2025 represents a milestone in the civil codification of transparency obligations and the right to informational self-determination over one’s image. By establishing the tripod of consent, respect for dignity, and mandatory labeling, it reinforces the core principles of Digital Civil Law. At the same time, the multiplicity of bills addressing deepfakes across criminal, civil, and administrative spheres highlights the urgency of systemic normative coordination.
