The Right to Information in Human-AI Interactions: Transparency, Accountability, and Civil Liability Under Brazilian Civil Code Reform

2025 · 12 mins read

The incorporation of artificial intelligence (AI) into the field of civil liability represents one of the most significant developments in contemporary private law. The growing autonomy of algorithmic systems, their capacity for learning, and the opacity of their decision-making processes challenge classical models of attribution based on human conduct and subjective fault. In this context, Article 2.027-AL, item IV, of Bill No. 4/2025, which proposes a reform of the Brazilian Civil Code, requires the assignment of civil liability to a natural or legal person for damages caused in digital environments, in accordance with the principle of full reparation.

Rather than creating an autonomous liability regime for AI, the provision integrates duties of transparency, non-discrimination, traceability, and human oversight into the traditional system of civil attribution, reinforcing the role of good faith and the social function of technology as benchmarks for assessing fault. When read in conjunction with Chapter IV (Transparency and Security in Digital Environments) and Article 2.027-Z, it reveals two distinct regimes of attribution.

The first applies to content providers and digital platforms, which are liable for damage caused by third parties whenever there is a systematic failure to comply with the duties of diligence and transparency established in Book VI. The second governs the advertising distribution of third-party content, in which liability arises from active promotion or direct economic benefit derived from the dissemination of harmful content.

In the case of proprietary AI systems—those created or offered directly to the public, such as generative models, automated assistants, or decision-making algorithms—the standard of civil diligence is measured by compliance with the duties enumerated in Article 2.027-AL: non-discrimination, explainability and traceability, effective human oversight, and algorithmic governance. The failure to adopt mechanisms for monitoring, reviewing, and documenting automated decisions increases the degree of culpability and facilitates the establishment of causal links, since negligence can be demonstrated through the absence of records and protocols. Thus, deficient governance acts as an aggravating factor of liability, consistent with the principles of prevention and digital safety.
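By way of illustration, the sketch below shows one possible shape for the kind of auditable decision record that these governance duties presuppose. It is a minimal sketch in Python under a hypothetical schema: the field names and types are not drawn from the bill, which prescribes duties of traceability, documentation, and human oversight rather than any concrete data format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable entry for an automated decision.

    Hypothetical schema: Bill No. 4/2025 imposes duties of traceability,
    documentation, and human oversight, not a concrete data format.
    """
    decision_id: str
    model_version: str                 # which system version produced the output
    inputs_summary: dict[str, Any]     # what the decision was based on
    output: Any                        # the automated decision itself
    criteria: list[str]                # general decision criteria, as disclosed
    human_reviewer: str | None = None  # evidence of effective human oversight
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

A provider able to produce such records can document its diligence; their absence is precisely the evidentiary gap that, on the reading above, aggravates culpability and facilitates the establishment of causal links.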

Moreover, Article 2.027-AM of Bill No. 4/2025 codifies the right to information in interactions with AI systems. The provision recognizes that any natural person who interacts, through interfaces, with AI systems, whether or not embedded in devices, or who suffers harm resulting from their operation, has the right to be informed both that they are engaging with an autonomous technology and about the general model of operation and the criteria for automated decision-making, particularly when such decisions directly influence the exercise of rights or significantly affect economic interests.

This provision introduces, within civil law, an obligation of contextual transparency, which does not require the full disclosure of source codes, algorithmic formulas, or trade secrets, but demands that the information provided be understandable, auditable, and sufficient to enable the individual to exercise contestation and accountability. It thus establishes a duty of qualified communication, directed not merely toward technical disclosure but toward guaranteeing the informational autonomy of individuals, allowing them to understand the legal and economic consequences of their interactions with automated systems.

The provision also incorporates a principle of proportionality in the duty of explainability, limiting the obligation to provide detailed information to cases in which an automated decision “influences rights” or significantly impacts economic interests. This framing functions as a regulatory-intensity trigger: the greater the potential impact of an automated decision, the higher the degree of clarity and detail required in the information disclosed.
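To make this trigger concrete, consider the minimal sketch below. The tiers and the item lists are hypothetical illustrations of the proportionality logic, not language taken from the text of Bill No. 4/2025.

```python
from enum import Enum

class Impact(Enum):
    NEGLIGIBLE = 1  # routine interaction, no meaningful stakes
    ECONOMIC = 2    # significantly affects economic interests
    RIGHTS = 3      # directly influences the exercise of rights

def required_disclosure(impact: Impact) -> list[str]:
    """Map decision impact to the information owed to the individual.

    Hypothetical reading of the proportionality principle: tiers and
    item lists are illustrative, not drawn from Bill No. 4/2025.
    """
    disclosure = ["notice that the counterpart is an AI system"]
    if impact is Impact.NEGLIGIBLE:
        return disclosure
    disclosure += [
        "general model of operation",
        "criteria used for the automated decision",
    ]
    if impact is Impact.RIGHTS:
        disclosure.append("means to contest the decision and seek review")
    return disclosure
```

The point of the sketch is the gradient itself: the same system owes more, and more detailed, information as the stakes of its decisions rise.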

Finally, the rule establishes a direct connection with civil liability, as compliance or non-compliance with the duty of information becomes part of the assessment of diligence and good faith between the parties. The omission or inadequate provision of information may constitute a breach of legitimate trust and give rise to a duty of reparation.

AI LEGAL FRAMEWORK VS. BRAZILIAN CIVIL CODE REFORM

A comparison between the AI Legal Framework (Bill No. 2.338/2023) and the Brazilian Civil Code Reform (Bill No. 4/2025) is essential to assess the coherence and systemic articulation of Brazil’s legal response to AI regulation. While Bill No. 2.338/2023 adopts a regulatory and procedural approach, integrating AI into existing legal frameworks and defining liability according to levels of risk and autonomy, Bill No. 4/2025 offers a dogmatic reconfiguration of Civil Law, embedding positive duties of governance, transparency, and human supervision within the traditional fault-based structure.

This comparative analysis helps determine whether the two bills complement each other or generate overlaps and normative gaps, particularly regarding liability in autonomous systems, consumer protection, and compatibility with sectoral laws such as the LGPD (Brazilian General Data Protection Law). It is also vital for evaluating the alignment of national innovation policies with international ethical standards, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021).

Accordingly, Chapter V of Bill No. 2.338/2023 also addresses civil liability for damage caused by AI systems, adopting an integrative model that refrains from establishing an autonomous regime of attribution. Article 35 stipulates that when damage arises in a consumer relationship, the Consumer Protection Code (Law No. 8.078/1990) applies, preserving the logic of strict liability for suppliers. Article 36 provides that outside the consumer context, AI-related damages remain subject to the Civil Code, without prejudice to the application of the new law’s provisions. The Civil Code thus retains its role as the structural axis of AI-related civil liability, serving as both a subsidiary and interpretative source.

However, the sole paragraph of Article 36 introduces an innovative element by requiring that the applicable liability regime in each case be determined based on two cumulative criteria: (i) the system’s level of autonomy and risk; and (ii) the nature of the agents involved, particularly when subject to special liability regimes (e.g., transporters, physicians, banks, or data providers). This provision creates a dynamic bridge between civil law and technical regulation, allowing attribution to vary according to system risk and autonomy—thus bringing Brazilian law closer to the EU’s AI Liability Directive model.
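The selection logic that this paragraph describes can be sketched schematically. The two cumulative criteria come from the bill; reducing them to booleans and to three outcomes, as below, is a deliberate simplification for illustration, not the bill’s actual test.

```python
from enum import Enum

class Regime(Enum):
    FAULT_BASED = "fault-based (Civil Code, Arts. 186 and 927)"
    RISK_BASED = "stricter, risk-based attribution"
    SPECIAL = "the agent's own special liability regime"

def applicable_regime(high_risk_or_autonomy: bool,
                      special_regime_agent: bool) -> Regime:
    """Illustrative selection logic for the sole paragraph of Article 36.

    Simplified sketch: the bill states cumulative criteria, not this
    decision tree; thresholds and outcomes here are hypothetical.
    """
    # (ii) Agents already subject to special regimes (transporters,
    # physicians, banks, data providers) keep their sectoral rules.
    if special_regime_agent:
        return Regime.SPECIAL
    # (i) Higher autonomy and risk pull attribution toward stricter,
    # risk-based standards; otherwise the default fault-based regime applies.
    return Regime.RISK_BASED if high_risk_or_autonomy else Regime.FAULT_BASED
```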

In practice, Bill No. 2.338/2023 reaffirms the Civil Code’s centrality but through a regulatory lens of risk management. The notion of civil damage is reinterpreted through technological risk governance: the higher the autonomy and unpredictability of a system, the stricter the diligence required. This reasoning is mirrored in Article 37, which allows for reversal of the burden of proof when the victim is in a position of vulnerability or when the technical complexity of the system makes it excessively burdensome to establish traditional elements of liability (conduct, damage, causation, and fault). This mechanism serves as a procedural tool for balancing asymmetries, inspired by consumer law, addressing the evidentiary barriers typical of algorithmic harm and the “black box” opacity effect.

Article 38 further ensures that even within regulatory sandboxes—testing environments for AI experimentation—participants remain fully liable for damages, reaffirming the principle of non-exclusion of reparation during innovation. Article 39 complements this framework by preserving the validity of special laws (such as the LGPD, the Consumer Code, and the Child and Adolescent Statute), reinforcing the intersectoral and non-exclusive character of the regime.

Comparatively, both bills share a structural convergence but differ in normative depth. Bill No. 2.338/2023 operates on the regulatory-procedural level, offering tools for balancing risk and allocating evidentiary burdens, while Bill No. 4/2025 functions on the substantive level, incorporating duties of governance, transparency, and human oversight as independent sources of fault and diligence. Conceptually, Article 2.027-AL, IV, of Bill No. 4/2025 mandates that every AI operation be legally anchored in an identifiable natural or legal person, responsible for full reparation under Articles 186 and 927 of the Civil Code. Yet, Bill No. 2.338/2023 allows for graduated liability based on risk and autonomy, which may generate interpretive tension—particularly in distinguishing the boundaries between fault and strict liability for highly autonomous systems.

CRITIQUES AND TENSIONS IN THE AI LIABILITY MODEL

The integration of AI into civil liability exposes more tensions than solutions. Article 2.027-AL, IV, of Bill No. 4/2025 emerges not as a definitive answer but as an attempt to adapt classical civil law to a technological phenomenon that destabilizes its conceptual foundations. By requiring liability to rest with an identifiable human or legal entity, the provision maintains an anthropocentric paradigm of fault, yet shifts key duties—transparency, traceability, and human oversight—into the technical domain.

This normative transposition mirrors the European experience with the AI Liability Directive: while it seeks to preserve civil law’s conceptual coherence by integrating algorithmic governance duties, it risks producing a hybrid model that is difficult to operationalize, overly reliant on technical evidence, certifications, and compliance parameters that the legal system itself cannot yet fully regulate.

The first critique concerns overconfidence in the normative integration of technical and legal duties. Both in the EU and in Brazil, merely importing notions such as transparency, traceability, or algorithmic governance into the fault-based regime risks creating a framework that is more symbolic than effective, since these duties depend on volatile technical standards that are often difficult to assess judicially. Without clear parameters for auditability and proof of compliance, such duties may degenerate into declaratory or formal liability rather than substantive accountability.

A second critique relates to the evidentiary burden. Bill No. 4/2025 does not explicitly adopt mechanisms for reversing the burden of proof comparable to those in Article 37 of Bill No. 2.338/2023 or Article 4 of the EU proposal. Thus, although it imposes documentation and traceability duties, it fails to guarantee victims equivalent procedural means to obtain technical records necessary to demonstrate causation. European experience shows that without disclosure mechanisms or presumptions of causality, civil regimes tend to reproduce the informational asymmetry typical of algorithmic harm, perpetuating the so-called “black box” impunity.

Finally, systemic concerns arise regarding regulatory overlaps. The coexistence of Bills No. 4/2025 and 2.338/2023 may create duplicative regimes and interpretive uncertainty. The former operates on a dogmatic level, reinforcing the conceptual core of fault and good faith; the latter functions on a regulatory and procedural level, introducing risk parameters and evidentiary rules. If uncoordinated, this duality could fragment the coherence of the liability system—mirroring criticisms directed at the EU’s AI Act and AI Liability Directive for generating overlapping obligations, interpretive conflicts, and increased compliance costs.

Another concern is the “technologization” of civil wrongs. By transforming technical duties into autonomous legal standards of diligence, the law risks displacing the value-centered nature of civil liability—traditionally grounded in the person and the reparatory function—toward a compliance-based, bureaucratic model, where adherence to protocols and documentation might be mistaken for the absence of fault. This trend, already visible in critiques of the AI Act, underscores the danger of a bureaucratized liability system focused more on formal conformity than on substantive justice.

CONCLUSION

Civil liability for artificial intelligence inaugurates a new paradigm in private law, where the attribution of harm can no longer rest solely on traditional notions of human conduct. The convergence between Bill No. 2.338/2023 and Bill No. 4/2025 reflects a legislative effort to balance technological innovation, legal protection, and doctrinal coherence, while exposing the enduring tension between a risk-management regulatory model and a dogmatic model of fault-based liability. Ultimately, the challenge is not merely legal but ethical and institutional: to construct a liability framework that preserves the centrality of the human person and the reparatory function of civil law, without stifling technological progress.
