
The growing adoption of artificial intelligence-based meeting assistants in corporate, academic, and institutional environments marks a new stage in the digitization of human communication. These tools promise significant efficiency gains through the recording, automatic transcription, and analysis of virtual meetings. Yet the same technology that streamlines workflows opens a legal gray area, because it operates on extremely sensitive data: the human voice and the content of private communications.
In this context, recent class action lawsuits filed in the United States against AI meeting assistant platforms, such as Cruz v. Fireflies.ai and Brewer v. Otter.ai, challenge the legitimacy of using voice data and communication content from participants who are not direct users of the platforms and who never freely and knowingly consented to such processing.
Although these lawsuits do not sit at the center of the regulatory debate on AI, they expose fundamental tensions around consent, the purpose of data processing, the economic exploitation of informational assets, and the legal position of so-called "non-users." It is precisely in such peripheral disputes, rooted in everyday and widespread uses of AI, that some of the most concrete and immediate problems of the contemporary data economy emerge.
The class actions filed against the Fireflies.ai and Otter.ai platforms reveal, from different regulatory perspectives, the same structural problem: the collection and use of voice data from participants in virtual meetings without the valid consent of all data subjects involved.
The Cruz v. Fireflies.ai Corp. case falls under the Illinois Biometric Information Privacy Act (BIPA), a state statute widely recognized for its strict protection of biometric data. Unlike regulations that treat biometrics generically, BIPA expressly defines "voiceprints" as biometric identifiers, subjecting their collection and processing to strict formal requirements.
The controversy centers on Fireflies' "speaker recognition" feature, which analyzes participants' distinctive vocal characteristics to determine who is speaking during a meeting. According to the complaint, this process creates vocal templates capable of persistently identifying individuals, which would constitute the processing of biometric data under Illinois law.
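To make the mechanism at issue concrete, the sketch below illustrates, in simplified form, how speaker recognition generally works: each voice sample is reduced to a fixed-length numeric template (a "voiceprint"), and new audio is attributed to the enrolled speaker whose template it most closely matches. This is a minimal illustration using toy spectral features; it is not Fireflies' actual implementation, and the `voice_template` and `identify` functions are hypothetical names introduced here for clarity.

```python
import numpy as np

def voice_template(audio: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Reduce an audio signal to a fixed-length spectral "voiceprint".

    A toy stand-in for a real speaker-embedding model: average the
    magnitude spectrum into n_bins and L2-normalize the result.
    """
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bins)
    template = np.array([band.mean() for band in bands])
    return template / (np.linalg.norm(template) + 1e-12)

def identify(sample: np.ndarray, enrolled: dict) -> str:
    """Return the enrolled speaker whose stored template is most
    similar (cosine similarity) to the sample's template."""
    t = voice_template(sample)
    return max(enrolled, key=lambda name: float(enrolled[name] @ t))

# Hypothetical enrollment: two synthetic "voices" with energy in
# different frequency bands stand in for real speakers' audio.
sr = 16_000  # sample rate in Hz
time = np.linspace(0, 1, sr, endpoint=False)
speaker_a = np.sin(2 * np.pi * 220 * time) + 0.5 * np.sin(2 * np.pi * 440 * time)
speaker_b = np.sin(2 * np.pi * 3000 * time) + 0.5 * np.sin(2 * np.pi * 5000 * time)
enrolled = {"speaker_a": voice_template(speaker_a),
            "speaker_b": voice_template(speaker_b)}

# A new utterance spectrally close to speaker_a is attributed to that
# speaker, which is how an assistant labels "who is speaking".
utterance = np.sin(2 * np.pi * 230 * time) + 0.4 * np.sin(2 * np.pi * 450 * time)
print(identify(utterance, enrolled))  # -> speaker_a
```

Production systems use learned neural embeddings rather than raw spectral averages, but the legal point is the same: once enrolled, the stored template persistently identifies a specific person's voice across meetings.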
The litigation involving Otter.ai, for its part, has a broader and more complex scope, combining claims under federal and state laws that protect the privacy of communications. Unlike the Fireflies case, which focuses on vocal identity, here the focus shifts to the content of conversations and to how that content is intercepted, recorded, and reused.
According to the complaint, Otter does not act as a merely passive tool in the user's service, but as an independent third party that intercepts private communications to which it is not a party. By joining meetings automatically, the platform allegedly records and reads the content of conversations "in transit."
For comparative purposes, analyzing the US litigation requires an understanding of the deep structural differences between the American and Brazilian legal systems. In the United States, privacy and personal data regulation follows a fragmented, sectoral model. The Brazilian model rests on a different logic, structured predominantly around federal statutes such as the General Personal Data Protection Law (LGPD, Law No. 13,709/2018).
Under the LGPD, the voice, as a characteristic capable of directly or indirectly identifying an individual, is generally classified as personal data. When linked to the biometric identification of a natural person, it is classified as sensitive personal data (Art. 5, II), subject to a more restrictive legal regime. Speaker recognition practices, or the training of automatic speech recognition (ASR) models on individualizing vocal characteristics, would therefore tend to qualify as sensitive data processing.
Beyond the LGPD, the processing of voice data in virtual meetings must also be analyzed in light of the Brazilian Civil Rights Framework for the Internet (Marco Civil da Internet, Law No. 12,965/2014). Its Art. 7 guarantees users the inviolability of their intimacy and private life, as well as the confidentiality of stored private communications.
Finally, a relevant issue in these cases is the underlying economic dynamic: the systematic incorporation of informational assets produced by third parties into artificial intelligence models that subsequently compete, directly or indirectly, with the very parties who produced those assets in their economic exploitation.
In Brewer v. Otter.ai, the complaint alleges that the platform uses meeting recordings to train its own speech recognition models, improving a commercial product that generates its own competitive advantage. This suggests a structure of externalized costs and internalized benefits, in which AI companies capture data at scale to extract autonomous economic value without remuneration or adequate transparency.
Although they are not the epicenter of the regulatory debate on artificial intelligence, disputes over meeting assistants and the use of voice data serve as edge cases that anticipate broader and recurring conflicts. They contribute to the maturing of legal reflection on the limits of data exploitation in the digital economy and on the role of law in containing innovation models built on informational asymmetry.

This section gives quick answers to the most common questions about this insight: what changed, why it matters, and the practical next steps. If your situation needs tailored advice, contact the RNA Law team.
Q1: What is the main legal issue in the US lawsuits against AI meeting assistants?
A1: The lawsuits question the legitimacy of collecting and using voice data and communication content from participants who are not direct users and never consented to the data processing.
Q2: How does the Illinois Biometric Information Privacy Act (BIPA) apply to these tools?
A2: BIPA treats voiceprints as biometric identifiers. If an AI tool uses speaker recognition to identify individuals, it must follow strict formal requirements, including obtaining written consent.
Q3: How is voice data classified under Brazilian law?
A3: Under the LGPD, voice data capable of identifying a person is considered personal data, and when linked to biometric identification, it is treated as sensitive personal data.
Q4: Can a meeting host's consent cover all participants in an AI-recorded meeting?
A4: Recent litigation argues that consent must be individual and specific; the consent of a host or a single user cannot be presumed to apply to all other speakers.
Q5: What is the risk of using meeting recordings to train AI models in Brazil?
A5: Such use may violate the LGPD's purpose limitation principle if the data was collected for transcription but then repurposed to improve commercial AI models without clear notice or a valid legal basis.