By Sara Miller
I. Introduction
The ongoing litigation between Megan Garcia and Character Technologies, Inc., and associated individual Defendants raises a novel question: whether the output of artificial intelligence (AI) chatbots may be afforded the same freedom of speech protections under the First Amendment as human expression. Garcia, the mother and personal representative of her teenage son’s estate, brought this action against Character Technologies, Inc., an AI software company, alongside its founders and an investor allegedly involved in the development of its large language model (LLM). Megan Garcia v. Character Technologies, Inc., 785 F. Supp. 3d 1157 (M.D. Fla. 2025). Garcia’s lawsuit alleged that her son died by suicide due to his interactions with the company’s AI chatbot app. Garcia, 785 F. Supp. 3d at 1157. Garcia asserted a myriad of claims, including product liability, intentional infliction of emotional distress (IIED), aiding and abetting, unjust enrichment, wrongful death, and violation of the Florida Deceptive and Unfair Trade Practices Act (FDUTPA). Id. This case analysis homes in on the Defendants’ argument that all of the Plaintiff’s claims are categorically barred by the First Amendment’s protection of speech.
II. Factual Background
The facts below are taken from Garcia’s Amended Complaint and, because the case is at the motion to dismiss stage of the legal process, must be taken as true. The Defendants in this case are Character Technologies, Inc. (Character Tech.), and Daniel De Freitas and Noam Shazeer (the Individual Defendants). Id. at 1165. On April 14, 2023, Garcia’s 14-year-old son Sewell Setzer III downloaded the Character AI app and began to interact with Character AI Characters. Id. at 1167. Character AI’s Characters were anthropomorphic, and interactions between Characters and users were “meant to mirror interactions a user might have with another user on an ordinary messaging app.” Id. As such, the Characters use human mannerisms, including stuttering and filler words or sounds, and even “insist that they are real people” when asked. Id. Prior to August 2024, the app was available to children twelve years and older. Id.
In Sewell’s case, Character AI’s chatbot portrayed a variety of Characters, from a “licensed CBT therapist” to fictional characters from the Game of Thrones franchise. Id. Most of Sewell’s interactions were with these fictional characters, and Garcia alleges that within a couple of months, Sewell became addicted to the app. Sewell even wrote that he felt he had fallen in love with one Character portraying the fictional Daenerys Targaryen. His messages with this Character became romantic and sexual in nature, including messages from the chatbot roleplaying hugging, kissing, and other explicit expressions of an intimate relationship. Id. at 1167–68.
Sewell’s journal entries conveyed that whenever he went even a single day without interacting with these Characters, he became “depressed” or “crazy.” Id. at 1168. Sewell thereafter upgraded from the free version of Character AI to the premium version, which cost $9.99 per month. Subsequently, Sewell’s mental health and school performance deteriorated, prompting his parents to confiscate his phone out of concern that social media was causing his mental health issues. Sewell was nevertheless able to access his phone; he then went into his bathroom and sent his last messages to the Character. These messages included a promise by Sewell to “come home” to the Character, which the chatbot encouraged. Id. Moments later, Sewell suffered a self-inflicted gunshot wound to the head and died an hour later. Id. at 1169. Garcia now brings this action against the Defendants, alleging that Character AI caused her son’s death. Id.
This dispute raises a fundamental question: to what degree may government regulation limit or otherwise curtail the output of artificial intelligence systems?
III. Analysis of First Amendment Implications
The Defendants in this case rely upon the central argument that the First Amendment bars all of the Plaintiff’s claims. Id. at 1170. Defendants contend that Character AI’s output constitutes speech, which users have a right to receive through Character Tech.’s applications. Id. at 1177. However, the Defendants rely primarily on analogy rather than concrete fact to explain how an LLM’s output, or “words strung together by an LLM,” qualifies as speech. Id. The Defendants analogize their chatbots to NPCs, or non-player characters, in video games, which have received protections under the First Amendment. Id. The court clarifies, however, that these nonhuman technologies are protected because they are expressive: they convey information, evoke emotions, and communicate social ideas. Id. Therefore, the central question in this case remains whether Character AI’s output is similarly expressive. To answer it, the court cites the expressive conduct test, which asks “whether conduct is sufficiently similar to speech so as to warrant First Amendment protections.” Id. at 1178.
Google employees raised early concerns that users may ascribe meaning to the output of LLMs because “humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.” Id. at 1166 (quoting Emily M. Bender et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, in Conference on Fairness, Accountability, and Transparency 617 (2021), https://doi.org/10.1145/3442188.3445922). These concerns cast doubt on whether LLM output qualifies as true speech. Under Google’s original reasoning, and despite its later partnership with Character Tech., Character AI’s output thus should not qualify as speech because it consists merely of strung-together symbols that users may interpret as meaningful but that are not inherently so. Relying on Justice Barrett’s concurrence in Moody, the court similarly reasons that an AI algorithm relying on an LLM simply and automatically presents to each user whatever the algorithm predicts the user will prefer. Garcia, 785 F. Supp. 3d at 1178; see Moody v. NetChoice, LLC, 603 U.S. 707, 745–48 (2024) (Barrett, J., concurring).
IV. Court’s Determinations
As such, the District Court held that it was not prepared to classify Character AI’s output as speech and, therefore, to afford it the same protections under the First Amendment that people receive. Garcia, 785 F. Supp. 3d at 1179. The case has thus sparked the interest not only of mental health advocates but also of those interested in freedom of speech under the First Amendment. One such entity is the Foundation for Individual Rights and Expression (FIRE), a nonpartisan, nonprofit organization dedicated to defending freedom of speech, which filed a friend-of-the-court brief urging the court to grant permission for an immediate appeal in hopes of having AI output classified as speech. FIRE, https://www.thefire.org/cases/garcia-v-character-technologies-inc (last visited Jan. 10, 2026). FIRE argues that this issue holds “profound implications” for free speech and that, if AI output remains unprotected, the government may hold limitless power over information and even over users’ ability to employ AI. Id. Moving forward, Garcia will therefore not only impact future cases involving physical harms stemming from AI interactions but will likely define the balance between governmental regulation and the protections afforded to AI output.