Artificial Intelligence Update - April 2026

April 15, 2026
Business Litigation Reports

Generative AI and Section 230: Where Courts Are Drawing the Line

            A growing number of lawsuits are testing how existing law applies to generative AI systems, particularly conversational AI and large language models.  Commentary on generative AI has often framed liability questions in terms of whether Section 230 of the Communications Decency Act (47 U.S.C. § 230) would shield developers and platforms.  But the first wave of cases suggests that many disputes involving conversational AI are being litigated on theories that do not depend on Section 230 at all.

            Instead, courts are allowing claims to proceed, or resolving them, under traditional principles of products liability, negligence, discrimination law, and defamation.  The emerging pattern is not a narrowing of Section 230’s scope, but a shift in the kinds of claims being brought and litigated.

Section 230 Governs Traditional Platform Functions

            Section 230 remains a powerful defense where claims depend on treating a defendant as the publisher or speaker of third-party content.  Courts continue to apply the statute broadly in cases involving recommendation algorithms, moderation decisions, and the organization or display of user-generated content.  E.g., M.P. by & through Pinckney v. Meta Platforms Inc., 127 F.4th 516, 521 (4th Cir. 2025), cert. denied sub nom. M. P. By & Through Pin v. Meta Platforms Inc., 146 S. Ct. 287 (2025); Six4Three, LLC v. Facebook, Inc., 109 Cal. App. 5th 635, 655 (2025).

            Courts remain divided over how far those protections extend in cases involving highly personalized recommendation systems.  In Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024), the Third Circuit allowed certain claims to proceed based on allegations that TikTok’s recommendation algorithm affirmatively steered harmful content to a minor user, reasoning that an algorithmically curated feed may in some circumstances be characterized as a platform-created product.  But other courts have rejected that approach.  In Patterson v. Meta Platforms, Inc., 2025 WL 2092260 (N.Y. App. Div. July 25, 2025), the court held that recommendation algorithms remain protected editorial functions, warning that removing Section 230 protection based on algorithmic curation would expose platforms to effectively unlimited liability for third-party content.

            This split in authority over whether algorithmic recommendation systems constitute protected editorial functions may prove relevant as courts begin to confront questions about the liability of autonomous AI systems that generate original content rather than merely curating third-party material.  The underlying tension over whether algorithmic systems are editorial tools or independent products has clear implications for conversational AI.

Generative AI Cases Are Being Litigated on Different Theories

            By contrast, courts addressing conversational AI systems have generally centered their analysis on the design, operation, and foreseeable risks of the systems themselves.

            In Garcia v. Character Technologies, Inc., 785 F. Supp. 3d 1157 (M.D. Fla. 2025), a federal district court declined to dismiss claims arising from a minor’s allegedly harmful interactions with a conversational AI chatbot.  The court held that the defendants plausibly owed a duty of care based on the foreseeable risks of anthropomorphic AI systems and declined, at the motion-to-dismiss stage, to resolve defenses premised on treating the defendants as publishers.  Garcia illustrates how plaintiffs have pursued claims based on the design and operation of conversational AI systems, rather than on the publication of third-party content.

            Similarly, courts have been willing to resolve claims involving allegedly false statements generated by large language models under traditional defamation doctrines.  In Walters v. OpenAI, L.L.C., Case No. 23-A-04860-2 (Ga. Super. Ct. Gwinnett County May 19, 2025), a Georgia court granted summary judgment to AI developer OpenAI on a claim based on incorrect statements generated by ChatGPT, applying ordinary fault and damages principles.  The court found that ChatGPT’s disclaimers and warnings meant that no reasonable person could have understood its output as communicating actual facts, and that OpenAI’s extensive efforts to reduce AI “hallucinations” demonstrated the absence of negligence or actual malice.  The decision suggests that AI developers who implement robust mitigation measures and provide clear warnings to users may successfully defend against defamation claims under traditional tort standards.

The Importance of Framing Claims

            The emerging pattern is not a wholesale retreat from Section 230, but a shift in how claims are framed.  Claims that depend on treating an online service as a publisher of information continue to encounter strong Section 230 defenses.  But claims framed around system design, safety features, duty of care, or platform conduct have sometimes proceeded under traditional legal frameworks that do not implicate the statute.

            This distinction is not unique to generative AI.  Some courts have allowed claims to proceed in cases alleging harm arising from product design or platform conduct, even where third-party content played a role in the underlying events.  E.g., Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021); In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F. Supp. 3d 809 (N.D. Cal. 2023); Liapes v. Facebook, Inc., 95 Cal. App. 5th 910 (2023).  In Liapes, for example, the California Court of Appeal found that Section 230 did not bar discrimination claims challenging Facebook’s ad-delivery algorithms where those algorithms allegedly made their own discriminatory decisions.

            Early generative-AI cases reflect a similar analytical approach.  Where alleged harm is plausibly attributed to the design and operation of an automated system, rather than to the publication of third-party content, courts have been willing to evaluate claims under traditional legal principles without discussing Section 230.  Accordingly, the closer a claim comes to plausibly challenging system design, safety features, or AI’s autonomous decision-making, rather than the publication of third-party content, the more likely it is to proceed beyond the pleading stage.  This pattern suggests that AI developers may face greater exposure from design-defect and duty-of-care theories than from claims that would traditionally trigger Section 230 immunity.

Conclusion

            As generative AI litigation moves into discovery and trial, the factual record of how systems are designed, trained, and deployed is likely to become decisive.  That record will shape whether courts characterize a system as a neutral intermediary engaged in publication or as a product whose design and operation are subject to traditional tort principles.  Companies deploying generative AI should account for these dynamics in system design and documentation, since architectural and recordkeeping choices made today may shape liability exposure tomorrow.