AI harm litigation is in early formation across several distinct injury tracks: companion chatbot harm to minors (Character.AI), AI-generated CSAM and deepfakes, AI hallucination defamation, and voice/image cloning fraud. The landmark case is Garcia v. Character Technologies (M.D. Fla. 2024), a wrongful death suit brought by the mother of Sewell Setzer III, a 14-year-old who died by suicide after developing a romantic relationship with an AI chatbot persona. Section 230 immunity, the primary defense, is being challenged on the theory that AI-generated output is the platform's own creation rather than third-party user content and therefore falls outside Section 230's scope.
The tracks rest on distinct causation theories: (1) companion AI: negligent design of parasocial/romantic AI personas targeting minors, plus failure to implement age verification and safety limits; (2) deepfake CSAM: the platform itself directly generating harmful content; (3) hallucination defamation: AI generating false statements of fact about real people, such as fabricated legal citations or false criminal accusations; (4) fraud facilitation: voice cloning used in scams.
Product liability and negligent design are the primary theories of recovery across all four tracks. On the central Section 230 question, courts are beginning to rule that the immunity does not apply to AI-generated content.