Throughout 2025, artificial intelligence reshaped the litigation landscape, created new federal and state regulatory hurdles and challenged the way law firms conducted business and delivered services to clients.
Perhaps the biggest impact came in the intellectual property space, where IP rights holders pursued a series of cases against AI developers over copyrights and trademarks. Law firms and news outlets count at least four dozen significant federal court cases over AI companies’ use of copyrighted or trademarked content.
Thus far, IP rights holders have had a mixed track record in court, and some are opting to negotiate settlements and licensing agreements with AI providers. But many companies appear to be digging in for a long season of litigation, and they are adapting along the way as they see how judges react to claims and as they gain new information in discovery about AI developers’ use of their content.
This new information, Debevoise & Plimpton recently wrote, is helping IP rights holders “cure deficiencies that led to previous motions to dismiss.”
Companies Jump In
In its just-released review of 2025’s AI-related intellectual property disputes, Debevoise found that courts are “finally beginning to confront the substantive merits of plaintiffs’ infringement claims and defendants’ fair use defenses.” The firm said it is tracking more than 50 lawsuits between intellectual property owners and AI developers that are pending in U.S. federal courts.
Debevoise noted that companies with significant IP assets see generative AI as a potential threat. And “rather than waiting on the sidelines,” those companies have decided to pursue litigation to protect their content, the firm reported.
“Though headlines have focused on the first major fair-use decisions addressing AI training, those aren’t the only developments affecting the litigation landscape,” Debevoise said. “Less-reported but increasingly outcome-determinative litigation trends from 2025 include the rise in corporate plaintiffs and class actions, with fierce discovery battles and class certification becoming pivotal battlegrounds shaping case trajectories.”
Given the extraordinary valuations of leading AI companies (ChatGPT maker OpenAI, for example, is now valued at $500 billion), high-profile plaintiffs are joining forces to pursue litigation against AI developers, a move Debevoise said “may help manage the high cost and complexity of discovery in AI litigation—especially against very well-resourced defendants.”
Collective Action
A key example is a case brought in February by 14 major magazine and newspaper publishers—including Condé Nast, Vox, The Atlantic, Axel Springer companies, The Guardian and others—against AI developer Cohere Inc.
The case, Debevoise wrote, is “significant as it represents the largest collective action by media organizations against an AI developer to date and reinforces the trend of well-resourced corporate plaintiffs pursuing multi-pronged legal challenges to protect their IP assets.”
In their complaint, the media companies allege that Cohere unlawfully reproduced, distributed and displayed their copyrighted works, including by delivering outputs that are full verbatim copies, substantial excerpts or substitutive summaries of their works.
Cohere, which is valued at nearly $7 billion and recently raised $500 million from investors, has hired Orrick, Herrington & Sutcliffe for its defense.
The deep pockets and Big Law support have not halted the case, however. In November, the U.S. District Court for the Southern District of New York rejected Cohere’s motion to dismiss claims of direct and secondary copyright infringement, trademark infringement and false designation of origin. Oppenheim & Zebrak, a Washington, D.C., litigation boutique, is representing the publishers.
Cases and Settlements
The publishers may be hoping for a settlement like the one Anthropic reached with a group of U.S.-based authors over the summer. In June, the U.S. District Court for the Northern District of California ruled that the AI company—facing a class action by the authors—may have illegally downloaded some 7 million books to train its large language models.
Two months later, the company settled the case for as much as $1.5 billion. According to Reuters, an unfavorable decision in the case could have made Anthropic liable for billions of dollars in damages. Quoting Subha Ghosh, a Syracuse University law professor who specializes in IP law, Reuters wrote that the settlement could be “huge in shaping” current and future litigation against AI companies.
Another major media player, The New York Times, is pursuing two of the highest-profile infringement cases. In late 2023, the company sued OpenAI and Microsoft for using its content to train large language models without permission. The case is in discovery, with the Times recently demanding that OpenAI hand over 20 million private ChatGPT conversations to find examples of users attempting to breach the newspaper’s paywall.
On Dec. 5, the Times also sued Perplexity AI, accusing it of stealing the newspaper’s journalism to power its AI research assistant. The suit follows 18 months of failed licensing negotiations, according to news reports.
While cases are proliferating, IP owners have faced setbacks. On June 25, the U.S. District Court for the Northern District of California granted summary judgment to Meta, saying its use of copyrighted books to train a generative AI tool was fair use.
Getty Images brought a case in the U.K. claiming that Stability AI engaged in trademark and copyright infringement by using Getty material to train its AI-driven image generator. Initially touted as a landmark copyright case, Getty’s suit ran aground when the company was unable to secure evidence about Stability’s AI training efforts. In November, a U.K. court largely rejected Getty’s claims.
Regulatory Moves
The results in the Getty case have led to renewed calls for the U.K. government to provide greater regulatory guidance on AI-related intellectual property issues. As one top U.K. litigator told Reuters, the Getty decision “leaves the UK without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials.”
Meanwhile, in the United States, the Patent and Trademark Office recently weighed in with new guidelines aimed at determining when inventors may patent inventions created with the assistance of artificial intelligence. On Nov. 26, the USPTO reasserted previous findings that artificial intelligence systems, no matter how sophisticated, are not inventors or joint inventors on a patent application because they are not humans.
The order rescinded a 2024 Biden administration ruling that, according to Reuters, “relied on a standard normally used to determine when multiple people can qualify as joint inventors.” AI systems may provide services and generate ideas, the USPTO wrote in November, “but they remain tools used by the human inventor who conceived the claimed invention.”
The guidelines could be valuable for inventors seeking or defending patents on inventions created with the help of AI. In 2023, the U.S. Supreme Court declined to hear a case brought by a computer scientist challenging a USPTO decision that refused a patent for an invention created entirely by artificial intelligence. But as Reuters noted, the courts have not yet determined when a person can receive a patent for an invention created merely with AI’s assistance.
Targeting AI Fraud
Artificial intelligence issues are also capturing the attention of regulators at the Federal Trade Commission and the Securities and Exchange Commission. In September, the FTC, in a news release, said it had ordered seven major companies with consumer-facing AI-powered chatbots to provide information about how they measure, test and monitor “potentially negative impacts of this technology on children and teens.”
The agency is also continuing a key Biden-administration policy, “Operation AI Comply,” which targets deceptive AI-related marketing claims. Businesses have been cited for deceptive advertising and for “AI washing,” in which they falsely claim to investors and consumers that they are enhancing or improving their products with artificial intelligence.
“Recent enforcement actions this year…show that the FTC’s scrutiny has continued—and perhaps even intensified,” the law firm Benesch recently wrote in a client bulletin.
The SEC, too, has said it is focusing on AI washing. DLA Piper noted in a May article that the SEC’s Enforcement Division and its Cybersecurity and Emerging Technologies Unit had made rooting out AI washing “an immediate priority.” Holland & Knight’s blog on SEC matters also reported that the commission and the U.S. Department of Justice had filed parallel actions in April against a former CEO accused of making false and misleading statements to investors about his company’s AI technology.
AI companies are facing state regulators as well. A new Colorado law governing privacy and decision-making when AI is used in high-risk scenarios will take effect in 2026. And the California legislature has recently greenlighted several AI-related measures around transparency, chatbots, employment and healthcare.
The Challenge for Firms
Even as they take on a growing docket of litigation and regulatory actions, law firms themselves are grappling with AI implementation.
A review of several recent reports on legal technology trends published by the American Bar Association’s Law Practice Division finds that generative AI is now “no longer theoretical” and a “daily reality” for many law firms. Citing a previous ABA survey, the review noted that most firms expect AI to become an essential component of legal research in the next two years. That’s despite continuing concerns about accuracy.
Firms are also facing questions about whether new digital tools will compromise security and data privacy. Three-quarters of law firms now rely on cloud platforms “for everything from document storage to client collaboration,” the review said, citing surveys by the ABA and Wolters Kluwer. “However, this convenience brings new responsibilities, particularly in terms of data security and regulatory compliance,” the review’s authors said.
The ABA survey found that 60% of firms now have formal cybersecurity policies, and the Wolters Kluwer survey showed that data privacy compliance is now a top concern for firms handling cross-border matters, the review’s authors noted.
With AI litigation involving intellectual property clients almost certain to ramp up in 2026, firms should have their own houses in order where technology is concerned.
According to Debevoise, 2026 is likely to bring “sharper challenges to fair-use defenses tailored to specific training practices; aggressive plaintiff strategies to unlock proprietary training information through discovery; and a new wave of class certification battles as increasingly sophisticated plaintiffs refine their legal theories and AI developers face growing pressure.”
Action Items
What should clients be doing to reduce litigation and regulatory risk, according to lawyers and AI industry experts?
For in-house teams, determining where AI shows up in the business is key. Many companies are creating inventories of AI uses, data sources and tools. These inventories allow legal teams to build playbooks identifying risk, establishing when humans should review AI outputs, and determining how AI tools should be tested, among other tasks.
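As a rough illustration of what such an inventory can capture, here is a minimal sketch in Python of a single entry. Every field name here is hypothetical, offered only to show the kind of information legal teams are tracking, not a reference to any standard or product.

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    """One entry in a hypothetical AI-use inventory (illustrative only)."""
    tool_name: str                 # the AI tool or feature in use
    business_unit: str             # where in the business it shows up
    data_sources: list[str]        # training or input data the tool touches
    contains_personal_data: bool   # flags privacy-review obligations
    human_review_required: bool    # whether outputs need human sign-off
    last_tested: str               # date of most recent accuracy testing
    risk_notes: str = ""           # open issues for the legal team

# Example of an entry a legal team might log:
record = AIUseRecord(
    tool_name="DraftAssist",       # hypothetical contract-drafting tool
    business_unit="Commercial Contracts",
    data_sources=["internal contract archive"],
    contains_personal_data=False,
    human_review_required=True,
    last_tested="2025-11-01",
)
print(record)
```

Even a simple structured record like this gives counsel something concrete to review when a regulator, customer or plaintiff asks how AI is used and governed.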
Tech companies are likely to feel increasing pressure from customers to explain AI training data, IP ownership, privacy compliance and cybersecurity. Reducing litigation and regulatory risk and potential customer backlash may hinge on their ability to provide a clear picture of how they developed their AI platforms—what data they used, how they tested their tools, and how they will respond if something goes wrong.
For content producers, positioning themselves to protect their intellectual property catalogs, enforce their rights, or license their content is a top priority. A number of IP rights holders are rewriting their website terms and conditions, contributor agreements, and distributor contracts to spell out how content may be used for AI training. They are also doing basic blocking and tackling, like registering works, tracking how and where content is distributed, and monitoring their sites for large-scale scraping.
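To make “monitoring for large-scale scraping” concrete, the short Python sketch below tallies requests from publicly documented AI crawler user agents—GPTBot (OpenAI), CCBot (Common Crawl), ClaudeBot (Anthropic) and PerplexityBot (Perplexity)—in a standard web server access log. The log file name and the simple string-matching approach are assumptions for illustration, not a prescribed method.

```python
from collections import Counter

# Publicly documented AI crawler user-agent names (a partial, illustrative list).
AI_CRAWLERS = ["GPTBot", "CCBot", "ClaudeBot", "PerplexityBot"]

def count_ai_crawler_hits(log_path: str) -> Counter:
    """Tally requests per AI crawler found in a web server access log.

    Assumes the user-agent string appears verbatim in each log line, as it
    does in common/combined log formats; adapt the matching to your setup.
    """
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            for bot in AI_CRAWLERS:
                if bot in line:
                    hits[bot] += 1
    return hits

if __name__ == "__main__":
    counts = count_ai_crawler_hits("access.log")  # hypothetical log file name
    for bot, n in counts.most_common():
        print(f"{bot}: {n:,} requests")
```

Many publishers pair this kind of monitoring with robots.txt directives naming the same crawlers, though compliance with robots.txt is voluntary—one reason contractual terms and enforcement remain part of the toolkit.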
The companies that do this groundwork now—mapping out how they use artificial intelligence, tightening up contracts, documenting decisions—will be in a far stronger position as the next wave of AI-driven disputes arrives.
---
David L. Brown is a legal affairs writer and consultant who has served as head of editorial at ALM Media, editor-in-chief of The National Law Journal and Legal Times, and executive editor of The American Lawyer. He consults on thought leadership strategy, creates in-depth content for legal industry clients and works closely with Best Law Firms as senior content consultant.