Rules of Professional Misconduct

“The Mistake Will Not Recur [Until Two Sentences From Now]”

[Image: a fail stamp. Caption: “What you did.”]

Every day this week so far, Law360 has reported on a new AI-related sanctions order, and I’m sure these are just the ones Law360 made space for. Pretty soon they’ll need a separate newsletter just for that topic (and if they do, I will subscribe to it). I too do not have space (well, time) to write about all of these, so a story now needs some distinguishing characteristic to qualify. “Dummy relies on generative-AI output” is no longer sufficiently newsworthy.

Today’s sufficiently-newsworthy example involves a decision by the Supreme Court of Alabama, which not only sanctioned the AI-using attorney but dismissed his clients’ appeal because it found the briefs were “grossly deficient.” Ibach v. Stewart, No. SC-2025-0106 (Ala. Apr. 24, 2026). The facts of the case are unimportant to us, until we reach Section II, entitled “[Counsel’s] Extensive Use of Nonexistent or Misquoted Authorities.” How extensive? “Astoundingly” so, it turns out.

This was first identified, as usual, by opposing counsel. See, in the U.S. we have an “adversary” system, meaning one is normally opposed by an adversary who might do adversarial things, like actually look for the cases you cited and tell on you if you made them up. I’m sure all of you know that, but it’s starting to seem like a lot of people don’t. Anyway, that’s what happened here. In fact, according to the opinion, the response brief spent twelve pages talking about how many of the moving party’s cases didn’t exist. That seems like too many, but it’s hard to fault those lawyers for beating up on someone who fabricated an “astounding number” of authorities.

And that was just in the opening brief. In the reply—which, remember, was supposed to be replying to a brief that spent twelve pages talking about how many of the moving party’s cases didn’t exist—the offending attorney addressed the problems only in a single footnote. Here’s the footnote:

[Plaintiffs] acknowledge and regret that their opening brief misquoted two secondary sources …. The error arose from counsel’s first use of an AI research tool that summarized commentary not readily available through standard legal databases. The tool misattributed quotes. Counsel accepts full responsibility for relying on those summaries without independently verifying the original texts. The mistake will not recur. But the underlying legal principle … is both correct and well-settled. See Ex parte Seabol, 782 So. 2d 212, 216-17 (Ala. 2000); Ex parte United Serv. Auto. Ass’n, 78 So. 3d 979, 983-84 (Ala. 2011); Hughes v. Glover, 157 So. 2d 299, 302 (Ala. 1963); Franciscan Sisters Health Care Corp. v. Dean, 448 N.E.2d 872, 876 (Ill. 1983); Restatement (Second) of Trusts § 219(1) & cmt. a.

Have you guessed the problem? Here’s a hint: look at the promise near the end, and then at the citations that immediately follow it. Correct! Counsel’s promise, “the mistake will not recur,” was fulfilled for exactly one sentence.

I could also point out (the court certainly did) that the problem went far beyond “two secondary sources,” and that the rest of the brief was likewise full of bogus citations. In fact, the court devoted the next fourteen pages of its opinion exclusively to those bogus citations.

At the show-cause hearing, the attorney finally admitted what he had done—or more accurately, not done—and showed remorse, and a week later he wrote the other side a check for $17,200. The court still alerted the state bar, and denied the plaintiffs’ motion to file supplemental briefs that had real legal authorities in them. Appeal dismissed.

Two justices dissented in part, mainly from the decision to dismiss the appeal, which punishes the clients for something that was the lawyer’s responsibility. (He will be hearing from their new lawyer soon.) But they all agreed that they’re just about fed up with this nonsense.

This is not to say, as even the majority pointed out, that it is necessarily wrong or unethical to use generative AI for … whatever you’re using it for. But it is both wrong and unethical to rely on its output without confirming the results yourself. It is designed to generate things that look like humans wrote them, not necessarily to generate right answers. That’s the human’s job.

Similarly, an AI could probably generate what looks like a check for $17,200, but for now it’s still the human’s job to pay the money. I would keep that in mind too.