LA Courts Deploy AI 'Learned Hand': Finally, a Judge That Doesn't Need Coffee Breaks
The Los Angeles Superior Court is running a pilot where a select group of judges gets their own purpose-built AI sidekick, cheekily named Learned Hand. This digital clerk chews through case filings, highlights crucial facts, organizes evidence, and even drafts preliminary rulings for civil matters, freeing up the human brains for the actual thinking part—you know, the job.
Founder and CEO Shlomo Klapper explained to Decrypt that the court system is buckling under pressure as dockets explode. A report from Fisher Phillips in February 2026 showed filings jumping from 4,100 to 6,400 in a single year, a roughly 56% spike that makes litigation more expensive than a blue-chip NFT at peak mania. Klapper says the AI's sole purpose is to automate the "drudge work," insisting it won't touch a judge's final say; the discretion is still human, for now.
Learned Hand, founded in 2024 and taking its name from a legendary federal judge, operates on a tightly controlled diet of legal texts instead of the wild buffet of the open internet, an architecture choice intended to stop it from making things up like an overconfident degen on a Discord call. The system dissects legal tasks into micro-jobs, assigns each to a specialized model, and serves it all up in a simple point-and-click interface, no fancy prompt engineering required, much to the relief of any non-crypto native on the bench.
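For the curious, here is a minimal Python sketch of that micro-task pattern under stated assumptions: a toy closed corpus and a dispatch table of specialized handlers. Every name here (CLOSED_CORPUS, TASK_HANDLERS, the handler functions) is hypothetical and stands in for whatever Learned Hand actually runs.

```python
# Hypothetical sketch, not Learned Hand's actual code: each narrow legal
# task is dispatched to its own specialized handler, and every handler
# reads only from a fixed, vetted corpus rather than the open internet.

CLOSED_CORPUS = {
    "doc-001": "Plaintiff alleges breach of contract dated 2024-03-01...",
    "doc-002": "Defendant's answer denies all material allegations...",
}

def summarize_filing(doc_id: str) -> str:
    """Summarize one filing using only vetted corpus text (placeholder logic)."""
    text = CLOSED_CORPUS[doc_id]  # never fetches anything outside the corpus
    return text[:60] + "..."

def extract_key_facts(doc_id: str) -> list[str]:
    """Pull candidate fact snippets from a filing (placeholder logic)."""
    return [s.strip() for s in CLOSED_CORPUS[doc_id].split("...") if s.strip()]

# One specialized handler per micro-job, selected by task type.
TASK_HANDLERS = {
    "summarize": summarize_filing,
    "extract_facts": extract_key_facts,
}

def run_task(task_type: str, doc_id: str):
    """Route a micro-task to its dedicated handler."""
    return TASK_HANDLERS[task_type](doc_id)

if __name__ == "__main__":
    print(run_task("summarize", "doc-001"))
    print(run_task("extract_facts", "doc-002"))
```

The appeal of the dispatch-table design is that each narrow handler can be audited and constrained independently, which is easier to trust than one giant do-everything prompt.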
Presiding Judge Sergio C. Tapia II was quick to clarify this is purely an efficiency test, not a hostile takeover threatening "the sanctity, independence and impartiality of judicial decision‑making." Consider it a very advanced, unpaid intern that doesn't sleep.
This pilot lands right in the middle of the wider legal world's messy brawl with AI. Over in San Francisco, U.S. District Judge Maxine Chesney hit the pause button, issuing a preliminary injunction to stop Perplexity AI’s Comet browser from going on a shopping spree on Amazon, leaving the question of AI-driven consumerism in legal limbo.
Recent courtroom AI blunders perfectly illustrate the verification nightmare Klapper is trying to avoid. In 2023, the defense team for Fugees’ Pras Michel had to admit their AI-crafted closing argument was packed with nonsense claims. That same year, a federal judge made Michael Cohen's lawyers physically print out case citations because the court couldn't trust the digital versions. Even Colombia’s Supreme Court rejected an AI-generated appeal, in a beautifully meta twist where the tool used to detect the AI authorship confessed it had used generative AI to help write its own report.
Klapper points out that the real expense in his model isn't in the generation, but in the verification: “Anyone can generate something, but how do you make sure it’s reliable?” It's the classic crypto problem: easy to fork a chain, hard to build something people actually trust.
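The article doesn't detail Learned Hand's safeguards, but one cheap verification pass looks roughly like the sketch below: reject any draft whose citations don't resolve against a vetted index. Everything here (KNOWN_CITATIONS, the U.S. Reports regex, verify_draft) is an illustrative assumption, not the product's actual method.

```python
# Illustrative citation check: flag any cited case that doesn't exist in a
# trusted index, the failure mode behind the Michel and Cohen fiascos above.
import re

KNOWN_CITATIONS = {
    "410 U.S. 113",   # tiny stand-in for a real reporter database
    "347 U.S. 483",
}

# Matches simple "volume U.S. page" citations, e.g. "347 U.S. 483".
CITATION_RE = re.compile(r"\b\d{1,4}\s+U\.S\.\s+\d{1,4}\b")

def verify_draft(draft: str) -> list[str]:
    """Return every cited case that fails to resolve in the vetted index."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CITATIONS]

draft = "Compare 347 U.S. 483 with the invented 999 U.S. 999."
bad = verify_draft(draft)
print("hallucinated citations:", bad or "none")  # -> ['999 U.S. 999']
```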
In other tales from the AI-legal wild west, a North Carolina man named Michael Smith copped a guilty plea for a wire-fraud conspiracy that used AI and bot armies to drain over $8 million in music-streaming royalties. His sentence could be up to five years in the big house, plus he has to give back all the ill-gotten gains—a rug pull with real-world consequences.
On the pure tech front, OpenAI is doing some major consolidation, mashing ChatGPT, Codex, and its Atlas browser into one "superapp," according to chief of applications Fidji Simo.