The Structural Barriers to AI Lawyers
Why AI Hasn’t Transformed Law (Yet)
Law was supposed to be easy for AI.
The profession runs on documents. Contracts, briefs, motions, discovery requests, regulatory filings. Every billable hour leaves a paper trail. And unlike medicine, where AI must contend with the complexity of biological systems, or finance, where microsecond arbitrage advantages disappear instantly, legal work operates on human timescales with human language. A contract dispute from 1982 reads much like one from 2024.
The pitch writes itself: AI systems that draft documents in seconds, review discovery in minutes, and catch errors that bleary-eyed associates miss at 2 AM. The technology exists. Westlaw’s Deep Research promises comprehensive legal research in under ten minutes. Clio’s Vincent AI will hand-craft you a personalized article from a treatise. Harvey.AI, trained on elite law firm work product, offers an agentic attorney assistant to BigLaw.
And yet.
Recent surveys report impressive AI adoption numbers in law, with up to 79% of attorneys claiming to use artificial intelligence at their firms. But these figures measure exposure, not integration. Having Copilot enabled or using the AI features baked into existing tools like Relativity counts as “adoption” in survey responses, even when actual workflows remain unchanged. The attorneys I speak with at conferences and Continuing Legal Education events across the country tell a different story: most firms have experimented with AI, few have transformed how they practice. The modal American lawyer in 2026 still works on a desktop computer, still pays for Westlaw or Lexis, and still approaches AI with the same wariness they brought to the cloud a decade ago.
Structural barriers make legal practice resistant to technological diffusion in ways that other industries don’t face. Understanding these barriers matters because law is where AI meets civic infrastructure. Courts, contracts, regulations, and rights flow through lawyers. If AI can’t diffuse through law, its broader social impact will remain constrained.
The Data Moat
Legal AI has a data problem that most industries don’t face, and it has two layers.
The first layer is raw legal data. To build useful AI for legal research, you need comprehensive databases of case law, statutes, regulations, and secondary materials. Only three entities in the United States have anything approaching complete coverage: Westlaw (Thomson Reuters), Lexis (RELX), and vLex/Fastcase, which Clio acquired in a $1 billion deal in November 2025. That deal pulled the third meaningful legal research database under a company focused on small and mid-size firm practice management, and Clio’s $5 billion valuation and $500 million Series G round suggest investors see the strategic value of owning one of only three complete legal datasets in the country. Everyone else either licenses from one of these three or works with incomplete data.
The second layer is what makes those databases worth paying for. Westlaw and Lexis don’t sell raw judicial opinions (much of that is publicly available). They sell the editorial infrastructure built on top: headnote taxonomies that organize millions of opinions into searchable categories, practice guides written by specialists over decades, and treatises that synthesize primary law into usable guidance. A California real estate attorney without access to Miller and Starr would be at a serious disadvantage, not because the underlying case law is hidden, but because navigating it without an expert-curated roadmap takes far longer. Imagine being handed an encyclopedia to learn something versus a beautifully curated twenty-page guide from a panel of practitioners who have been through the procedure a thousand times. That’s the difference: substantive knowledge plus procedural shorthand, built up over years of practice in a single area of law.
The litigation around database access shows how fiercely incumbents defend this moat. Thomson Reuters sued Ross Intelligence not over case law itself, but over Westlaw’s headnote taxonomy, the editorial layer that organizes and summarizes judicial opinions. In February 2025, the court sided with Thomson Reuters, rejecting Ross’s fair use defense. The message: even if the underlying legal materials are free, the value-added structure built on top of them is proprietary and protected. Open-source alternatives like SALI have emerged in response, offering a vendor-neutral taxonomy that AI developers can use without licensing risk.
Cracks in the Moat
The data moat is real, but it may be more porous than it appears.
The Free Law Project’s CourtListener provides free access to millions of federal and state court opinions, oral arguments, and PACER documents. State-level open data initiatives, like Oklahoma’s, have made primary legal materials freely accessible. Harvard’s Caselaw Access Project digitized every official state and federal case through 2020. All U.S. state bar associations now provide members with free access to either vLex Fastcase or Decisis, which, for a solo practitioner handling state court matters, might be enough.
The editorial layer that was essential for human researchers may matter less for AI systems. vLex’s Vincent AI demonstrates a different approach: using AI to generate the synthesis layer rather than paying human experts to write it. Damien Riehl (Clio’s Tech Evangelist, perhaps best known for his viral TED Talk on music and copyright) calls this a “Me-Tise,” a personalized knowledge base rather than the traditional legal treatise. If AI can create practice guide-quality analysis from primary sources, the competitive advantage of having the best human-written treatises diminishes. The moat doesn’t disappear, but it gets shallower.
And there’s a whole category of legal technology that has no data moat at all. Legal research incumbents sit behind proprietary datasets. But a huge swath of legal tech (eDiscovery platforms, case management, billing, client intake, compliance, marketing, document automation) consists of traditional SaaS offerings where the value proposition is software engineering and workflow, not proprietary data.
When Anthropic launched legal skills as open-source plugins for its Claude Cowork platform on February 2, 2026, the market reaction was immediate and brutal: Thomson Reuters dropped nearly 16% in a single day (its worst on record), LegalZoom fell almost 20%, RELX lost 14%, and Wolters Kluwer shed 13%, tens of billions of dollars in market value erased overnight. The damage was concentrated in the SaaS-heavy segments. That’s why the vendor halls at Legalweek and Techshow are packed with AI startups attempting new, innovative ways to integrate AI into traditional workflows.
These segments face a threat that most legal tech vendors didn’t anticipate: frontier AI labs are no longer content to serve as infrastructure underneath vertical software. They’re increasingly building application-specific capabilities to directly serve users. Anthropic’s Claude Cowork, OpenAI’s Codex, and Perplexity’s computer-use agents automate entire desktop workflows, not just individual legal tasks. They draft documents, manage calendars, send emails, organize files, and handle billing without any legal-specific software in the stack. When the AI operates at the OS layer, the SaaS application sitting on top of it starts to look redundant.
Could frontier AI labs eventually purchase one of the Big Three legal datasets? They have the capital. The reason it hasn’t happened yet is simpler than the moat theory suggests: Thomson Reuters has a market cap of around $75 billion and RELX sits around $85 billion, price tags that sound enormous until you compare them to the markets the frontier labs are already chasing in healthcare, finance, enterprise software, and consumer products. Legal data is a rounding error on their strategic roadmaps. For now.
The Messy Middle: Why Firms Can’t Execute
The barriers inside most law firms are organizational, not technical. Even firms that want AI can’t deploy it because their data is a mess and their governance structures punish change.
I learned this firsthand while consulting for a mid-sized firm eager to modernize. When I asked where their data was stored, the answer came in pieces: some was in iManage, some on SharePoint/OneDrive, some on an old local server (“the S:\ Drive”), and some was still paper. Before any AI system could leverage the firm’s accumulated wisdom, someone would need to locate, digitize, organize, and normalize years of fragmented work product scattered across incompatible systems and storage media. Even better if you could get it into a data lake and attach meaningful metadata.
This is a common story. Firms have spent years accumulating data in whatever system was convenient at the time, with no eye toward future interoperability. Even firms that have invested in document management systems find that adoption has been inconsistent: partners maintain personal filing systems, assistants save documents in non-standard locations, naming conventions are useless, and metadata is an afterthought. In niche practice areas where institutional knowledge is everything, a firm that has handled hundreds of similar transactions holds a substantial advantage, but only if that knowledge can be retrieved, synthesized, and deployed. Most firms can’t do that yet.
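To make “locate, digitize, organize, and normalize” concrete, here is a minimal sketch of the kind of first-pass consolidation such a firm would need: walk each fragmented store, attach uniform metadata, and collapse duplicate copies into a single index. Every path, field name, and the matter-number pattern below is hypothetical; real document management systems expose richer metadata through their own APIs.

```python
import hashlib
import json
import re
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical roots standing in for iManage exports, SharePoint
# syncs, and the old "S:\ Drive" -- every firm's list looks different.
SOURCE_ROOTS = [Path("exports/imanage"), Path("exports/sharepoint"), Path("exports/s_drive")]

# Invented matter-number convention, e.g. "12345-001" buried in a filename.
MATTER_RE = re.compile(r"(\d{4,6})[-_](\d{2,4})")

def normalize(root: Path) -> list[dict]:
    """Walk one store and emit uniform metadata records."""
    records = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        match = MATTER_RE.search(path.name)
        records.append({
            "source": root.name,
            "path": str(path),
            "sha256": digest,  # lets us drop duplicate copies later
            "matter_id": match.group(0) if match else None,
            "ext": path.suffix.lower(),
            "modified": datetime.fromtimestamp(
                path.stat().st_mtime, tz=timezone.utc
            ).isoformat(),
        })
    return records

def build_index(roots: list[Path]) -> list[dict]:
    """Merge all stores, keeping one record per unique document."""
    seen: set[str] = set()
    index = []
    for root in roots:
        for rec in normalize(root):
            if rec["sha256"] in seen:
                continue  # same document saved in two systems
            seen.add(rec["sha256"])
            index.append(rec)
    return index

if __name__ == "__main__":
    print(json.dumps(build_index([r for r in SOURCE_ROOTS if r.exists()]), indent=2))
```

Even this toy version surfaces the real work: agreeing on a matter-numbering convention, deciding which metadata fields matter, and handling the thousands of documents that match nothing.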
The governance problem compounds the data problem. A mid-sized, 30-person law firm has 10 to 15 partners, each with an equity stake and a vote on firm decisions. Unlike a corporation, where a CIO can mandate new tools across an organization, law firms operate as partnerships where every senior lawyer has veto power over changes that affect their practice. A partner who doesn’t want to learn new software can refuse, and the firm’s management has limited ability to force compliance. Technology decisions devolve to the lowest common denominator. The partner who complains loudest about change gets to block it. Enterprise legal technology vendors reinforce this pattern by focusing sales on large firms, leaving smaller firms with self-service products and no implementation support.
Small and mid-size firms face this most acutely because they lack dedicated technology leadership. A 200-lawyer firm might have a CIO with genuine authority. A 20-lawyer firm has a “technology partner” whose actual job is practicing law, with IT responsibilities layered on top. That partner’s time for evaluating AI tools competes with billable work, client development, and everything else. My hobby-horses, AI-literacy and AI-competency, take a backseat to hitting the requisite 1,900 annual billable hours.
After presenting at a Midwest state bar annual conference, I was approached by a young, AI-forward attorney who had been serving on his (fairly large) firm’s internal AI committee for a year. The committee’s senior partners had repeatedly deferred any decision on AI adoption, citing risk. Meanwhile, he’d been teaching himself Python and building tools in Cursor on his own time because he wanted to develop custom solutions for the firm. They had no appetite for any of it. He asked me what he should do. I told him he needed to leave. I gave him my slides from the presentation, told him to make them his own, and advised him to find local firms where he had connections and pitch himself as their in-house AI attorney. I told him to mark up his salary and spend his extra time evaluating the AI legal tools on the market while continuing to develop his own. He pitched the position to three different firms and got three offers. He’s now the in-house AI expert at a rising firm that is going all-in on AI.
This pattern points toward increased mid-tier competition. The firms that adopt early gain a structural advantage in both efficiency and talent acquisition. Meanwhile, those that defer will watch their best associates walk out the door. And new firm structures are accelerating the shift. Arizona has pioneered Alternative Business Structure (“ABS”) programs that allow nonlawyer ownership of legal practices, opening the door for technology companies to co-own and operate law firms.
The results are already visible: Eudia Counsel, a Palo Alto-based AI startup with $105 million in funding, launched the first AI-augmented law firm under Arizona’s ABS program, embedding AI directly into M&A and contracting workflows for Fortune 500 clients. Virgil, co-founded by Answer.AI’s Jeremy Howard, Eric Ries (The Lean Startup), and start-up attorney Luke Versweyveld, has created a law firm where developers and attorneys work hand-in-glove on a daily basis to automate the practice of law. Plymouth Street has carved out tech immigration as a near-fully automated practice area. And in the UK, Garfield.Law became the first firm authorized by the Solicitors Regulation Authority to deliver regulated legal services entirely through AI, handling small claims debt recovery starting at £2 per letter.
There is serious innovation at the margins, and 2026 does not look like 2024. In 2024, we saw a lot of firms talking about being “AI Native” and “leveraging AI,” but in 2026, we see firms truly innovating, whether it’s taking advantage of novel regulatory structures or rethinking entire workflows from the ground up with AI.
The Efficiency Paradox
When AltFee won the ABA TECHSHOW 2024 Startup Alley competition, company representative Scott Leigh offered a simple proposition: our product will help you divorce from the billable hour.
I’ve attended this competition for years, and this moment stuck out. The Startup Alley pitch competition is a live vote: attorneys in the room choose the winner in real time. No panel of judges, no curated selection committee. Practicing lawyers from all walks of life, voting with their phones. And they chose a tool designed to help them escape hourly billing.
The billable hour creates a misalignment between AI efficiency and law firm economics, but the misalignment is subtler than the usual narrative suggests. Every hour of associate time that AI eliminates is an hour that can’t be billed. The most routine, automatable legal work is also the most lucrative on a per-hour basis, as clients pay associate rates for tasks that don’t require partner judgment.
I was presenting at a State Bar conference, showing examples of AI services that can review tens of thousands of documents in minutes with a lower error rate than human attorneys. A senior attorney in the audience interrupted: “Why the hell would I want to do that?!” The question highlights the strain AI puts on traditional billing models. A majority of the industry still operates in a model where armies of junior associates grind through tasks that are ripe for automation.
But the efficiency narrative gets complicated by risk. When an AI misses a privileged document that gets produced in discovery, someone is responsible for that failure. When AI-generated analysis contains a subtle error that shapes litigation strategy, someone bears the consequences. An attorney can’t tell the General Counsel at a Fortune 500 company that a bot reviewed their M&A documents and expect that explanation to suffice if something goes wrong. The “human in the loop” is how lawyers sleep at night and how they retain the trust of their clients.
The billable hour is already under significant pressure from sources that have nothing to do with AI. Clients demand alternative fee arrangements, caps, fixed fees, and efficiency reporting. Realization rates have declined as clients push back on invoices. According to Clio’s 2025 Legal Trends Report, 59% of firms now offer flat fees exclusively or alongside hourly rates. The question is who captures the efficiency gains from AI: the firm, the client, or both? That uncertainty creates investment hesitation when combined with genuine concerns about quality and liability.
Some firms have stopped waiting for the answer. Whitney Harper and Gwen Griggs founded ADVOS Legal on a simple premise: stop measuring hours and start measuring value. Their subscription model rewards efficiency instead of penalizing it, and through ADVOS Pro they train other firms to do the same. Hello Divorce, founded by family law attorney Erin Levine, runs the same play at the consumer end. Levine sells DIY divorce starting at $99 and full attorney assistance averaging around $2,000, against a national average of $26,000 for a contested matter. Both firms make AI efficiency the product.
Individual attorneys run a smaller version of the same model. A bankruptcy lawyer offers a flat $1,500 package through the 341 meeting of creditors. AI shifts the attorney’s role from drafting and research to supervising micro-workflows and reviewing outputs, so an attorney who once handled 15 cases a month can manage 40 at the same fee. The bottleneck was never legal judgment. It was the document preparation around it. Productize the workflow and the efficiency gains land on the firm’s side of the ledger, not the client’s.
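The arithmetic behind that caseload shift is simple enough to spell out (a sketch using the hypothetical figures above, not market data):

```python
FLAT_FEE = 1_500                     # hypothetical flat package price per case
CASES_BEFORE, CASES_AFTER = 15, 40   # monthly caseload before/after automating document prep

revenue_before = FLAT_FEE * CASES_BEFORE   # 22,500 per month
revenue_after = FLAT_FEE * CASES_AFTER     # 60,000 per month
print(revenue_after - revenue_before)      # prints 37500: the gain stays with the firm
```

Under hourly billing, the same automation would have shrunk the invoice instead.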
This model works for productizable offerings where discrete workflows map to flat fees. Large firms handling complex, bespoke matters (cross-border M&A, multi-district litigation, regulatory investigations) face a harder path to capturing AI efficiency gains. The work doesn’t break into neat packages. But for the vast middle of the legal market, value-based billing turns AI from a threat into an asset. Every process you automate improves your margins instead of creating awkward conversations about billing.
Risk, Trust, and the Supervision Gap
Lawyers have also learned from experience to be risk-averse to new technologies.
From 2013 to 2018, Google faced litigation over scanning Gmail content for advertising profiles, including emails sent to law firms’ clients who used consumer Gmail accounts. The 2018 settlement in Matera v. Google Inc. included injunctive relief requiring Google to stop scanning email contents for advertising purposes. For lawyers, these incidents confirmed suspicions about cloud services and confidential client communications. Many firms still refuse to use cloud technologies entirely, keeping all data on local servers. The risk aversion has a rational basis: a data breach at a high-profile firm could destroy the value of multiple client companies simultaneously.
This history helps explain why law remains one of the most Microsoft-entrenched industries. Firms migrated to Microsoft’s enterprise cloud partly because Microsoft offered clearer contractual protections around data handling, partly because Microsoft’s enterprise sales force understood compliance concerns, and partly because switching costs made staying with familiar tools the path of least resistance.
Microsoft’s Copilot was supposed to be the AI bridge that met lawyers where they worked. After a year of testing across multiple law firms, consultants at Clear Guidance Partners summarized Copilot as “minimally usable for legal work.” Most firms renewed pilot licenses but declined to expand beyond initial test groups. I was on the pilot for implementing MS Teams at a flagship state university, and the experience was nearly identical: they turned on Copilot and told us to kick its tires. I immediately tried to red-team it with prompts for data I knew I shouldn’t have access to, and I found a ton of it. When I reported this back to the pilot group, the IT department screwed down the permissions until Copilot was lobotomized and useless. The root problem was that it relied on individual users setting their SharePoint permissions carefully across the entire MS tenant, something that is very hard to police in a giant organization with decades of data and employee churn. The university elected not to move forward with purchasing. Microsoft had only 8 million active Copilot users across 440 million M365 subscribers as of August 2025, a 1.8% conversion rate.
Bigger firms purchase enterprise or white-listed versions of frontier models from OpenAI, Anthropic, or Google, hosted on familiar AWS or Azure infrastructure with security policies their IT departments can stomach. But in doing so, they strip away the application layer that makes these models useful to non-technical attorneys: the agentic tooling, workflow integrations, and features like Claude’s Cowork or OpenAI’s Agent Mode that change the 2024 chatbot interface into a true digital assistant. Firms pay premium prices for hobbled versions of tools that are far more capable in their consumer-facing form.
When the world’s most entrenched enterprise software vendor struggles to sell AI tools to its existing customer base, lawyers’ skepticism looks less like technophobia and more like rational caution.
That caution hardened after June 2023, when lawyers Peter LoDuca and Steven Schwartz of Levidow, Levidow & Oberman faced sanctions in the Southern District of New York for submitting a brief citing cases fabricated by ChatGPT. Mata v. Avianca became the legal profession’s cautionary tale about AI hallucination. The $5,000 fine was modest. The reputational damage was severe. Databases now collect these incidents (one maintained by Damien Charlotin now catalogs more than 1,200), and they are paraded in front of attorneys at their ethics CLEs as a warning. The broader impact on AI adoption was chilling.
But the nuance got lost. Legal media covered Mata v. Avianca as a story about AI gone wrong, not as a story about lawyers who didn’t do their jobs. The profession absorbed the lesson that AI could get you sanctioned, creating an incentive structure where risk-averse lawyers (which is most lawyers) decided the safer path was avoidance. The stakes in law are different from those in other professions. If you’re in marketing and AI makes a mistake on a social media post, you ask AI to generate ten more and pick the best. If a lawyer makes a mistake, people can lose their house, end up in jail, or lose custody of their children.
The hallucination problem has become a self-fulfilling prophecy. Any reported inaccuracy, regardless of context, reinforces the perception that AI can’t be trusted for legal work. Most of the attorneys getting sanctioned for fabricated citations are using free, consumer-grade models from frontier labs for legal research, which is the wrong tool for the job. The Big Three legal research platforms (Westlaw, Lexis, and Clio’s vLex) have addressed the citation hallucination problem by hyperlinking AI-generated references to primary sources, allowing lawyers to verify that a case exists before relying on it. But solving the “fake case” problem doesn’t solve the deeper concern: that AI might misrepresent a holding, overstate the strength of a legal theory, or miss a distinguishing fact that changes everything. For a profession where nuance is the product, “probably right” isn’t good enough. Frontier labs operate on a release-fast-fix-later cycle, shipping models at breakneck speed and resolving issues as they surface. Lawyers waiting for a hallucination-free model are going to be waiting a very long time by Silicon Valley standards.
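The hyperlinking approach the platforms use reduces, at bottom, to a lookup rule: never surface a citation that cannot be resolved against a primary-source database. A toy sketch of that guardrail follows; the case table, citation formats, and function here are invented for illustration and are not any vendor’s implementation.

```python
import re

# Toy stand-in for a primary-source database; real platforms resolve
# against their full case-law corpus (this entry is invented).
KNOWN_CASES = {
    "575 U.S. 320": "Hypothetical v. Example",
}

# Matches a few common reporter formats, e.g. "575 U.S. 320" or "999 F.3d 123".
CITE_RE = re.compile(r"\b(\d{1,3})\s+(U\.S\.|F\.3d|F\. Supp\. 3d)\s+(\d{1,4})\b")

def verify_citations(draft: str) -> tuple[list[str], list[str]]:
    """Split the citations in a draft into resolvable and unresolvable."""
    resolved, unresolved = [], []
    for m in CITE_RE.finditer(draft):
        cite = " ".join(m.groups())
        (resolved if cite in KNOWN_CASES else unresolved).append(cite)
    return resolved, unresolved
```

The policy, not the regex, is the point: anything that fails to resolve gets flagged for human review instead of shipped. That catches fabricated cases; it does nothing about a real case cited for the wrong holding, which is why the deeper concern survives.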
And now a new problem is emerging. At Legalweek 2026, Danielle Benecke (Founder and Head of Applied AI at Baker McKenzie) presented a framework she called the “supervision gap.” As AI moves from an assistive tool to a primary work producer, the traditional model of attorney supervision (required by every state bar in the US) breaks. Full human review of AI output becomes economically irrational at scale. Our ethical rules say to treat AI as a junior associate, review everything it produces, and take responsibility for the output. But when an agentic system handles an entire workflow (all of your eDiscovery, drafting, or research – think OpenClaw for law), the attorney faces an impossible choice. You can rely on the vendor, or review all the work yourself, which is redundant and expensive. Or you can trust the system without a full review, which puts your license on the line in new and riskier ways beyond simply trusting an eDiscovery platform to tag documents.
Benecke called this the “outcome economy”: a world where liability pressure begins to shift from lawyers to AI vendors. If the lawyer is not meaningfully in the loop, the system must decide who owns the risk. That question gets more urgent if malpractice insurers or clients start demanding AI use. If your malpractice insurer won’t cover you unless you use AI-assisted review, or your client won’t hire you without it, the risk calculus flips. The profession needs new risk-sharing models between attorneys, vendors, and clients. We don’t have them yet.
The Access Question
Everything discussed so far explains why AI diffusion in law has been slow. None of it explains why slow AI diffusion in law is everyone’s problem.
According to the Legal Services Corporation’s 2022 report, 86% of civil legal problems faced by low-income Americans received inadequate or no legal help. Stanford research found that at least one party lacks legal representation in 75% of civil cases, roughly 15 million cases per year.
The average retainer for private legal representation runs $2,000 to $10,000. For a household living paycheck to paycheck, that might as well be $2 million. During our unit on access to justice, my favorite move is to ask students how many of them have $10,000 in their checking account for a retainer. No hands go up, and I get to remind them that they, future lawyers, do not currently have access to an attorney. These cases aren’t marginal disputes about trivial matters. They’re evictions, custody battles, debt collection, disability claims, and domestic violence protective orders. The outcomes shape people’s lives.
AI could address this gap. The technology to automate intake, draft basic pleadings, and guide self-represented litigants through procedural requirements exists today. Hello Divorce demonstrates that productized legal services can work. Deployed at scale, AI-assisted legal services could reach millions of people navigating courts alone.
But AI isn’t being deployed at scale in legal aid. When I volunteered at the Legal Aid Society in San Diego, five to seven interns shared two computers. The same barriers that slow BigLaw adoption (data moats, trust concerns, hallucination risks) apply with even more force to organizations serving vulnerable populations. A hallucinated case citation in a corporate lawsuit is embarrassing. A hallucinated case citation in a pro se eviction defense could cost someone their home. The legal profession’s AI investment flows toward the 14% of the population who can afford lawyers, not the 86% who can’t. Legal aid can’t afford the Westlaw and Lexis enterprise licenses with all the bells and whistles an attorney at Kirkland enjoys, and they’re not paying an extra $200 to $500 per month, per attorney, for advanced AI features.
Despite this underserved market, the legal profession has always been protectionist about who can practice law. LegalZoom has been sued in California for the Unauthorized Practice of Law for making forms that non-lawyers could use to represent themselves in court. Only in recent years have we seen a growing Access to Justice movement in places like Arizona and Utah, which have pioneered reforms allowing nonlawyer ownership and paraprofessional practice. California’s 2024 Justice Gap Study found that the situation has worsened since 2019. The gap between legal need and legal help isn’t closing.
• • •
Two open questions define the next phase of AI in law.
The first is the supervision question. As AI becomes capable of producing entire work products, the profession that has spent decades treating “I reviewed it myself” as the standard of care has no framework for what happens when that review becomes economically irrational. The ethical rules assume a human at the center. The technology is moving humans to the periphery. Someone has to reconcile those two facts, and the answer will reshape how legal services are delivered, priced, and insured.
The second is the access question. Eighty-six percent of low-income Americans with civil legal problems don’t get meaningful help. AI-powered legal services could reach millions of them, and there is a ton of low-hanging fruit in helping regular people. But the same structural barriers described in this article (data moats, trust deficits, governance paralysis, liability uncertainty) sit between the technology and the people who need it most. And the legal profession’s protectionist instincts, while sometimes well-intentioned, keep the drawbridge up.
The profession that runs on documents still can’t agree on who’s responsible when the documents write themselves. These are not questions of technology, but of liability, organization, and capital. Answering these questions can help ensure AI democratizes access to legal help for the people who need it the most.
Sean A. Harrington is Director of the AI & Legal Tech Studio at Arizona State University College of Law, where he teaches AI and the Practice of Law and researches the diffusion of artificial intelligence in legal services.