
Workware

April 2026

When you hire a law firm, what are you actually paying for?

Not really their time, though that's how they bill you. You're paying for the work to be done and done right. You're paying for the outcome.

But almost nobody sells it that way. You buy a person's time and attention, whether that's billed hourly, as a retainer, or as commission, and you hope it adds up to something useful. The customer wants a result; the industry sells capacity. That gap has been there forever because there was no alternative: the work required humans, and humans don't scale.

I think AI is closing that gap. But most people building AI right now are aiming at the wrong side of it. They're building better tools for professionals. The bigger opportunity is to do the work itself.

What does a $400/hour lawyer actually do all day? Mostly, not $400/hour work. They draft documents, research precedent, coordinate with opposing counsel, chase down signatures, fill out forms. Weeks of this for every hour of the thing you're actually paying $400/hour for: the judgment call that saves you from a terrible contract, or the instinct that a deal is about to fall apart. The expensive part is judgment. Everything else is process.

[Chart: process work vs. judgment. Where the time actually goes.]

This is why services don't scale. Not because judgment is scarce (there are plenty of good lawyers), but because every hour of judgment comes wrapped in ten hours of process, and the process has always required humans too. A lawyer who did only judgment work could handle five, maybe ten times more clients. An accountant who focused only on tax strategy could serve an entire portfolio. But they can't, because they're buried in everything else.

AI can already handle a large share of the process work. It can draft a contract, pull the relevant data, check it against regulations, file the paperwork, and follow up. The judgment still needs a human. But the process, which is most of the work, increasingly doesn't. Strip enough of it away and one expert can do the work of five or ten. That changes everything.
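
The capacity arithmetic can be made concrete with a toy model. All numbers here are assumptions lifted from the essay's rough ratios (one hour of judgment per ten of process, ~2,000 billable hours a year), not data from any firm:

```python
# Toy capacity model (assumed numbers): each engagement needs 1 hour of
# judgment wrapped in 10 hours of process; an expert has ~2,000 billable
# hours per year.

HOURS_PER_YEAR = 2_000
JUDGMENT_HOURS = 1    # per engagement
PROCESS_HOURS = 10    # per engagement

def engagements_per_year(process_automated: float) -> float:
    """Engagements one expert can handle when a given fraction of the
    process work is automated away."""
    hours_per_engagement = JUDGMENT_HOURS + PROCESS_HOURS * (1 - process_automated)
    return HOURS_PER_YEAR / hours_per_engagement

baseline = engagements_per_year(0.0)    # ~182 engagements/year
automated = engagements_per_year(0.9)   # ~1,000 engagements/year
print(automated / baseline)             # ~5.5x capacity per expert
```

Automate 90% of the process and one expert covers roughly 5x the clients; automate all of it and the multiplier approaches 11x, which is where the "five, maybe ten times" range comes from.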

Most people building AI haven't noticed this yet. They're building tools: copilots, dashboards, workflow automations. SaaS with an AI layer, basically. And you can already see how that plays out. A note-taking app summarizes meetings, gets millions of users, raises a big round. Then Google ships the same feature inside Meet for free. Your moat was code, and AI is making code close to free. For the first time ever, SaaS companies trade at a discount to the S&P 500.

Julien Bek at Sequoia points out that for every dollar spent on software, six are spent on the services that software supports. All of SaaS is roughly $300 billion; the professional services those tools support are $6.6 trillion. Twenty to one. Bek puts it well: if you sell the tool, you're in a race against the model. If you sell the work, every improvement in the model makes your delivery faster, cheaper, and harder to compete with.

That's what a new kind of company is starting to do. AI handles the process; humans handle the judgment. And instead of selling the customer a tool, or billing them for time, the company sells the finished work. You need an NDA reviewed? You send it in and get a reviewed contract back. Books closed? You get closed books.

Crosby's cofounder Ryan Daniels has a good analogy. Before 1856, steel was made by artisans who judged the purity of the metal by the color of the flames. After Bessemer invented his process, steel mills became centers of research, inventing better methods of production. That's what made skyscrapers and transcontinental railroads possible. Not better artisans. Better mills. Daniels thinks that's what's happening to professional services now.

Crosby isn't a legal AI tool. It's a law firm. Daniels was general counsel at a tech company before founding it. They piloted every off-the-shelf legal AI product they could find, and none were good enough. “This was annoying at first, but quickly became existential.” So they built their own, with lawyers and engineers working on every case together. Median contract turnaround: about an hour. $86M raised from Sequoia, Lux, and Index.

It's happening in other industries too. Lawhive does consumer law the same way, growing 7x year-over-year across 35 US states. Harper places insurance for 1,000+ new customers a month; a traditional brokerage might do 20. Pilot closes your books, backed by $222M in funding. Nearly $500M into just these four.


Nearly every one of them started by trying to sell AI tools to existing firms. Then pivoted to being the firm. The tool captures the tools budget. The firm captures the work budget. The work budget is twenty times larger.

The obvious question is: why can't existing firms just adopt AI and do this themselves? Some will. These aren't stupid companies. McKinsey has deployed over 20,000 AI agents alongside 40,000 consultants. Accenture has a real engineering org and is moving fast. The Big Four are pouring money into AI labs. It would be naive to write them off.

But think about what automating the process layer actually means for a firm with nearly 800,000 employees. If AI handles half the work, you either need to (a) find twice the customers or (b) cut half the staff. Neither is easy for a partnership that bills hourly and pays everything out to partners. Getting faster literally means making less money. That's not a technology problem. It's a structural one.

Most traditional firms can't sell equity to outsiders and have no mechanism for R&D investment. The Am Law 100 made tens of billions in profit last year and put not a dollar of it into technology. That's more than Google's entire R&D budget, paid out to partners. Every dollar you invest in automation is a dollar not paid to the people who vote on the budget. The incentives point the wrong way.

SaaS companies could vertically integrate, but selling the outcome means owning the liability, and that requires a completely different kind of organization (licenses, E&O insurance, domain practitioners, regulatory compliance). It's not a feature you bolt on. It's a different company.

The logic applies to anyone building an AI product for a service industry. If you're building an AI tool for accountants, you could just close the books yourself. If you're building one for recruiters, you could just place the candidates. You'd have to become a bookkeeping firm or a recruitment firm, but that's sort of the point. It doesn't even have to be a regulated profession. Any time there's a service where the process work dominates, you can (1) automate the process, (2) keep a human on the judgment, and (3) sell the outcome directly.

Everyone has a different name for this. Emergence Capital calls it “AI-Native Services.” Sequoia calls it “autopilots.” Y Combinator broke a twenty-year rule to start funding what they call “AI-native agencies.” I think what actually defines the category is simpler than any of these names suggest. It's what the customer buys. Not a tool, not hours. Completed work. I'm calling it workware.

“Isn't this just software?” In the same way SaaS was “just software.” Same code, same servers. What made SaaS a category was the business model, not the technology. Workware is the same kind of break: the customer stops buying a tool and starts buying the finished work. Everything downstream changes with it. Pricing, liability, margins, defensibility, who you compete with, how you grow.

What it takes

Looking at the companies that are working and the ones that aren't, a few things keep coming up. This probably isn't the complete list, but it's what I've noticed so far.

Practitioners and engineers have to be equals. Not domain experts advising from the side; people who know the work building alongside people who know the systems, on the same problems every day. An AI that drafts a contract can get the format right and miss the clause that matters. An engineer won't catch that. A lawyer will. The edge cases are where the value is, and only practitioners know where those hide.

You have to price on outcomes, not time. When you bill hourly, getting faster means making less money. That's a terrible incentive when your whole edge is automation. When you price on the delivered result, getting faster means serving more customers at higher margins. (SpaceX moved from cost-plus to fixed-price and it pushed them to build reusable rockets. Same idea.) The incentives have to point toward automation, not away from it.
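
A toy comparison makes the incentive flip concrete. The rate, price, and hours below are assumptions for illustration, not figures from any real engagement:

```python
# Same job, two pricing models (all numbers assumed): 10 hours of work
# at $400/hour, or a fixed $4,000 outcome price. Automation then cuts
# delivery time in half.

RATE = 400              # $/hour
OUTCOME_PRICE = 4_000   # fixed price for the finished work
hours_before, hours_after = 10, 5

# Hourly billing: getting faster halves revenue per job.
hourly_before = RATE * hours_before   # $4,000 per job
hourly_after = RATE * hours_after     # $2,000 per job

# Outcome pricing: revenue per job is unchanged, so the freed hours
# become extra jobs in the same 40-hour week.
jobs_per_week_before = 40 / hours_before   # 4 jobs -> $16,000/week
jobs_per_week_after = 40 / hours_after     # 8 jobs -> $32,000/week
```

Under hourly billing the automation destroys half the revenue; under outcome pricing the same automation doubles it. That's the whole argument in two lines of arithmetic.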

And you have to measure quality, rigorously. How do you pick a lawyer right now? A friend's recommendation, the firm's brand, a Google search. There's no real way to compare. But when AI does the work, everything becomes measurable. Crosby tracks whether the AI catches the same issues a senior lawyer would, how often humans override it, and on what. Over thousands of cases, they start to know, concretely, how good they actually are. Daniels compares this to evidence-based medicine, which transformed healthcare by replacing “trust the doctor” with measurable outcomes. Nothing like that ever took hold in professional services, because the work was too hard to measure. It isn't anymore.

Why it compounds

A traditional services firm grows by hiring. More clients means more people, always. Revenue scales linearly with headcount (which is why services firms trade at 1-2x revenue while software companies trade at 10-20x). Accenture does ~$70B with nearly 800,000 people. The top ten firms hold less than 6% of the global market. Tens of thousands of small firms in every country, all running the same processes, all hitting the same ceiling. You can't consolidate what doesn't scale.

A workware firm grows by learning. Each engagement trains the system: a new edge case, a faster workflow, a pattern the AI missed. The thousandth case costs less to deliver than the hundredth, and you didn't hire anyone new to get there. That's not how services work. That's how software works, except the revenue per customer looks like services.
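
One way to sketch the thousandth-case claim is a power-law learning curve in the style of Wright's law. The exponent and starting cost below are illustrative assumptions, not data from any of these companies:

```python
# Illustrative learning curve (assumed parameters): unit delivery cost
# falls as a power law of cumulative cases, cost(n) = cost(1) * n**(-b).

def unit_cost(n: int, first_case_cost: float = 100.0, b: float = 0.15) -> float:
    """Cost to deliver the n-th engagement under a power-law learning curve."""
    return first_case_cost * n ** (-b)

hundredth = unit_cost(100)     # ~$50 per case
thousandth = unit_cost(1000)   # ~$35 per case, with no new hires
```

The exact exponent doesn't matter; the shape does. Costs fall with cumulative volume rather than rising with headcount, which is the software curve wearing a services revenue line.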

The catch is liability. When you sell a tool, failure is the customer's problem. When you sell the outcome, failure is yours. You need professional licenses (in regulated industries), E&O insurance, human review at critical points. There's real ongoing cost to owning what you deliver. Margins won't reach pure-software levels.

But that cost is the moat. The licenses, the operational infrastructure, the trust you build with customers and suppliers: none of that is easy to replicate. And here's what's weird about defensibility: when models get better, a tool company's product gets easier to clone. But when models get better, a workware company just gets cheaper to run. The moat stays. A firm that's handled ten thousand engagements knows things about its domain that can only be learned through operation. Which carriers accept which risk profiles, at what price, with what exclusions. Which counterparties are reliable. Which structures hold up under stress and which fall apart in ways the textbook doesn't cover. Every expert override teaches the system to handle that pattern next time. None of that is code. None of it gets replicated by deploying a better model.

And today's judgment work becomes tomorrow's process work. As these systems accumulate data about what good judgment looks like, the frontier shifts. Tasks that need twenty years of experience today can be handled by the system next year, because the workware company has seen every variation and the foundation model has seen none. The moat widens.

Where this goes

When computing was scarce, hardware captured the value. When computing got cheap, value moved to software. Software is getting cheap now (AI collapsed the cost of writing it), and value moves again: to whoever controls the scarce inputs to real-world outcomes. Operational data, human judgment, supplier relationships, regulatory licenses, trust.

And the supply side makes this more urgent than people realize. The US has lost over 300,000 accountants since 2019. Degree completions are at a 20-year low. 75% of working CPAs are approaching retirement. The pattern repeats in law, consulting, brokerage. The people these industries run on are disappearing, right as AI gets good enough to do most of the work. The firms that deliver expert outcomes with fewer experts don't just have better margins; they have the only operating model that survives this.

A $6.6 trillion market where the top ten players hold less than 6%. A labor pipeline in structural decline. Services have always had the revenue but never the scale. Software has always had the scale but never the revenue. Workware gets both.