Every vertical-AI startup you see is really competing on one thing — which LLM provider they chain themselves to, and whether they can exit that chain when the pricing shifts. The founders who figured this out early are the ones who are going to own their categories. The rest are optionality on someone else's margin.
We figured this out early. It is why LouieAuto has a provider router, not a provider.
The trap
You build a vertical-AI product. You pick a provider. Call it Anthropic. You build all your prompts against Anthropic's model. You tune your system prompts. You hard-code your tool-use patterns. You build your evaluation harness against Anthropic's outputs.
Now Anthropic raises prices 40%. Or Anthropic launches a competing product in your vertical. Or Anthropic has a 6-hour outage during your customers' peak hours.
You cannot switch. Your entire product assumes the Anthropic model's strengths. Switching to OpenAI means retuning every prompt, rebuilding every evaluation test, rewriting every tool-use pattern. That is six months of engineering work. You do not have six months.
The provider router
Louie routes each request across five backends: Anthropic, OpenAI, Azure OpenAI, Google (Vertex), and local Ollama. The routing is rule-based, keyed on cost, latency, data residency, and task type.
A lender-routing query that needs real-time access to the moat data goes to Anthropic for the reasoning quality. A high-volume BDC email draft goes to OpenAI for the cost profile. A customer-PII query subject to data-residency rules goes to Azure or Google in a customer-controlled region. A dealer who wants on-prem runs a local Ollama instance on their own hardware.
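A minimal sketch of what first-match, rule-based routing like this can look like. The request fields, rule order, and backend names below are illustrative assumptions, not Louie's actual schema:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Request:
    task_type: str            # e.g. "lender_routing", "bdc_email"
    residency: Optional[str]  # customer-mandated region, if any
    on_prem: bool = False     # dealer wants inference on their own hardware

# Ordered rules: the first predicate that matches wins.
RULES: list[tuple[Callable[[Request], bool], str]] = [
    (lambda r: r.on_prem,                       "ollama"),        # on-prem mandate
    (lambda r: r.residency is not None,         "azure_openai"),  # data-residency rules
    (lambda r: r.task_type == "lender_routing", "anthropic"),     # reasoning-heavy
    (lambda r: r.task_type == "bdc_email",      "openai"),        # high-volume, cost-sensitive
]

def route(request: Request, default: str = "openai") -> str:
    """Pick a backend by first-match rule evaluation."""
    for predicate, backend in RULES:
        if predicate(request):
            return backend
    return default
```

The point is not the fifteen lines. It is that the rules live in one place, so a price change or an outage is a config edit, not a rewrite.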
Every prompt is written in a provider-agnostic format. The tool-use schema is portable. The evaluation harness runs the same evals against each provider monthly to track regressions.
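Portability in practice means keeping one neutral tool definition and translating it per provider at the call site. A sketch, with an invented tool name; the two output shapes follow Anthropic's and OpenAI's published tool-calling formats:

```python
# One neutral tool definition; this internal format is an assumption.
TOOL = {
    "name": "lookup_lender_programs",   # hypothetical tool, for illustration
    "description": "Fetch current lender programs for a deal structure.",
    "parameters": {                     # plain JSON Schema
        "type": "object",
        "properties": {"lender_id": {"type": "string"}},
        "required": ["lender_id"],
    },
}

def to_anthropic(tool: dict) -> dict:
    # Anthropic's Messages API takes the JSON Schema under "input_schema".
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }

def to_openai(tool: dict) -> dict:
    # OpenAI's chat API nests the same fields under "function".
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["parameters"],
        },
    }
```

The eval harness can then exercise the same neutral definitions against every backend.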
Why this is the actual moat
The headline moat is the operator-encoded knowledge — the lender playbooks, the desk patterns, the F&I matching. That is real.
But the defensible moat is the routing layer. Because every competitor is going to have to build this eventually, and most of them are not going to. They will bet on their provider holding the line on price and capability for the next three years, and they will lose the bet.
If you are an acquirer pricing a vertical-AI asset, the first thing to check is provider diversity. Single-provider assets are optionality on someone else's pricing sheet. Multi-provider assets are actually their own company.
The acquirer angle
For an acquirer with existing enterprise LLM contracts — which is every major dealer-software incumbent by now — the provider router is the margin lever. Plug the router into your Azure or Google enterprise agreement and inference cost drops to your existing marginal rate, which is effectively zero.
Pre-acquisition, LouieAuto's gross margin at commercial pricing is around 85%. Inference is the main variable cost. Post-acquisition in a router-plugged configuration, gross margin is above 90%. That is a real number on the acquirer's P&L.
Call it three points to be conservative. Three points of gross margin on $23M of ARR is a $690,000-per-year variable-cost win. Over the five-year model, that is roughly $3.5M of cash flow that did not exist pre-acquisition.
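The arithmetic, spelled out; the five-year total is undiscounted, which is how it rounds to the quoted $3.5M:

```python
arr = 23_000_000               # ARR in the five-year model
margin_points = 0.03           # three points of gross margin
annual = arr * margin_points   # 690,000 per year
five_year = annual * 5         # 3,450,000 undiscounted, ~$3.5M
```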
At the numbers that matter to finance teams at scale, the router is worth the acquisition by itself.
The warning
If you are evaluating other vertical-AI targets in any industry, this is the question to ask in the first call: how many providers does your production workload run against, and what is the switching cost between them?
If the answer is one, or the switching cost is months, you are not buying a company. You are buying a rental on someone else's infrastructure. Price it accordingly.