In Part 1, we discussed how most durable AI companies embed themselves in mission-critical systems that customers cannot afford to turn off.
That raises a natural question: if horizontal SaaS platforms have created some of the largest software companies in history, why do I think vertical AI will win in regulated and high-stakes markets?
Answer: In high-stakes environments, generality is a liability.
That may sound counterintuitive—especially considering the largest public software companies are largely horizontal platforms: Microsoft, Salesforce, ServiceNow, Workday. These businesses scaled by serving many customers, many workflows, and many use cases.
This model works extraordinarily well when failure is tolerable, switching costs are merely economic, and flexibility matters more than correctness.
But mission-critical systems operate under a different set of rules, and AI makes those differences more pronounced, not less.
Size vs. Durability
Horizontal SaaS companies get bigger. They maximize addressable surface area, win on distribution, and benefit from standardization. History proves this model works.
But size and durability are not the same thing.
In my opinion, AI will shift where durability forms. In low-risk environments, AI enables horizontal platforms to customize enough to deliver productivity gains and incremental workflow improvements.
In high-stakes environments, however (regulated operations, critical infrastructure, and systems with legal or physical consequences), those abstractions break.
What matters isn’t how many things the system can do. It’s whether it can do one thing correctly, every time. That’s how vertical AI will win.
Horizontal AI Optimizes for Scale, Vertical AI Optimizes for Consequence
Horizontal AI is broadly useful across contexts. It prioritizes generality, flexibility, and speed of deployment.
Vertical AI operates inside constraints. It prioritizes correctness, accountability, and reliability under real-world conditions.
In environments where failure has consequences, these priorities matter far more than feature breadth.
Why Regulated and High-Stakes Markets Punish Generality
Across healthcare, government, infrastructure, and defense, I’ve repeatedly seen four constraints that break horizontal abstractions:
Real-world data is messy: incomplete, inconsistent, and shaped by decades of legacy decisions.
Rules matter more than predictions: LLMs are probabilistic, and being “mostly right” isn’t enough when systems must encode policies, exceptions, and audit trails (a minimal sketch of this pattern follows this list).
Human oversight is a requirement: supervised intelligence under regulatory scrutiny isn’t optional; it’s integral.
Operational risk trumps replacement risk: the first question customers ask is “What happens if this breaks?”, not “What does this replace?”
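To make the second constraint concrete, here is a minimal sketch of the pattern it implies: a deterministic policy layer that decides whether to accept a probabilistic model’s output, with an audit trail recording every decision. All names here (Extraction, RULES, review) are hypothetical and illustrative, not any particular vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Extraction:
    """Hypothetical output from a probabilistic document model."""
    field: str
    value: str
    confidence: float  # the model's own estimate; never treated as ground truth

# Deterministic policy checks sit on top of the model. The rule, not the
# model's confidence score, decides whether a value is accepted.
RULES = {
    "consent_state": lambda v: v in {"CA", "NY", "TX"},        # illustrative allowlist
    "retention_years": lambda v: v.isdigit() and int(v) >= 6,  # HIPAA-style minimum
}

def review(extraction: Extraction, audit_log: list) -> bool:
    rule = RULES.get(extraction.field)
    passed = bool(rule and rule(extraction.value))
    # Every decision is logged, pass or fail, so the system stays auditable.
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "field": extraction.field,
        "value": extraction.value,
        "model_confidence": extraction.confidence,
        "accepted": passed,
    })
    return passed  # failures route to human review rather than auto-commit

log: list = []
print(review(Extraction("retention_years", "7", confidence=0.91), log))  # True
print(review(Extraction("consent_state", "ZZ", confidence=0.98), log))   # False
```

The design choice worth noticing: the model’s confidence is logged but never used as the acceptance criterion. In an environment where “mostly right” isn’t enough, the deterministic rule decides.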
Domain Depth in Action
When KnowledgeLake deployed document intelligence for government healthcare records, the system didn’t just “read documents.”
It learned HIPAA retention rules, state-by-state consent variations, and edge cases that only appear when processing millions of real-world files. That domain depth is what enabled 99%+ accuracy at scale, and it’s why customers now plan growth around the system rather than evaluating replacements.
This isn’t AI as a feature. It’s AI as infrastructure.
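As a purely illustrative sketch of what that domain depth can look like, consider jurisdiction-specific policy encoded as data rather than left to model inference. The CONSENT_RULES table and consent_is_current function below are hypothetical placeholders with made-up values, not a description of KnowledgeLake’s actual system; the point is the fail-closed default, where an unknown jurisdiction routes to human review instead of a guess.

```python
from datetime import date, timedelta

# Hypothetical policy table: state -> (consent validity in days, whether a
# wet signature is required). Values are placeholders, maintained with
# domain experts rather than inferred by a model.
CONSENT_RULES = {
    "CA": (365, False),
    "NY": (180, True),
    # ... one entry per jurisdiction
}

def consent_is_current(state: str, signed_on: date, today: date) -> bool:
    """Fail closed: a jurisdiction with no rule on file is never auto-approved."""
    rule = CONSENT_RULES.get(state)
    if rule is None:
        return False  # no rule on file -> escalate to human review, never guess
    valid_days, _requires_wet_signature = rule
    return today - signed_on <= timedelta(days=valid_days)

print(consent_is_current("CA", date(2025, 1, 10), date(2025, 6, 1)))  # True
print(consent_is_current("ZZ", date(2025, 1, 10), date(2025, 6, 1)))  # False -> review
```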
Can Horizontal Companies Move Vertically? Yes, but with Limits
AI enables horizontal platforms to move into vertical use cases. Likewise, vertical platforms can expand laterally with agents and automation.
But direction of travel doesn’t remove the tradeoff:
Horizontal platforms optimize flexibility first.
Vertical AI systems optimize correctness under constraint.
Where failure is unacceptable, embeddedness matters more than reach. That’s why the most defensible AI businesses in regulated and mission-critical settings don’t ask, “What can this model do?”
They ask, “What must this system never get wrong?”
The Real Question for Builders and Investors
It’s not whether horizontal SaaS companies can win. They already have.
The question is: Where does AI-driven defensibility actually form?
Increasingly, I think it forms where systems become operational dependencies, where switching introduces risk rather than inconvenience, and where intelligence is embedded rather than layered on.
That’s where the value of vertical AI compounds.
To summarize these core ideas, here are a few common questions this thesis addresses:
Frequently Asked Questions
Why does vertical AI outperform horizontal AI in regulated industries?
In environments where failure has legal or operational consequences, flexibility matters less than correctness. Vertical AI systems are built to operate within rules, constraints, and oversight, while horizontal AI systems are optimized for generality and scale.
What makes an AI system mission-critical for an organization?
An AI system becomes mission-critical when it is embedded into workflows that organizations cannot afford to turn off or replace. At that point, the system becomes operational infrastructure.
Why is general-purpose AI often a liability in high-stakes environments?
General-purpose AI relies on probabilistic reasoning and broad abstractions. In healthcare, government, infrastructure, and defense, systems must encode rules, exceptions, and auditability. Being “mostly right” is not sufficient when correctness is required.