When your organisation debates whether to invest in semantic layers, ontologies, or knowledge graphs, everyone in the room thinks they’re having a technical conversation. The architects argue about implementation complexity. The data team weighs tooling trade-offs. Finance calculates ROI based on engineering hours.
They’re not having a technical conversation. They’re making philosophical commitments about the nature of knowledge itself, and those commitments determine what questions your organisation can answer, what kind of intelligence is possible, and whether your AI investments will actually work.
The problem is that this realisation often comes too late.
The Pattern
Look at what actually happened over the past twenty years.
The BI industry perfected semantic layers with tools like dbt and Looker, defining metrics in YAML configuration files: revenue equals this, churn equals that. Clean, consistent, calculable.
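To make the shape of that knowledge concrete, here is a minimal sketch of what a semantic-layer metric definition amounts to. Real tools like dbt and Looker express this in YAML or LookML; plain Python dictionaries are used here so the example is self-contained, and the metric names and fields are invented for illustration.

```python
# Hypothetical semantic-layer-style metric definitions: each metric is
# a filter plus an aggregation over rows. Knowledge here IS measurement.
METRICS = {
    "revenue": {
        "agg": "sum",
        "field": "amount",
        "filter": lambda row: row["status"] == "completed",
    },
}

def evaluate(metric_name, rows):
    """Apply a metric definition to raw rows: filter, then aggregate."""
    m = METRICS[metric_name]
    values = [r[m["field"]] for r in rows if m["filter"](r)]
    return sum(values) if m["agg"] == "sum" else len(values)

orders = [
    {"amount": 120, "status": "completed"},
    {"amount": 80, "status": "refunded"},
    {"amount": 50, "status": "completed"},
]
print(evaluate("revenue", orders))  # 170
```

Note what the structure can hold: a definition and a number. There is nowhere to put a relationship, a precedent, or a reason.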
Meanwhile, healthcare systems, life sciences companies, and intelligence agencies bet everything on formal ontologies, mapping entities like Patients, Drug Interactions, and Intelligence Assets with their properties and relationships.
Both approaches work. Both solve real problems. But they rest on fundamentally different assumptions about what knowledge is, and those assumptions are invisible to the people making architectural decisions.
When you build semantic layers that define metrics, you’re adopting an empiricist epistemology: knowledge is data organised for calculation. Measure things accurately, standardise the definitions, and understanding follows. When Palantir built Foundry around ontologies that map entities, hierarchies, and logical relationships, they adopted a rationalist-structural epistemology: knowledge requires understanding concepts and how they relate, creating an opening for a ‘digital twin’ of reality where systems can reason about meaning.
When organisations build context graphs that capture why decisions were made (who approved what, under what conditions, based on what precedent), they’re embracing a pragmatist epistemology: knowledge is what enables effective action in specific situations.
Your architecture embeds a theory of knowledge whether you intended it or not. And that theory determines what your organisation can and cannot do.
The Questions You Can’t Ask
Start with semantic layers. They excel at answering ‘What is our revenue?’ or ‘How many users converted last quarter?’ These are measurement questions, and semantic layers were purpose-built for consistent measurement at scale.
But try asking a semantic layer: ‘Why did this customer’s order violate standard terms, what precedent existed for similar exceptions, and under what conditions was the deviation justified?’ You can’t. The architecture can’t hold this kind of knowledge because it was built on the assumption that knowledge is measurement, not reasoning about concepts and their relationships.
Meanwhile, healthcare systems with clinical ontologies can ask: ‘Given this patient’s symptoms, medications, and genetic markers, what are the contraindications for this proposed treatment, and what alternative protocols exist for patients with similar profiles?’ The ontology holds formal representations of Patients, Conditions, Drugs, and their relationships. It can reason about new situations using logical inference over structured knowledge.
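The difference in kind can be seen in a heavily simplified sketch of such an ontology: typed relationships between entities, queried by lookup over structure rather than by aggregation. All drug, condition, and patient names below are invented, and real clinical ontologies are vastly richer.

```python
# A toy ontology: (entity, relation) -> related entities.
# Contraindications are found by reasoning over relationships,
# not by measuring anything.
ONTOLOGY = {
    ("warfarin", "contraindicated_with"): {"aspirin"},
    ("warfarin", "treats"): {"clotting_disorder"},
    ("aspirin", "treats"): {"inflammation"},
}

PATIENTS = {
    "p1": {"conditions": {"clotting_disorder"}, "medications": {"warfarin"}},
}

def contraindications(patient_id, proposed_drug):
    """Return current medications that conflict with a proposed drug."""
    conflicts = set()
    for med in PATIENTS[patient_id]["medications"]:
        # Check the relationship in both directions.
        if proposed_drug in ONTOLOGY.get((med, "contraindicated_with"), set()):
            conflicts.add(med)
        if med in ONTOLOGY.get((proposed_drug, "contraindicated_with"), set()):
            conflicts.add(med)
    return conflicts

print(contraindications("p1", "aspirin"))  # {'warfarin'}
```

The answer emerges from the shape of the knowledge, not from a metric definition. No semantic layer holds the structure this query needs.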
Life sciences companies with biomedical knowledge graphs can ask: ‘What are all the known pathways between this protein and that disease mechanism, and which drug targets in those pathways have been explored?’ They mapped the domain formally because measurement alone couldn’t capture the knowledge their work required.
Intelligence agencies with context graphs can ask: ‘Why was this contractor allowed to access classified material on this date, who authorised it, what justification was documented, and what precedent was cited?’ They captured decision rationale because effective intelligence work demands understanding not just what happened, but why actions were taken and under what authority.
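A context graph's unit of knowledge looks different again: not an entity or a metric but a decision, carrying its authorisation, justification, and precedent. The sketch below is illustrative only; the field names and query helper are hypothetical, not a real schema.

```python
# A context-graph node records why an action was permitted,
# not merely that it occurred.
DECISIONS = [
    {
        "id": "d-102",
        "action": "grant_access",
        "subject": "contractor-7",
        "resource": "dossier-4",
        "date": "2024-03-12",
        "authorised_by": "officer-kim",
        "justification": "task force support, 30-day scope",
        "precedent": "d-088",
    },
]

def why(action, subject, resource):
    """Answer 'why was this allowed?' by retrieving decision context."""
    for d in DECISIONS:
        if (d["action"], d["subject"], d["resource"]) == (action, subject, resource):
            return {k: d[k] for k in ("authorised_by", "justification",
                                      "precedent", "date")}
    return None

print(why("grant_access", "contractor-7", "dossier-4"))
```

A semantic layer could count access grants; an ontology could model who may access what. Only a structure like this can answer why one grant was made.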
The questions you cannot ask are constrained by the epistemological commitments embedded in your architecture. Many organisations discover this constraint only when a critical strategic question proves unanswerable with the structures they’ve built.
Why AI Makes This Urgent
Large language models don’t consume dashboards. They consume semantic representations. The architecture you built for human analysts may be fundamentally misaligned with what AI systems need to function.
Consider how Palantir’s AI agents interact with their ontology. The system understands objects (Aircraft, Shipment, Customer) with properties and relationships. An agent can ‘find all late Shipments for this Customer, then trigger this Action’ without needing to understand database schemas. It reasons about real-world entities, not tables and columns.
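In spirit, that agent interaction reduces to a query over domain objects rather than tables. The sketch below is not Palantir's API; the object names echo the essay's example, and the code is a hypothetical illustration of what a domain-level query looks like when schemas disappear.

```python
# Entities an agent reasons over: Customers and Shipments as objects,
# not rows behind joins. All data is invented.
SHIPMENTS = [
    {"id": "s1", "customer": "c1", "status": "late"},
    {"id": "s2", "customer": "c1", "status": "delivered"},
    {"id": "s3", "customer": "c2", "status": "late"},
]

def late_shipments_for(customer_id):
    """Domain-level query: 'find all late Shipments for this Customer'."""
    return [s["id"] for s in SHIPMENTS
            if s["customer"] == customer_id and s["status"] == "late"]

def trigger_action(shipment_id):
    # Placeholder for the 'Action' an agent would trigger next.
    return f"expedite:{shipment_id}"

actions = [trigger_action(s) for s in late_shipments_for("c1")]
print(actions)  # ['expedite:s1']
```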
Compare this to semantic layers built as YAML configuration files. These capture metric definitions over database tables, stripped of relationships and richer context. You can feed these to an LLM, certainly. But the model is pattern-matching over metric definitions, not reasoning about a domain.
If you believe intelligence is statistical pattern recognition over structured data, you might conclude that feeding LLMs your semantic layer is sufficient. If you believe intelligence requires logical reasoning over formal knowledge structures, you’ll conclude that ontologies are non-negotiable for any domain with conceptual complexity. If you believe intelligence is context-dependent action, you’ll insist on capturing decision rationale and procedural knowledge.
Your theory of intelligence determines your architecture. And organisations now building ‘AI-ready’ infrastructure without examining their theory of intelligence may discover that their architectural choices constrain what kinds of AI are even possible.
What This Actually Means
The organisations that will thrive aren’t necessarily those with the most sophisticated technology. They’re those that think most clearly about what knowledge their domain requires and what kind of intelligence they need to operate effectively.
Some domains truly need only consistent metrics. If your competitive advantage is operational efficiency in well-understood processes, semantic layers may genuinely suffice. But domains with conceptual complexity (healthcare, supply chain, fraud detection, regulatory compliance) may need systems that can reason about concepts and relationships. Domains requiring explainable decision-making may need rich contextual knowledge about why actions were taken and under what authority.
Most organisations need some combination of these. But you cannot make this determination without examining the epistemological foundations of your approach. And this examination rarely happens in practice. Instead, organisations are asking ‘Should we use dbt or buy Palantir?’ without recognising they’re actually asking ‘Is our domain’s knowledge primarily empirical measurement or conceptual reasoning?’
The practical stakes are immediate. You’re being asked to build ‘AI-ready’ infrastructure. But AI-ready for what kind of intelligence? If your domain needs systems that reason about complex concepts, metric definitions may not suffice regardless of how cleanly they’re defined. You’ll need formal knowledge representation, which requires different skills, different timelines, and fundamentally different investments than data teams typically deliver.
Your competitors may be building knowledge architectures whilst you’re perfecting dashboards. Healthcare systems with clinical ontologies, life sciences companies with biomedical knowledge graphs, and intelligence agencies with operational context graphs made their epistemological commitments years ago. They now have AI systems that work at scale because they built architectures aligned with what their domains required. The gap between them and others isn’t technical sophistication. It’s epistemological clarity.
When organisations discover that their architecture cannot answer the questions their domain requires, or that their AI investments produce systems that cannot reason about their actual problems, the constraint won’t be computational power or model capability. It will be the philosophical assumptions embedded in architectural choices made years earlier by people who thought they were just picking between technical alternatives.
The question isn’t whether you have an epistemology. You do, implicit in every architectural choice. The question is whether you’ll examine it before it becomes your constraint.
Further reading:
↳ The philosophical commitments embedded in data architecture correspond to the ignorance taxonomy developed in The Dimensions of Not Knowing.
↳ The vocabulary confusion that makes these commitments invisible is examined in The Vocabulary Problem When Digital.
↳ Why AI teams are poorly positioned to notice these commitments is the subject of The Homogeneity Trap.
Garden notes
- What a frame is — the epistemological commitment hidden inside every architectural choice
- Why the frame cannot see itself — organisations cannot see the theory of knowledge embedded in their own systems
- Proxy capture — semantic layers as a proxy for knowledge
- The Dimensions of Not Knowing — companion essay: the taxonomy underlying these distinctions
- The Homogeneity Trap — companion essay: why AI teams are poorly positioned to notice this