
Every year for five years now, “IsADataThing” YouTuber Ashleigh Faith has invited my ex-boss, Editor-in-Chief of The Cagle Report Kurt Cagle, and me to opine on knowledge graphs, the outlook for ontologists, graph retrieval-augmented generation (graph RAG), and related issues. Of course, we always look forward to discussing what we can expect during the coming year as well.
If you’d like to immerse yourself in the full conversation, the YouTube video of this year’s roundtable can be found here:
This time around, I thought I would transcribe, edit, and excerpt portions of this conversation and share them, so readers can explore the individual topics we covered.
This first excerpt discusses the disconnect in the job market for semantic and ontology experts, attributing it to corporate confusion over buzzwords like “knowledge graph” and “world models,” and the failure to commit to solving the major problems that these technologies make it possible to solve.
Without further ado, here’s the first excerpt:
Ashleigh: Demand for ontologists is high, but organizations can’t seem to identify the people who can actually do the work, since many who have been let go are still looking. There is a disconnect between the ones trying to hire and the ones who have the skills.
My personal theory on this disconnect is simple: most businesses probably don’t even know what an ontologist or a knowledge graph is. They just hear all the buzzwords—’knowledge graph,’ ‘RAG,’ ‘graph RAG’—from the news and don’t actually get what they mean. They’re basically looking for a ‘unicorn’ employee. This is likely why people in our semantic space are having such a hard time landing jobs.
Take Ahren Lehnert, for example (we all know Ahren!). When he looks at job descriptions for ontology roles, half of each one sounds spot-on, but then it abruptly switches to asking for skills like Python and AI, which aren’t always a traditional match. While a knowledge engineer role actually has that overlap between semantics and AI, that’s not what most job postings look like. I’m genuinely curious what Kurt and Alan think is really going on out there.
Kurt: Well, there are a couple of factors at play. One is the knee-jerk reaction: ‘Oh my God, I just spent my entire budget on AI, the results are terrible, and now we need a quick fix—we need some kind of magic solution.’
Ashleigh: It’s magic. Yeah, I know. It’s definitely magic.
Kurt: You’re right, magic and unicorns might get you started on a good enterprise app, but I see two core issues driving the current conversation.
First, the concept of ontology is suddenly huge, largely thanks to the Palantir spiel. Even though what they call an ontology isn’t exactly standard RDF, their prominence has brought a basic awareness of graph technology into the spotlight.
Second, many experts, like Gary Marcus and Jeffrey Funk, are arguing for a solid, foundational element—a “ground truth”—to make AI work. Similarly, Yann LeCun, while from the neural-net side of AI, has also been emphasizing the need to model this ground truth in reality.
The problem is, you simply cannot model ground truth with a language model. A language model is inherently opinionated; it’s a giant narrative super space based on its training data. It is good for describing or producing language, but it’s not necessarily reflective of truth or reality.
Ashleigh: It’s honestly so frustrating when people talk about the idea of “ground truth,” claiming, “This information is peer-reviewed” or “It’s from an authoritative source like the government.” The simple fact is, we’re human, and humans make mistakes—or even outright lie. You can lie by omission, or you might not even know you’re lying because it’s just your limited understanding of a situation. When we discuss “ground truth,” there are a ton of landmines. And then there are “world models”—the critical question is, whose world model are we even talking about?
Alan: That’s the whole issue with the term “world model”—it’s just another buzzword. You have 8,000 startups all scrambling to find a new way to stand out in 2026. They’re [almost] all talking about ‘world models’ and building them purely statistically. But if you rely on that monolithic, statistical approach, you’re just going to end up with the same mediocrity and unpredictability we’ve always had. And so, we inevitably have to deal with the buzzword.
On the question of who’s hiring whom, there’s a clear tendency to look at data science teams first and hire people who are already familiar with ontology within that data science circle, because that’s what’s comfortable and familiar to them.
Essentially, we’re dealing with a tribal dynamic within these organizations. I always refer back to the Master Algorithm author [Pedro Domingos, now an emeritus CS professor from the University of Washington], who discussed the machine learning tribes—the ‘symbolists’ were one of them. The challenge is, it feels like a huge leap to hire ontologists from a different tribe when the organization is only used to this one monolithic, statistical approach.
Ashleigh: In my experience, when you speak with a major company, they’re overwhelmed by diverse opinions on graph technology: Do we need an ontology? Should we use Graph RAG? Do we need entity resolution? What is the difference between a semantic layer, semantic search, and a knowledge graph?
I’ve learned that all this confusion can usually be traced back to the vendors. Each vendor pushes their own solution (RDF, Property Graph, etc.), which becomes their exclusive worldview. It’s ridiculous that we have to constantly deal with this, but it’s creating a lot of market confusion.
This complexity even seeps into job postings. When a company posts for an ‘ontologist,’ it creates buzz, but what are they really looking for? Often, they hire an ontologist who ends up stuck doing basic taxonomy work, or they hire an ontologist when they actually needed a taxonomist. These roles are related but not the same.
Beyond the technology (RDF vs. LPG), there are many different types of graphs: an analytics graph, an enterprise knowledge graph, a verification graph, a semantic query expansion graph—which one are they building?
The term “knowledge graph” has become so broad it’s like saying, ‘I have an API.’ Cool, but what does it actually do? Our specialized expertise is still essential because not everyone is an expert on which tool to use. Knowledge graphs are a powerful tool, but like a Swiss Army knife, they’re only useful if you know which part to deploy.
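An aside on that Swiss Army knife point: at its simplest, a knowledge graph is just a set of subject–predicate–object statements plus the ability to traverse them. Here is a minimal sketch in plain Python—the entities, predicates, and facts are invented purely for illustration, not drawn from any real system:

```python
# A knowledge graph at its barest: subject-predicate-object triples.
# All names and facts below are hypothetical examples.
triples = {
    ("AcmeCorp", "hasSubsidiary", "AcmeLabs"),
    ("AcmeLabs", "employs", "Dana"),
    ("Dana", "hasRole", "Ontologist"),
    ("Ontologist", "subClassOf", "InformationArchitect"),
}

def objects(subject, predicate):
    """Traverse the graph: all objects linked from subject via predicate."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

# A one-hop query: whom does AcmeLabs employ?
print(objects("AcmeLabs", "employs"))       # {'Dana'}

# A two-hop walk: Dana's role, and what kind of role that is.
role = next(iter(objects("Dana", "hasRole")))
print(objects(role, "subClassOf"))          # {'InformationArchitect'}
```

In practice, of course, this is where the vendor worldviews Ashleigh mentions diverge: one camp expresses these statements as RDF triples queried with SPARQL, another as labeled property graphs queried with Cypher or Gremlin—but the underlying idea of typed entities connected by typed relationships is the same.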
Kurt: Getting back to the core question: where exactly are the ontology jobs right now? I think part of what I see is influenced by survivorship bias, as I’m mostly seeing situations that people bring directly to me.
However, you commented earlier—before we started recording—that many companies are now essentially saying, “We can’t afford an ontologist, whatever that is.” Instead, they really just need someone to mentor, or (I’ll say, not completely dismissively) hand-hold, their baby taxonomists so they can understand the concepts.
I’m upskilling taxonomists, which is fine. I think that if you’re dealing with taxonomy, you’re already dealing with one portion of what an ontologist should do.
I make a big point of saying, okay, an ontologist is simply another kind of information architect.
The role of an ontologist is a specialization, but to understand its place, think of it within your organization’s Information Architecture (IA). The terms IA and Data Architect tend to be used interchangeably, but in both cases, you need two different skill sets: someone who can come in and work at the analytics level, and someone who can work effectively with the toolsets available.
For example, the necessary IA tools—such as “clawing”—have become an indispensable part of my own toolkit. However, the reality is that many people don’t fully grasp the power of new technologies. We talk about query languages, and then there’s the “spooky, scary” subject of SHACL that people are hearing strange things about (and I feel a little responsible for raising those topics).
Despite the need for this expertise, organizations are hesitant to commit to a full-time ontologist. This is primarily because an ontologist, like any architect, serves a design role. Once the design is complete, their day-to-day utility in that specific scenario often drops off dramatically.
We’re at a stage where I think organizations are going to start growing their own ontologists, but they won’t necessarily recognize that what they’re growing is an ontologist.
Alan: I’d only say this: I truly wish people would aim to solve the big, systemic problems instead of just tackling analytics, content management, or merely doing a little data management with their existing inventory.
Think about it—you could use one foundational method to solve a problem repeatedly, rather than just focusing on one small area. Stop looking for your keys only under the lamppost just because that’s where the light is. You won’t gain the full benefit unless you pursue the big-picture solution and implement the right foundational approach to achieve it. Without that, the benefits simply won’t materialize.
We talk about supply chain integration—that’s even more challenging than enterprise-wide solutions. When will people finally wake up, realize this is necessary, and just commit to getting it done?





