
Nowadays, most announcements of a new round of VC funding for just about any AI startup make me queasy. Tracxn estimated that there are now over 8,600 VC-funded AI startups globally. As far as I can tell, perhaps 8,500 or more of these startups have the cart before the horse.
Almost none of the startups I’ve looked at–certainly not the ones with the most funding–are using best practices in data, content, or knowledge management via standards-based knowledge graphs and graphRAG in support of better AI.
Almost none are taking the fundamental data quality problems seriously enough to build a true semantic layer. While shallow semantic layers are in evidence, a deep, broad, logically articulated semantic layer is missing.

How can organizations claim to be moving to an agent-based paradigm without a true semantic layer? Agents need to be told what to do, explicitly. See the summary of my talk on agentic AI for more detail.
The dream of agentic AI versus today’s harsh reality
The Remote Labor Index is a benchmark developed by Scale AI and the Center for AI Safety that evaluates how well AI agents perform in real-world scenarios.
In the initial study, agents performed well on only 2.5 percent of freelance tasks.
Pradeep Sanyal, Chief AI officer at Capgemini, said recently, “Executives love to talk about ‘AI agents transforming work.’ Yet no one can point to a single dashboard showing how work actually flows. The agents are ready. The map isn’t. Across industries, teams are building copilots and assistants without understanding the processes they are meant to.”
Even the Pradeep Sanyals of the world may not be fully versed in the depth of the challenge enterprises face with a shift to an agent-based development paradigm.
Unless they update antiquated, provincial data architectures that can’t build context awareness at scale, these organizations won’t be able to control their agents effectively. Among professional services firms, EY is one of the few that have built a knowledge management foundation they can use for AI. I haven’t seen such a capability mentioned at Capgemini.

Yann LeCun at Meta, a key machine learning researcher, believes the LLM paradigm alone will not produce human-level machine intelligence. He spoke recently after he and six other data scientists and engineers were awarded the 2025 Queen Elizabeth Prize for Engineering.
The need for a cultural shift to semantic graphRAG
A culture shift from monolithic to hybrid AI has to happen before enterprises tackle multiple agents in earnest. Most AI startups focus on probabilistic approaches and disregard the deterministic side of modeling. That is the polar opposite of the predominant attitude in AI research during the 1980s, which focused entirely on symbolic AI: knowledge representations and rules.
In the 2020s, knowledge representation and rules–tried-and-true deterministic methods of information sharing and governance–are best developed and shared using a semantic graph approach. A semantic graph by its very nature provides a desiloing, disambiguating, true semantic layer. Retrieval-augmented generation can therefore benefit at scale from logically constructed, expert-in-the-loop context engineering.
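To make the idea concrete, here is a toy sketch of what a semantic graph provides: every fact is a subject-predicate-object triple, and one pattern query can traverse what were formerly siloed records. The `acme:` identifiers are invented for illustration; a production system would use a standards-based RDF store queried with SPARQL (via a library such as rdflib), not this hand-rolled structure.

```python
# Toy semantic graph: a set of subject-predicate-object triples.
# All identifiers below are hypothetical examples, not a real ontology.
TRIPLES = {
    ("acme:Invoice_17", "rdf:type", "fibo:Invoice"),
    ("acme:Invoice_17", "acme:issuedBy", "acme:SupplierCo"),
    ("acme:SupplierCo", "rdf:type", "org:Organization"),
    ("fibo:Invoice", "rdfs:subClassOf", "acme:FinancialDocument"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return sorted((ts, tp, to) for (ts, tp, to) in TRIPLES
                  if s in (None, ts) and p in (None, tp) and o in (None, to))

# Disambiguation in miniature: everything known about one entity,
# regardless of which source system the facts came from.
print(query(s="acme:Invoice_17"))
```

A retrieval layer built over such a graph can hand an LLM explicit, curated context instead of raw text chunks, which is the essence of the graphRAG approach described above.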
Getting points like these across to typical data science and engineering teams, much less to most executives, is a major challenge. Agentic AI noise predominates, and little attention is paid to knowledge-first hybrid approaches. Most AI market participants think the context engineering behind a semantic layer can be done the same way LLMs were built. Maslow’s hammer comes to mind here: “If the only tool you have is a hammer, you tend to see every problem as a nail.”
Contrarian AI: Fixing the data quality problem with a knowledge-first, hybrid approach
In the 2020s, you don’t want to be one of the many who merely focus on doing things one way. You don’t want to be one of the 8,500 startups imitating each other. If the dotcom boom and bust is any guide, more than half of these startups will fail once the AI bubble bursts.
Instead, you want to be one of the few who are curious about doing things in several ways. You want to develop creative approaches to solving the difficult problems that the software industry has most often overlooked over the past 30 years.
The paradigm shift underway makes this the right moment for substantive change. You need both the yin and the yang: a hybrid approach to AI. You want some things to be deterministic; you want to be able to say, absolutely, you can’t do this but you can do that.
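The deterministic side can be as plain as a policy table that an agent runtime consults before acting. The sketch below is a minimal illustration of that idea; the roles and actions are invented, and a real system would express such rules in a governed, standards-based form rather than in hard-coded sets.

```python
# Hedged sketch of a deterministic guardrail layer for agents.
# Role and action names are hypothetical examples.
DENY = {
    ("junior_agent", "delete_customer_record"),
}
ALLOW = {
    ("junior_agent", "read_customer_record"),
    ("senior_agent", "delete_customer_record"),
}

def permitted(role, action):
    """Deny rules win; anything not explicitly allowed is refused."""
    if (role, action) in DENY:
        return False
    return (role, action) in ALLOW

print(permitted("junior_agent", "read_customer_record"))    # True
print(permitted("junior_agent", "delete_customer_record"))  # False
```

The key design choice is default-deny: the agent can never improvise its way into an action no one authorized, which is exactly the "you can't do this but you can do that" determinism the hybrid approach calls for.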
Findability requires rich metadata that can help agents navigate to the right answer. Unfortunately, data science teams are in the habit of stripping explicit, relationship-rich information out of data, rather than enriching it.
Rich, standards-based, intricately and logically connected knowledge plus instance data fuels the neuro-symbolic AI approach, grounding neural nets with symbolic AI, or knowledge representation. That’s a blending of pattern-recognition facility with desiloing, disambiguation, and reasoning at scale.
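The grounding pattern described above can be sketched in a few lines: a neural component proposes candidates by similarity, and symbolic knowledge accepts or rejects them. Here the "neural" scorer is a crude character-overlap stand-in for an embedding model, and the class assignments are invented; this is an illustration of the division of labor, not an implementation.

```python
# Minimal neuro-symbolic sketch: pattern recognition proposes,
# symbolic knowledge disposes. All entity names are hypothetical.
KNOWN_CLASSES = {
    "acme:Aspirin": "acme:Drug",
    "acme:Ibuprofen": "acme:Drug",
    "acme:Aspirin_Corp": "acme:Company",
}

def neural_score(query, entity):
    # Stand-in for embedding similarity: crude character overlap.
    q = set(query.lower())
    return len(q & set(entity.lower())) / len(q)

def grounded_answers(query, required_class):
    # Rank by the fuzzy score, then filter with a hard symbolic constraint.
    ranked = sorted(KNOWN_CLASSES, key=lambda e: neural_score(query, e),
                    reverse=True)
    return [e for e in ranked if KNOWN_CLASSES[e] == required_class]

# "acme:Aspirin_Corp" scores high on surface similarity but is excluded
# because the graph says it is a company, not a drug.
print(grounded_answers("aspirin", "acme:Drug"))
```

The deterministic filter is what keeps a statistically plausible but semantically wrong answer out of the result set: disambiguation the neural component cannot do on its own.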

The result can be quite powerful when these two come together. That’s the Third Wave of AI.
