
Software engineer and architect André Lindenberg recently shared the following points in a LinkedIn post:
“The most interesting AI architectures don’t start with models, they start with an operating system. At the center is an ontology layer that turns fragmented departmental data into a shared semantic contract that humans and agents can use to reason. Think MS‑DOS to Windows: the real breakthrough wasn’t a killer app, it was the integration layer that let everything talk and act together. Get that ontology right, and you get faster decisions, consistent metrics, and AI agents that actually understand your business instead of just your tables.”

André Lindenberg, LinkedIn, March 6, 2026, https://www.linkedin.com/feed/update/urn:li:activity:7435429696395419648. Used with permission.
Ontologies are conceptual, self-describing graph data models that connect data across systems, and they can scale shared FAIR (findable, accessible, interoperable, reusable) data and the shared understanding that goes with it across supply chains. For this reason, sound ontologies are key to building a reliable knowledge foundation for interoperable digital twins.
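The self-describing quality can be illustrated with a toy triple store. The `ex:` terms below are hypothetical stand-ins for a real vocabulary; `rdf:type`, `rdfs:comment`, and `owl:Class` are genuine RDF/RDFS/OWL terms.

```python
# A minimal sketch: ontology facts as subject-predicate-object triples.
# The ex: class and property names are illustrative, not from any real standard.
triples = [
    ("ex:Order", "rdf:type", "owl:Class"),
    ("ex:Order", "rdfs:comment", "A customer's request to purchase products."),
    ("ex:placedBy", "rdfs:domain", "ex:Order"),
    ("ex:placedBy", "rdfs:range", "ex:Customer"),
    ("ex:order42", "rdf:type", "ex:Order"),
    ("ex:order42", "ex:placedBy", "ex:customer7"),
]

def describe(term, triples):
    """Return every statement about a term -- the graph describes itself."""
    return [(p, o) for s, p, o in triples if s == term]

# An agent can ask the graph what an Order *is*, not just read its rows.
print(describe("ex:Order", triples))
```

The point of the sketch: the schema and the instance data live in the same graph, so any consumer can discover the meaning of a term without out-of-band documentation.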
Lindenberg’s main point here is that the best ontologies clear a path for the best operating systems: “Get that ontology right, and you get faster decisions, consistent metrics, and AI agents that actually understand your business instead of just your tables.”
LLMs and other statistical machine learning methods on their own can’t create shared understanding. Earlier this month, researchers Frédéric Berdoz, Leonardo Rugli, and Roger Wattenhofer of ETH Zurich reported results of a preliminary test of multi-agent consensus-building ability. Their conclusion?
“Even in benign, no-stake settings without Byzantine agents, LLM-agent groups frequently fail to reach valid consensus within the round limit, and performance declines as group size increases. Under adversarial conditions, the likelihood of valid consensus decreases further, with failures primarily resulting from a failure to reach consensus, even within our limited threat model.
“These findings indicate that current LLM agents are not yet reliable social decision-makers: agreement, which is essential for cooperation, delegation, and safety-critical coordination, remains fragile in our controlled, no-stake testbed. Our study is limited by testing only a single Byzantine strategy and two model sizes from one family, and future work should investigate diverse adversarial behaviors, heterogeneous agent populations, and larger-scale deployments.”
This post compares and contrasts different data models that have some level of system or subsystem shared understanding capability. These different data model types all have utility. But unlike standards-based ontologies, most data model types aren’t self-describing or designed for broad interoperability. Therefore, they can’t scale effectively across supply chains.
This kind of universal scaling of open, shared understanding becomes more critical the more we place our trust in multi-agent systems. Ecommerce is a core example. Local or regional interoperability is no longer enough: enterprises have huge partner networks, and agents have to cross boundaries throughout those networks to work effectively.
Database data models
Veteran modelers often embrace and extend, assuming a base level of utility for each model type. When it comes to different database types, for example, Rick Houlihan, Field CTO for JSON Duality at Oracle, observes that “Documents, graphs, time series, vectors, and relations aren’t competing paradigms. They’re complementary projections of the same underlying truth.”
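Houlihan’s “complementary projections” idea can be sketched in a few lines of Python. The fact set and field names below are invented for illustration:

```python
# One underlying fact set, three projections: document, relational, graph.
# The order/sku/customer identifiers are illustrative.
facts = [("order:42", "contains", "sku:A1"),
         ("order:42", "placedBy", "cust:7")]

# Document projection: nest everything under the subject.
doc = {
    "id": "order:42",
    "contains": [o for s, p, o in facts if s == "order:42" and p == "contains"],
    "placedBy": next(o for s, p, o in facts if p == "placedBy"),
}

# Relational projection: one two-column table per predicate.
tables = {}
for s, p, o in facts:
    tables.setdefault(p, []).append((s, o))

# Graph projection: the facts already *are* labeled edges.
edges = facts
```

Each projection is lossless here; what differs is which access pattern (retrieval by key, joins, traversal) it makes cheap.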
This sense of harnessing the power of different database types is evident in GraphRAG, in which complex querying across different repositories and direct retrieval from a semantic graph database management system (GDBMS) are fundamental capabilities for enterprises seeking to sidestep LLM hallucinations.
But GraphRAG also offers similarity search from a vector database store that coexists with the semantic knowledge graph stored in the GDBMS.
In this scheme, the LLM mainly provides the front end, interpreting the user’s need and intent. Graphwise’s Graph AI Suite, for example, gives the user control over the means of retrieving the appropriate answer, either deterministic (direct structured database retrieval) or probabilistic (vector database).
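The deterministic-versus-probabilistic routing choice can be sketched as follows, assuming a toy graph store and vector store; this is an illustration of the pattern, not Graphwise’s actual API:

```python
import math

# Exact entity/predicate lookups go to the graph store (deterministic);
# fuzzy questions fall back to vector similarity (probabilistic).
graph_store = {("acme:Widget", "hasPrice"): "19.99"}            # curated facts
vector_store = {"return policy": [0.9, 0.1],                    # toy embeddings
                "shipping times": [0.2, 0.8]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def answer(entity_predicate=None, query_vec=None):
    if entity_predicate in graph_store:          # deterministic path
        return graph_store[entity_predicate]
    # probabilistic path: nearest neighbor in the vector store
    return max(vector_store, key=lambda k: cosine(vector_store[k], query_vec))

print(answer(("acme:Widget", "hasPrice")))       # graph hit
print(answer(query_vec=[0.85, 0.15]))            # similarity fallback
```

The design point is that the graph path returns a fact or nothing, while the vector path always returns a best guess; giving the user (or agent) control over which path runs is what keeps hallucination risk bounded.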
Application suite data models
Software providers like Oracle and SAP have offered unified data models for decades as a main feature of their proprietary platforms. To this day, these models provide effective barriers to entry and competitive advantage for providers selling to large enterprises.
Most recently, small-to-medium business freemium software provider Odoo has been touting “all the tech in one platform.” That’s a way of saying Odoo provides a suite of applications based on a unified data model. The claim is that end users no longer need to confront application fragmentation; all the applications in the Odoo suite offer (at least some level of) interoperability.
The problem with the application-centric approach to interoperability is that the application landscape and supply networks are constantly changing; the true control capability lives in a semantic layer above any one suite.
UCP: Bare-bones agentic-oriented ecommerce in the wake of Schema.org
Google, Shopify, and Stripe’s evident strategy with the Universal Commerce Protocol (UCP) has been to emulate the success of Schema.org, which saw broad adoption by web developers. According to CMSWire, by 2024, 72 percent of websites on the first page of search results had some kind of Schema markup, often just FAQ, product, organization, and article markup.
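Product markup of the kind CMSWire counted typically looks like the following JSON-LD, emitted here from Python; the product name, SKU, and price are placeholders, while the types and properties (`Product`, `Offer`, `price`, `availability`) are real Schema.org terms:

```python
import json

# Typical Schema.org Product markup as embedded in a web page's
# <script type="application/ld+json"> block. Values are placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "sku": "WID-001",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}
print(json.dumps(product, indent=2))
```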
Agent engine optimization (AEO) substantially changes the demand side of the picture for effective data models. Agents need to take effective, efficient action while mitigating risk.
UCP, on the face of it, is a bold step toward agent-oriented ecommerce. A January 2026 Google blog post puts it this way: “UCP enables seamless commerce journeys between consumer surfaces, businesses, and payment providers. It is built to work with existing retail infrastructure, and is compatible with Agent Payments Protocol (AP2) to provide secure agentic payments support. It also provides businesses flexible ways to integrate via APIs, Agent2Agent (A2A), and the Model Context Protocol (MCP).”
UCP’s promise is that a customer asks an AI assistant to find a product, and it handles discovery, comparison, and checkout autonomously. The protocol layer for that is largely solved — Google, Shopify, and Stripe have built the pipes.
These UCP pipes move data, but pipes alone don’t understand it. The larger ecommerce environment is left undescribed and unconnected.
The major ecommerce problem UCP doesn’t solve
Left to their own devices and without sufficient semantic metadata to guide them contextually, agents spend a lot of time spinning their wheels.
Dinis Cruz, CEO and founder of the startups The Cyber Boardroom and MyFeed-AI, puts it this way: “The problem gets worse with agentic workflows. The ‘just figure it out’ pattern — give an agent a goal and let it iterate until it works — burns through tokens at an astonishing rate. Each iteration sends the full context plus the new attempt. The agent doesn’t know when to stop, because there’s always something to improve. Left unchecked, a single agentic task can consume millions of tokens.”
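The failure mode Cruz describes can be sketched with the guard rail he implies is missing: a hard token budget. The function shape and numbers below are illustrative, not any vendor’s API:

```python
# "Just figure it out" loop with a hard token budget. Each iteration
# resends the full context plus the previous attempt, so cost compounds.
# In practice the per-call cost would come from the model API's usage data.
def agentic_loop(task, attempt_fn, context_tokens=8_000, budget=100_000):
    spent, attempt = 0, None
    while True:
        call_cost = context_tokens + (len(attempt) if attempt else 0)
        if spent + call_cost > budget:
            raise RuntimeError(f"budget exhausted after {spent} tokens")
        spent += call_cost
        attempt, done = attempt_fn(task, attempt)  # one model call
        if done:
            return attempt, spent
```

Without the `budget` check, nothing stops the loop: “done” is the agent’s own judgment, and there is always something to improve.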
Ecommerce ontologies: The rest of the domain modeling puzzle
Several established vocabulary and ontology standards cover core ecommerce domains. For products, GS1 handles identification, Schema.org provides a broad but shallow web content model, while eCl@ss and UNSPSC serve industrial and B2B classification needs. Customer identity is the province of FOAF, vCard, and DPV (for privacy/consent). Supply chain tracking relies on GS1 EPCIS and UN/CEFACT, while FIBO covers financial instruments and transactions. Reviews and sustainability have partial coverage through Schema.org and W3C SOSA/SSN respectively, but both lack depth.
However, many gaps remain. No standard exists for modeling consumer preferences and values — this space is entirely dominated by proprietary systems. There’s also no ontology for multi-merchant comparison sessions or conditional purchasing logic. AI agent authorization and delegation lacks any semantic standard. Merchant data trust and verification, real-time situational context (e.g., a traveler needing a carry-on-sized item quickly), post-purchase lifecycle events like repair and resale, and jurisdiction-aware regulatory compliance are all similarly unaddressed.
The overarching theme: existing standards handle identification and classification well, but the richer, more dynamic semantics needed for intelligent, agent-driven commerce remain largely unbuilt.
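As a small illustration of how far the existing standards reach, a single JSON-LD record can already combine a Schema.org type with a GS1 identifier (`gtin13` is a real Schema.org property), while the gaps listed above — preferences, delegation, trust — have no standard property to fill in at all. Values are placeholders:

```python
import json

# One product record stitching together two existing standards:
# Schema.org for the web-facing model, a GS1 GTIN for identification.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "gtin13": "0123456789012",  # GS1-issued identifier, placeholder value
    "category": "Widgets",      # eCl@ss / UNSPSC codes would need richer
                                # markup (e.g., additionalProperty)
}
print(json.dumps(product))
```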
The work left to be done
The foundational problem, and the largest competitive opportunity, remains the lack of broad, shared understanding. Without a robust, open ontology layer to provide a shared semantic contract, the next generation of AI agents will be left burning through resources and failing to achieve the consensus required for complex consumer journeys.
The true breakthrough for agentic commerce will not be a new killer app, but the underlying knowledge infrastructure that allows all systems—human and artificial—to understand and reason about the business with consistent metrics and trust.
The existing standards, from GS1 for product identification to FIBO for finance, provide islands of semantic clarity. But the remaining gaps in shared semantics are not minor inconveniences; they are indicators of where agents will continue to fail, unless enterprises deploy enough ontologists and knowledge architecture to fill the understanding gaps.
For more information:
Berdoz, Frédéric, Leonardo Rugli, and Roger Wattenhofer. “Can AI Agents Agree?” arXiv preprint, submitted March 1, 2026. https://doi.org/10.48550/arXiv.2603.01213.
Cruz, Dinis. “If You’re Spending a Lot of Money on LLMs, You Have an Engineering Problem.” LinkedIn, March 6, 2026. https://www.linkedin.com/pulse/youre-spending-lot-money-llms-you-have-engineering-problem-dinis-cruz-yq5ie.
Handa, Amit, and Ashish Gupta. “Under the Hood: Universal Commerce Protocol (UCP).” Google Developers Blog, January 11, 2026. https://developers.googleblog.com/under-the-hood-universal-commerce-protocol-ucp/.
Houlihan, Rick. “20 years, we’ve been solving the wrong problem. 🎯 We built document databases because relational was ‘too rigid’ ….” LinkedIn, February 2026. https://www.linkedin.com/posts/rickhoulihan_post-conference-workshops-activity-7422725159616356352-lddD/.
Lindenberg, André. “The most interesting AI architectures don’t start with models, they start with an operating system.” LinkedIn, March 6, 2026. https://www.linkedin.com/feed/update/urn:li:activity:7435429696395419648.
Mishra, Gaurav. “The Growing Importance of Schema.org in the AI Era.” CMSWire, December 11, 2024. https://www.cmswire.com/digital-experience/the-growing-importance-of-schemaorg-in-the-ai-era/.





