Jani Petkova, a Senior Content Manager at Graphwise, wrote an abbreviated and updated version of my August webinar “How to Think About Agentic AI’s Challenges and Opportunities”. She’s distilled the topics discussed and the main messages in a helpful and clear way. Many of the topics covered are of long-term or perennial interest, in any case.

In this post, I’m sharing her distillation and some images of selected relevant slides from that webinar. You can find and watch the full webinar at https://graphwise.ai/event/webinar-how-to-think-about-agentic-ais-challenges-and-opportunities/

AI’s Black Box

When it comes to agentic AI, you are dealing with the classic black box computing scenario where garbage in means garbage out. So, before diving into complex AI initiatives, organizations need to focus on getting their inputs right first. There are some critical elements to focus on to get your data ready.

First up is identification and naming — unified identifiers to disambiguate names, which for most enterprises means upgrading those siloed identity management schemes. Organizations also need to collect and observe what’s happening in their business processes and be scientific about it. A good example is the way Pharma and Healthcare, given the risks they face, measure and compare data throughout their data lifecycle management.
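To make the identification point concrete, here is a minimal sketch of what “unified identifiers to disambiguate names” looks like in practice. The system names, local IDs, and canonical URIs are all hypothetical — the point is that three siloed records resolve to one entity:

```python
# Minimal sketch (hypothetical systems and IDs): map siloed local identifiers
# to one canonical URI so downstream processes and agents can disambiguate.

CANONICAL = {
    # (source system, local id) -> canonical identifier
    ("crm", "CUST-0042"): "https://example.com/id/customer/acme-corp",
    ("billing", "8817"):  "https://example.com/id/customer/acme-corp",
    ("support", "acme"):  "https://example.com/id/customer/acme-corp",
}

def resolve(system: str, local_id: str) -> str:
    """Return the unified identifier for a siloed record, or raise if unmapped."""
    try:
        return CANONICAL[(system, local_id)]
    except KeyError:
        raise LookupError(f"No canonical mapping for {system}:{local_id}")

# All three silos resolve to the same entity:
assert resolve("crm", "CUST-0042") == resolve("billing", "8817") == resolve("support", "acme")
```

Real enterprises do this with entity resolution tooling and shared URIs rather than a hand-built dictionary, but the contract is the same: one entity, one identifier, many local aliases.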

Then there’s modeling; we’re not just talking about statistical machine learning here. Graph domain modeling and ontologies are essential because without that context, you simply don’t have quality data.

Finally, a refinement and iteration process — feedback loops built into your business processes — enables you to continuously identify opportunities for improvement with your AI agents.

Understanding agents and their risks

Let’s get some context for agents. Robots, for example, are embodied agents with both a physical and an online presence. Disembodied agents lack that physical presence, but the information they work with is comparable.

Robots and agents have been around for decades. Michael Wooldridge, a researcher at Oxford, wrote that effective agent-based systems need two capabilities: they need to operate independently without direct intervention, and they need the ability to represent our best interests while interacting with other humans or systems.

Here’s the big challenge: agents won’t always act in your best interest, so you have to impose controls over them. The less effective your controls are, the more risk there is.

At Defcon 2025, white hats found they could hijack Microsoft’s Copilot Studio agents. They could dump complete CRM databases and do things without any human verification — basically autonomous agents with no real governance. The way agents operate, they can violate all sorts of norms, and you have system-wide exposure with different attack vectors becoming more evident. You’ll see news reports about these security risks emerging because of agents.

Making AI technologies work together

The big challenge with agentic AI is that it needs to work with existing technologies that weren’t designed to integrate with each other. For example, microservices are small, componentized functions that accomplish individual tasks, but all that functionality is trapped in silos. Or digital twins, which are great at simulation and prediction but are rather passive and don’t take action. That’s where agentic AI comes in — it can act.

However, most AI teams focus on probabilistic approaches and disregard the deterministic side of modeling. You need both. You want some things to be deterministic — you want to say absolutely, you can’t do this but you can do that. That’s the neuro-symbolic AI approach, combining neural nets with symbolic AI or knowledge representation.

The real issue is that agents can take action, but how do they know which actions to take on which objects? Digital twins operate at one level of abstraction, microservices at another. They don’t know how to interact with other models properly, and you have functionality trapped everywhere without proper description and disambiguation.

Why knowledge graphs are essential

Knowledge graphs solve this integration problem by providing multiple tiers of abstraction with visibility and transparency between them. You can move from one tier to another and understand how everything connects. Knowledge graphs give you the description and disambiguation capability to make sure microservices are used appropriately, and they enable digital twins to interact across different models.

What’s at stake is building interoperable knowledge models and creating FAIR data (findable, accessible, interoperable and reusable) with interlinked instance data that enables boundary-crossing interaction. This is contextualized computing — building business contexts online so that different parts of the business can be served and can work together across contexts.

IBM recognizes this hybrid approach and talks about how agentic AI evolves microservices through a reasoning layer, dynamic reasoning, and semantic understanding. These are all terms associated with the knowledge graph and modeling approach that Graphwise has embraced for a long time.

The business case for semantic technologies

When you see a challenge like this, there is a huge opportunity for the semantics community — to seize this agent-based paradigm and use it to open the door to enterprise transformation. The more things evolve into these new modes of AI applications, the more we see that we need knowledge and data to drive them. You have to accurately describe and connect the entities, relationships, and domains for each process. The right knowledge has to imbue the right data with machine-readable and human-reviewable actionability.

Agents can take action, but what kind of action? Are we going to have unintended consequences? Gartner is well aware of this. In their data engineering imperatives, they say semantics needs to be at the top of your list of improvements. The goal is to reduce hallucination and bias, but also reduce risk.

The data engineering imperative is to ease integration and delivery of integrated data sets and pipelines. Graphwise has said that many companies spend 40% or more of their IT budget on some form of integration, and it’s all siloed with several different methods being used.

With a knowledge graph approach — semantics and standards — there is a unification that can happen here. Once you have your data assets and knowledge assets unified, you have a DataOps impact. You can do things much more efficiently and reduce the amount you are spending on integration considerably.

MCP: a popular but limited tool

Recently, MCP (Model Context Protocol) has come up a lot. It is an agent enabler with clients and servers. The clients invoke tools and resources and organize prompts, while the server side exposes them in the environment. With those things together, agents can take advantage of what’s out there.
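For a sense of what those client–server exchanges look like on the wire, here is a rough sketch. MCP messages follow JSON-RPC 2.0, and method names like tools/list and tools/call come from the published spec; the tool name and arguments below are hypothetical:

```python
import json

# Rough shape of MCP traffic (JSON-RPC 2.0). The tools/list and tools/call
# method names follow the MCP spec; "lookup_customer" and its arguments are
# hypothetical examples of what a server might expose.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",  # hypothetical tool on some server
        "arguments": {"customer_id": "https://example.com/id/customer/acme-corp"},
    },
}

print(json.dumps(call_tool, indent=2))
```

The client discovers what is available with tools/list, then invokes a specific tool with tools/call — that is the whole enablement story in two message shapes.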

It’s a capability, but there is also a risk. The protocol is called the context protocol, but it just gives you access to whatever context is made explicit, which is far from sufficient — you still have to build your contexts to use agents safely and effectively.

MCP is popular and Graphwise supports it, but it’s not a magic bullet. Some say it’s a recipe for fragmentation because of all the third-party libraries you need. Others say it’s not taking advantage of lessons learned from remote procedure calls (RPC) generally, or gRPC specifically.

There is also the contract problem. When agents cross boundaries between companies, they are contractually obliged to follow deterministic rules — it’s a legal situation. MCP just has simple JSON schemas, so you can’t guarantee that AI interactions follow specified contracts. That’s a deal breaker for regulated organizations.
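The schema-versus-contract gap is easy to illustrate. In this sketch (the invoice fields and the approval rule are invented for illustration), a JSON Schema can validate an invoice’s shape, but the contractual rule has to live somewhere else:

```python
# Illustrative example: a JSON Schema can constrain structure and simple
# ranges on an invoice...
schema = {
    "type": "object",
    "properties": {
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string"},
    },
    "required": ["amount", "currency"],
}

# ...but it cannot express a contractual rule like "amounts over 10,000 EUR
# require prior human approval". That deterministic logic has to be enforced
# in a rules or policy layer outside the schema.
invoice = {"amount": 25000, "currency": "EUR"}
needs_approval = invoice["currency"] == "EUR" and invoice["amount"] > 10000
print("requires human approval:", needs_approval)
```

This is the deal breaker in miniature: the schema passes the invoice as well-formed, while the obligation that actually matters to a regulator never appears in it.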

Why modeling and observation matter

All of this means better observation and modeling will get you a long way toward where you want to go with AI. Something like 40% of agentic AI projects fail, and sometimes that percentage gets much higher. When it comes to who to emulate, the scientific community has been working beneficially with modeling for many years.

Take two-time Nobel laureate Frederick Sanger, who sequenced insulin in cattle, then RNA and DNA. He was insightful about the patterns he was seeing and had a visual model in his mind of how these things could be interacting. That was his hypothesis that he tested throughout his career. His discoveries led to the ability to synthesize and manufacture insulin and to map the entire human genome. Modeling is powerful, and the right scientific observation and measurement techniques are equally powerful.

The critical role of domain expertise

This scientific modeling approach requires deep domain knowledge to be effective. Domain expertise is critical to agent-based systems. Michael Iantosca, senior director of content platforms and knowledge engineering at Avalara, is very vocal about what he’s doing with agents. He acknowledges these aren’t autonomous agents yet — he calls them state machines at this point. They are deterministic, and he’s empowering them with understanding of business processes down to the domain expert level.

One thing Michael says is that domain experts need to understand AI and agentic concepts to know what’s doable. Many organizations need to improve their ability to collaborate here. Thinking about the agent paradigm, you need domain experts in the mix. You need that deep domain expertise to make business process optimization possible with agents. It’s piece-by-piece optimization, and the critical path is the domain experts and what’s happening in departments.

Knowledge graphs as the foundation for multi-agent systems

UC Berkeley did interesting research on multiple agents. You’ll see a lot of hype about multiple agents and swarms of agents. But Berkeley’s research indicated that agents mainly communicate via unstructured text. In a multi-agent environment where agents are communicating significantly with one another, that’s where semantics and the technologies Graphwise offers come into play. Disambiguation becomes critical so agents can be clear in what they are saying. Probabilistic machine learning methods aren’t good enough for that. The neuro-symbolic approach is what’s required here.

There’s power in agentic AI systems, but it’s rooted in knowledge graph capabilities. If you don’t have those capabilities — shared discovery and visibility, disambiguation, an AI-ready database that supports multiple LLMs and MCP — then the rest of these things at the agent level or UI level or action level don’t function effectively.

Management, control, addressing vulnerabilities, transparency, linked domain models — that all happens at the graph level, at the data knowledge layer. It’s a data layer-centric approach to an effective agentic system, one in which knowledge and rich relationships are the connecting glue. We need to get the word out on the power of the graph in this AI world. Agents are really underscoring the need for this capability.

Understanding GraphRAG

That leads us to what a GraphRAG solution is and why it’s important. Retrieval-augmented generation is a process where the LLM can go to a database source, query for context, and the data source responds with the relevant information. RAG is a mechanism for the LLM to get disambiguated, reliable information from structured data sources. Graphwise GraphRAG is knowledge graph-based and uses the graph for reliable context that helps answer questions without as many hallucinations.
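The retrieve-then-generate loop can be sketched in a few lines. Everything here is hypothetical — the fact store stands in for a knowledge graph queried via SPARQL, and the final prompt would go to an LLM rather than being returned:

```python
# A minimal GraphRAG-style loop (all names hypothetical): retrieve graph
# context relevant to the question, then ground the LLM prompt in it.

def retrieve_graph_context(question: str) -> list[str]:
    """Stand-in for a SPARQL query against a knowledge graph."""
    facts = {
        "insulin": [
            "(:Insulin :treats :Diabetes)",
            "(:Sanger :sequenced :Insulin)",
        ],
    }
    return [fact for key, ts in facts.items() if key in question.lower() for fact in ts]

def answer(question: str) -> str:
    context = retrieve_graph_context(question)
    prompt = "Answer using ONLY these facts:\n" + "\n".join(context) + f"\n\nQ: {question}"
    # In a real system this prompt is sent to an LLM; here we just return it.
    return prompt

print(answer("Who sequenced insulin?"))
```

The design point is the “ONLY these facts” instruction: the graph supplies disambiguated, reliable context, and the model is constrained to it, which is what reduces hallucination.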

GraphRAG is part of the Graphwise Graph AI Suite, which offers comprehensive capabilities for building and deploying knowledge graph solutions. The suite includes extensive tooling for metadata modeling, extraction tools, and different standards that are supported. It also provides significant automation — drawing from a long history with natural language processing that helps with disambiguation on the statistical side, while helping you build symbolic AI knowledge representation models and the instance data associated with them.

Real-world integration and applications

Graphwise has clients in different areas — from content to knowledge to data — so they understand the bigger picture, which not many vendors do. Knowledge management clients, data management clients, and content management clients all inform this Graph AI suite with their experience. The suite has entity linking — linkage to external data sources like Wikidata, SNOMED, and other industry-specific ontologies that you can link to with a unique identifier. It’s very empowering.

The suite supports Microsoft 365 and non-Microsoft sources, and it factors in how Copilot is becoming part of the user interface. Graphwise uses Copilot in this context, so you can work within your 365 app and have the Graphwise knowledge graph capability available. Serious integration exists with ServiceNow, Confluence, Salesforce, and other enterprise apps in addition to 365. You can actually find things in SharePoint now if you are using the Graphwise Graph AI Suite.

Wrapping up

Getting agentic AI right comes down to what you put into the system. You need detangled, de-siloed identifiers across all tiers of abstraction. You need systematic knowledge and data lifecycle management — the kind of scientific observation approach that verticals like Healthcare and Pharma have in their blood. And you need semantic deterministic graph models of business operations that complement probabilistic machine learning and language models.

Without these inputs at the data and knowledge layer, you are just putting garbage into the black box.

The good news is that semantic technologies give you the ability to iterate on business process improvement in ways that weren’t possible before. Rather than working at the application layer with BPMS or robotic process automation, you are working at the data layer where real transformation happens. 

This is exciting not just for agentic AI, but for workforce evolution and how organizations can fundamentally improve their operations. The companies that get this right — that invest in proper modeling, observation, and knowledge graphs — will be the ones that make agentic AI work effectively and safely.

Q&As

Question 1: Can you explain what semantic metadata is?

Answer: Semantic metadata has to do with the RDF stack (Resource Description Framework), which is triples. Triples are subjects, predicates and objects. The predicates are the critical component because they’re the relationships between entities. In a relational database, you don’t have that relationship richness. In a graph, you have all this relationship richness, and that’s how you build context.
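A tiny worked example may help. Representing triples as plain subject–predicate–object tuples (the entities and relations below are illustrative), “context” is just the set of relationships touching an entity:

```python
# Triples as (subject, predicate, object). The predicates carry the
# relationship richness described above; entities here are illustrative.
triples = [
    ("ex:AcmeCorp", "ex:headquarteredIn", "ex:Berlin"),
    ("ex:AcmeCorp", "ex:supplies", "ex:WidgetCo"),
    ("ex:WidgetCo", "ex:regulatedBy", "ex:EU"),
]

def context_of(entity: str) -> list[tuple[str, str, str]]:
    """Every triple touching an entity — the 'context' a graph provides."""
    return [t for t in triples if entity in (t[0], t[2])]

for s, p, o in context_of("ex:WidgetCo"):
    print(s, p, o)
```

A relational database would store these as rows in separate tables; the graph keeps the relationships first-class, so context is a traversal rather than a chain of joins.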

Question 2: Is GraphRAG the interconnection between the knowledge graph and the AI agent?

Answer: Many people are finding the Model Context Protocol I talked about very useful for agents to take action. What GraphRAG does is give you access to articulated, contextualized data. You have the contextualized knowledge and data that your company has spent a long time refining and putting in place. So GraphRAG is the mechanism for tapping this full context.

Question 3: What’s a simple way to get started to show the value of using a graph database with agentic architecture?

Answer: The simplest way is to focus on the minimum viable product that you could possibly consider in this context. You can look at Graphwise and its semantic tooling to begin with, because it is user-friendly and you can get started quickly. Don’t start with a really ambitious OWL ontology; start with a linked open vocabulary like the Simple Knowledge Organization System (SKOS) or Schema.org. You can also find many examples online of how to get started with Schema.org.
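As a sense of how lightweight that starting point can be, here is a Schema.org description of an entity as JSON-LD, built as a plain Python dict. The values, including the Wikidata link, are hypothetical:

```python
import json

# A lightweight entry point: describe an entity with Schema.org JSON-LD
# instead of a full OWL ontology. All values here are illustrative; the
# sameAs target is a hypothetical Wikidata entry.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/id/customer/acme-corp",
    "name": "Acme Corp",
    "sameAs": "https://www.wikidata.org/wiki/Q0000000",  # hypothetical link
}
print(json.dumps(org, indent=2))
```

A handful of such descriptions, each with a stable @id and a sameAs link to an external source, is already a small knowledge graph — and a far faster proof of value than authoring ontology axioms on day one.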

Question 4: What is the impact of AI agents on AI for compliance?

Answer: In terms of compliance, it’s usually a lagging indicator. Technology advances first, then compliance happens later. Regulatory agencies need time to get their arms around emerging tech. Here’s an example: years ago, big banks built FIBO (Financial Industry Business Ontology). Just putting it out there made it possible for regulators to latch onto it and imagine the possibilities for real-time reporting on derivatives trades. So regulators are interested in this capability, but you have to think about resources and political circumstances. It’s very situation specific, so it depends.

As mentioned, the webinar has the full storyline and all the slides: https://graphwise.ai/event/webinar-how-to-think-about-agentic-ais-challenges-and-opportunities/
