
Owner and Founder
Contextually LLC
Enterprises are rushing to deploy agentic AI, yet many find themselves stuck — not because the technology isn’t capable, but because the knowledge it needs to function reliably simply isn’t there. Data has been collected in abundance. Meaning has not. The result is AI systems that hallucinate, misfire, and cost far more to operate than they should.
Jessica Talisman has spent her career at the intersection of library science and knowledge engineering, and she sees a clear diagnosis for this problem: enterprises have historically outsourced their knowledge work and underinvested in the infrastructure that turns raw data into structured, reliable meaning. In this conversation, she makes the case that the solution already exists — and that it has been quietly running the world’s largest interconnected knowledge network for decades.
The Knowledge Ownership Crisis
Enterprises have historically outsourced a great deal of their knowledge work — basic metadata, organization, classification — while investing heavily in data collection and storage. What enterprises have not invested in is transforming that data into meaning and context.
Now, as they attempt to build reliable AI systems, enterprises are discovering that they lack the internal skill sets — those of ontologists, taxonomists, knowledge architects — and the infrastructure to own and manage their own knowledge. The transition is proving expensive and slow as a result.
A Librarian’s Mindset as a Solution
Talisman advocates for applying the core principles of library science to enterprise knowledge infrastructure. This means embracing open standards, specifically the W3C semantic technologies: URIs, Linked Data, and the RDF stack. It also means prioritizing service to all users rather than optimizing for internal objectives or vendor monetization. The result is a knowledge architecture that is interoperable and extensible: one that can grow with the organization and remain usable across systems.
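To make those standards concrete, here is a minimal sketch of what “URIs plus RDF” looks like in practice, assuming Python and the rdflib library (neither is prescribed by Talisman, and every name and namespace below is hypothetical):

```python
# A minimal sketch: three facts about a product, each identified by a URI,
# serialized as Turtle so any RDF-aware system can consume them.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.com/kb/")  # hypothetical enterprise namespace

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# URIs make every entity globally addressable; Linked Data means pointing
# at identifiers maintained elsewhere rather than copying them in.
g.add((EX.Widget42, RDF.type, SKOS.Concept))
g.add((EX.Widget42, SKOS.prefLabel, Literal("Widget 42", lang="en")))
g.add((EX.Widget42, SKOS.altLabel, Literal("W-42", lang="en")))
g.add((EX.Widget42, SKOS.exactMatch,
       URIRef("https://vocab.example.org/products/widget-42")))  # invented external vocabulary

print(g.serialize(format="turtle"))
```

The point is the absence of any proprietary tool: the output is plain Turtle that any RDF-aware system can parse, extend, or link against.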
The Tool-First Trap
A persistent obstacle is the application-centric mentality that leads organizations to ask first: what vendor, what tool? This framing misidentifies the problem. The necessary open standards and tooling already exist. The real missing pieces are skill sets and intentional architecture. Chasing vendor solutions before establishing a knowledge foundation tends to entrench lock-in and defer the harder, more important work.
The Critical Human Element
Knowledge is invariably human. Attempts to capture tacit knowledge purely through automation — recording process traces, scraping outputs — are insufficient because they miss the judgement required to decide what constitutes knowledge in the first place. Humans must remain in the loop. This is reflected in current enterprise reality: manual mapping between systems remains widespread, expensive, and fragile.
Context, Cost, and Architecture
The performance and cost of AI systems are directly tied to context quality. The onus is on the user to supply well-structured, high-quality context — and the architecture of that context directly affects token spend. Intentional knowledge structuring is not merely a quality concern; it is a financial one.
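That cost link is easy to see for yourself. Below is a small sketch, assuming Python and OpenAI’s tiktoken tokenizer (both assumptions, and the two context snippets are invented), that counts the tokens consumed by the same facts framed two ways:

```python
# Compare the token cost of the same facts as loose prose vs. structured context.
# tiktoken is OpenAI's tokenizer library; the snippets below are invented examples.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The same facts as a redundant prose dump...
raw = ("Our widget, which some teams call the W-42 and others call Widget 42, "
       "is a kind of product concept, and it is more or less the same thing as "
       "the widget entry in the partner vocabulary.")

# ...versus a de-duplicated, structured framing.
structured = ("Widget42: type=Concept; prefLabel='Widget 42'; altLabel='W-42'; "
              "exactMatch=vocab.example.org/products/widget-42")

print("raw tokens:       ", len(enc.encode(raw)))
print("structured tokens:", len(enc.encode(structured)))
```

The same facts arrive at very different token prices, and because context is resupplied on every call, that difference compounds across an entire workload.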
The knowledge infrastructure problem is not a technology problem. The standards exist. The tooling exists, and much of it is open.
What is missing is the institutional will to treat knowledge as a first-class asset — to hire for it, design for it, and own it.
Organizations that continue to reach for vendor solutions before building this foundation will pay the price repeatedly, in AI systems that underperform because the context fed to them is poor.
The library science community solved this problem at scale, openly, decades ago. The question for enterprises is whether they are willing to learn from this community that has long helped users find and tap into the knowledge they need.
Jessica Talisman’s writings and company site can be found here:
open.substack.com (Intentional Arrangement Substack) and ontologypipeline.com (Company)
What follows is the YouTube video of the March 9th, 2026 interview with Jessica, with the edited transcript directly below.
Edited Interview with Jessica Talisman
Alan Morrison: Hey everybody, we’re online with Jessica Talisman, and I’m Alan Morrison, the GraphRAG Curator. This is another episode of the Curator Podcast. Welcome, Jessica. It’s great to see you.
Jessica Talisman: Thanks, Alan. Great to see you.
Alan Morrison: Yeah, it’s been a while. We’re both enjoying the California weather.
Jessica Talisman: Yeah, it’s been pretty spectacular.
Alan Morrison: Yeah. So, for those who don’t know Jessica, she has been in and around knowledge graphs and such things for over a couple of decades now, and has a background in library science. She knows a lot about libraries, but then she went over to the “dark side” with enterprises, and she’s got tremendous experience on that side too. So, we’re going to talk about those two worlds today. And Jessica, for those who are interested, tell us more about your background, what you’re doing lately, and how people can catch up with you.
Jessica Talisman: I have a background in both teaching—I have an undergraduate degree in history and I’ve done some work in that area—a master’s in teaching, and a master’s in information and library science. My background includes working in what’s called GLAM (galleries, libraries, archives, and museums), so the cultural heritage domain. And I’ve also worked in enterprises, as you said. A lot of my work centers around reliable knowledge infrastructures, which doesn’t happen necessarily through relational databases. It’s about structuring knowledge, knowledge management, and knowledge infrastructures to be useful for enterprises or for libraries.
Right now, I have a couple of threads going. One is Knowledge Graph Academy. Tony Seale, Katariina Kari, and myself created Knowledge Graph Academy, and we just graduated our first cohort of 23 graduates, 23 new ontologists, which is incredible. It was a pretty rigorous course, but we got everyone through successfully, and they loved it. So I’m doing that, and then I also consult.
I’m working with anywhere from a startup to larger enterprises. Mostly what I do is help with architectures and determining the best path forward to start to build and realize these knowledge infrastructures that can benefit AI.
Alan Morrison: So, a lot going on for you, and it seems like it’s good timing. 2026 seems like it’s on the up-and-up over 2025. Is that your impression, too?
Jessica Talisman: Yes, it is my impression, although the transition in the growth period is very painful. I find that a lot of enterprises are at odds with themselves. They know they need context, they know they need meaning, they know they need more than statistical predictions, but they’re trying to figure out the starting point.
If I were to give an analogy, it’s like a tangled ball of yarn, and you’re trying to put your finger on the thread or the end or the starting place of where you can start to build context, but then also it’s helping to support and grow the skill sets from within the organization.
Up until now, organizations have outsourced a lot of that knowledge work, but we’re finding it increasingly critical for organizations to own their knowledge in their own knowledge infrastructure, so they can encode the actual meaning that has been lost in the past. It’s really important for AI, for AI engineers, and for data producers and consumers to understand the actual context within their organizations.
Alan Morrison: That’s interesting how you phrase that. When I heard the word “outsourcing” related to knowledge, it’s surprising that companies would outsource that. But the first thing I thought of was, well, maybe everybody’s just using SAP’s data model or Salesforce’s data model, and these large suites are accumulating all of this stuff and trying to help people make sense of it. Is that sort of what you mean by outsourcing, or do you go further than that?
Jessica Talisman: We can even look at data labelers, which is a thankless job that’s outsourced elsewhere, or knowledge management infrastructures where you just have document stores—oftentimes that is outsourced. Just the fundamental organization of things. I’m also writing a book right now and I have a Substack called Intentional Arrangement where I talk about this quite a bit, which is the outsourcing of these skill sets.
So, yes, we deal with our own data infrastructures. When it comes to knowledge management, there aren’t that many knowledge managers or those types of positions, or even like taxonomists or ontologists, within organizations. Most of them throw their hands up and say, “Well, we don’t have the funds for these types of roles and skill sets within our organization.” Because they’ve gone all-in on data collection and data storage, never taking it to the next stage of, “Let’s make meaning out of all this data we collected.” We’re mostly concerned with our data warehouses, our CRM, and the storage of data points and data streams and data collection, rather than how do we make this useful and transform that data into information with definitions and meaning.
That would mean that you need not only tagging, but you need taxonomies, things like ontologies, metadata—anything within that sort of sphere or ecosystem. Adjacent to this, we’re also seeing the same phenomena with data governance where there aren’t that many people acting as data governors or data stewards within organizations. It’s just an area that organizations have not taken seriously enough to fund those roles.
Alan Morrison: Having gone back with enterprises to the 1990s personally, I remember Nick Carr’s essay “IT Doesn’t Matter,” and basically the message in his writings at that point was, “This stuff is all commoditized, you can just pick a suite vendor and subscribe to a suite. And you really don’t need to spend a lot of attention on that because it’s all commodity.” He just couldn’t have been more wrong about this. And the thing about it was, there was this application-centric mentality at the time, and there still is, really—where it’s all about the application, not about the knowledge or the data.
Jessica Talisman: The first question out of anyone’s mouth when we start discussing architectures and knowledge infrastructures, and how we start with taxonomies or glossaries or metadata or whatever the starting point is, is “What vendor? What tool?” So if the conversation starts there, with some sort of subscription or an application or a platform, what about the skill sets? What about the actual day-to-day work that happens? What’s the precursor to buying a tool? Can you build these things?
We’re starting to see this with some of the murmurings about a “SaaS apocalypse”—is this the end of that sort of model? Now that we can build tools and stacks and infrastructure so much more easily with the assistance of AI, is this something that we can build and support on our own, separate from a vendor or application, right? And I think that’s part of the divide, and when we talk about knowledge infrastructures or knowledge management or how we imbue these systems with meaning, the W3C Semantic Web standards for taxonomies and ontologies have always been open and available. A lot of the tooling and components of the stack are open and freely available.
So when you ask, “What tool should I use?” Well, it’s all right there. It’s not necessarily strung to a tool or connected to a tool or relying on a tool. It’s not proprietary; it’s interoperable, machine-readable, and extensible, and all of the things that go outside of the bounds of a tool.
Alan Morrison: I came from Navy Intelligence, and it was all about the collection lifecycle and just making this information reusable. I’m dating myself, but this was an entirely manual process back then. You were transcribing and getting all the conversations down, and the method of sharing was rigorous because it was manual. And now that people have these tools, they sort of assume that they don’t have to do all of these things anymore. But that’s not exactly right, is it?
Jessica Talisman: No. What’s so fascinating from a socio-technical perspective is that we’re looking for every opportunity to automate before we augment. One of the issues with that in the knowledge space is that knowledge is invariably human. Humans own knowledge. We get to decide what is knowledge and what isn’t. So in that vein, humans have to be a part of knowledge, knowledge infrastructures, and knowledge building. We have to build our own repositories and libraries and archives of knowledge so that it’s structured and can lend context. That human element, I think, is very much a part of it.
Something we’ve been skating around recently, like the context graph hub, is a perfect example: the idea that we can take decision traces and execution traces, these streams of data in isolation, and somehow add context. But the issue is that context exists because you’re anchoring it within a larger compendium of knowledge. That’s the only way to do it.
Alan Morrison: It’s exactly how I reacted to that Gupta and Garg article in December from Foundation Capital. It was, “Okay, so they have a piece of this thing, and really the agent collection piece of this and what the agent can discover and log, but all the rest of the context really has to still be built. You have to have those layers of abstraction. You’re really just sort of building a mirrorworld, and an agent’s not going to do that for you.”
Jessica Talisman: No. And you’re still going to deal with issues like competing duplicates or near duplicates. That’s still going to be a reality coming in through decision traces.
The larger issue is that this is not a new domain. This is an established domain that has existed for quite some time: process and procedural knowledge. So when we talk about context, there’s more to it than decision traces and execution traces.
There are many kinds of traces. But we know, for example, that process mining—full disclosure—has always said it cannot capture tacit knowledge. Another problem is that we think, “Okay, we can automate; we can capture tacit knowledge.” But once tacit knowledge is recorded in writing in any shape or form, even as code, it’s no longer tacit knowledge.
So we say, “Extract it from people’s heads.” The problem is that unless we use Neuralink, you can’t extract it from people’s heads. There’s a necessary human in the loop as part of the knowledge elicitation process.
Alan Morrison: And this gets back to the whole librarian mindset. One of the reasons I wanted to have you on is because you’re building this bridge from library science to data science. Tell us about the library mindset as far as this context graph is concerned. So if librarians were encountering this context graph mentality or situation here, how would they deal with it?
Because they’ve already built this infrastructure, and it’s so frustrating to see that all of this, Dublin Core and so much else, has been built over years and years, and nobody’s paying attention to the fact that it exists and can just be used.
Jessica Talisman: A lot of it goes back to the fact that it’s all open source and widely available. You don’t need a special tool. You just need to build the infrastructure to support it with openly available tooling. But it is a mindset, and I think that’s the most critical thing when librarians approach data problems.
First of all, we decouple certain user data from other streams of data, which is important for protecting security. There’s a real safeguarding of users, a treatment of users that is not first and foremost about monetizing user data, which is the antithesis of what we see in enterprises, where that’s one of the big monetization streams.
But most of all, everything is built in service to users. So, it’s not for ourselves, our own internal objectives as a company, or for other teams; it’s really for all users. So, there’s a democratization within the platform. Then we make everything widely available. In terms of metadata schemas and exchanges, we have very open endpoints, APIs, and crosswalks available so that you can integrate with the system.
Essentially, it’s like a knowledge commons that has boundaries around it in terms of how things can interoperate, but it is also able to separate concerns so that things like decision traces exist, but there are methodologies for resolving and connecting those traces into the larger knowledge infrastructure. And so the treatment of these things, always grounding things in knowledge, is done for the benefit of the end user, and the end user could be an AI agent or a human, and that distinction has existed for a very long time.
Alan Morrison: When you talk about the end user, I think about the beginnings of the web, Tim Berners-Lee, and then the Semantic Web. That’s still a term you use sometimes. Of course it carries some baggage, in that it didn’t really take off the way people expected it to, but it’s still incredibly useful. Is that correct?
Jessica Talisman: I actually had a good little exchange with Veronica about this, because it is confusing for a lot of people. When I reference the Semantic Web, I’m talking about the founding principles of the Semantic Web and keeping it to that. It’s not about the failed realization of that architecture, but about the idea of using URIs as identifiers, the use of Linked Data, and the use of RDF to describe things. These simple constructs exist to substantiate an interoperable, extensible, meaningful architecture. And it’s essentially how we build symbolic AI and neuro-symbolic AI, using some of those principles.
And so if we can all gather around these same standards and principles, then it helps to again imbue knowledge into a system, meaning into a system, beyond just syntactic and just-in-time context within context windows.
Alan Morrison: Well, it seems like it’s a natural for scientists to really think along these lines. Do you find that the scientific community has more of this librarian mindset?
Jessica Talisman: Yes. We see certain domains where these types of infrastructures are critical. The National Library of Medicine is a great example: it helps manage SNOMED, a huge ontology, along with several different vocabularies that together form an RDF graph. That’s widely used.
Same with pharmaceuticals. We used to see it more prevalently across government. It’s more common within EU government structures that these exist because they invest in librarians, and librarians help to architect these knowledge infrastructures to support this type of interoperability.
We also see it in finance. That’s another area we see growing right now, where they’re—if we look over at Capital One, they’re hiring teams of ontologists, JP Morgan Chase, Bloomberg, Vanguard—they’re all investing heavily because they know that it’s absolutely critical for reliable AI. Otherwise, when you have these highly regulated industries, that’s where the rubber hits the road and it’s not an option, because the implications of failing or of incorrect answers from AI are pretty large.
Alan Morrison: And I’m sure that others in other industries are well aware of that. So let’s think about this from a workforce perspective, and let’s think about the industries you didn’t mention and how they can get started.
One of the things I was thinking about was, there are people who’ve been working with process modeling in the enterprise over the years. Has their work been something to build on top of too? I mean, it seems like that’s an artifact that could be used.
Jessica Talisman: Yes—process mining, those artifacts, process modeling—absolutely. There are ways to connect these models. Most recently, I published part three of my ontology series, where I dedicated a lot of time to exactly this. A huge portion is about how we incorporate metadata using Dublin Core. What are the interfaces, the surfaces, where we can support integration and the transition from traditional information and data infrastructures, and help encode meaning in those infrastructures?
It’s not that we’re trying to pull the rug out from one process and exchange it for another, but it’s leveraging what exists. So metadata systems, metadata schemas, serialization/deserialization formats, process knowledge, process mining—anything that exists in that space.
SharePoint is an example. Microsoft published a crosswalk from the SharePoint schema to SKOS, I think in August or September of 2025. So there are tons of levers you can pull from within an organization that are essential to making up the larger knowledge ecosystem. It doesn’t exist just as RDF, and it doesn’t exist in isolation. It’s multimodal; we can think of it that way within an organization. It has to be…
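As a sketch of what such a crosswalked term can look like once it lands in SKOS, here is a hypothetical SharePoint-style term recast as a SKOS concept, with Dublin Core carrying its provenance (Python and rdflib are assumed; the term set, URIs, and date are invented):

```python
# A hypothetical SharePoint term recast as a SKOS concept, with Dublin Core
# recording where the term came from and when it was mapped.
from datetime import date
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, SKOS

EX = Namespace("https://example.com/taxonomy/")  # invented enterprise namespace

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)
g.bind("dcterms", DCTERMS)

# The term itself, placed in a hierarchy.
g.add((EX.Invoicing, SKOS.prefLabel, Literal("Invoicing", lang="en")))
g.add((EX.Invoicing, SKOS.broader, EX.Finance))

# Dublin Core provenance: the source system and the mapping date.
g.add((EX.Invoicing, DCTERMS.source, Literal("SharePoint term set: AP/Invoicing")))
g.add((EX.Invoicing, DCTERMS.created, Literal(date(2025, 9, 1))))

print(g.serialize(format="turtle"))
```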
Alan Morrison: So it seems like enterprises have had these fits and starts of trying to systematize process. Not too long ago we had Robotic Process Automation, for example, which was big around 2018, 2019. So basically running these macros, recording what you’re doing, going from app to app. It’s so fragmented in today’s enterprise environment. You’ve got dozens of different apps you’re working across, and it’s so much trouble just to try to get a workflow in place.
And then you’re sort of trying to record the macro in an RPA sense so you don’t have to do as much rote work yourself. But then those things could be so brittle that they don’t work for a long time if you ever get them to work.
Jessica Talisman: And I will tell you that at every single organization I’ve been at for the past 10 years, I have not been in an organization where manual mapping was not a thing. That’s the reality. Within most organizations, people are hand-mapping across systems still and calling it “stitching.” Right. So, we have processes in place. We may pretend like they’re automated. Yes, they may be automated after you hand-map things, but the reality is that no one’s really nailed it. No one’s really been able to handle things at scale where they can reliably integrate using automation.
Alan Morrison: Well, it seems like if you’re going from scientific libraries over to the data science side of things, there’s this whole historical record that needs to be accumulated, all the provenance and the lineage that needs to be maintained. You know, there are so many habits that a scientific librarian might be able to share with data scientists, for example, to let them know, “This is how you build this record and make it reusable.”
Jessica Talisman: Well, and that’s the thing: it’s always the record. In library science, the record comes first. And a lot of that is derived because these systems first and foremost are bibliographic record systems. The bibliographic records are encoded in a format called MARC 21 with something called RDA, which is Resource Description and Access, and there’s a very specific format.
None of those records are natively RDF, but each record is available in a number of different formats, including RDF. So the idea is to have a canonical format and then ways to transform records into the various formats that make them useful. Otherwise, how do you connect all of that wonderful provenance: the decisions, the traces or whatever you want to call them, the lineages? How do you translate that into a system that isn’t prepared to capture or encode those things?
What I’ve been saying for a while for the past two years is I think the real opportunity here is the more people that understand, for example, ontologies, RDF, taxonomies, metadata—this sort of symbolic world—the more opportunity we have for innovation.
A huge opportunity for innovation is in ETL and ELT processes, in transformation architectures that are less brittle and less manual, that help make files and records available in more than one format. Because the reality is we’re all using different formats and different coding strategies, and that creates a disconnect: within our own organizations, we’re speaking different languages.
Alan Morrison: So, when you’re talking to data scientists and engineers and they’re the ones that are curious about this symbolic approach, what are they saying? What are they asking about?
Jessica Talisman: The first is always “What tool? What vendor?”, which is telling of our reliance on the vendor lock-in ecosystem: the idea that you have to choose a team and then buy into that team or its methodology. So that’s the first question.
The second thing is, “No one wants to do this.” It’s the elephant in the room: the knowledge management. And I’m calling it knowledge management because essentially that’s what it is: it’s how we manage our knowledge and transform it into context.
And then the third thing is, “Okay, if we can get people on board, because right now this is what we’re doing,” and it’s usually some process that involves complex queries in SQL tables or across SQL tables, “then if we go this route, what’s the cost?” So: the tool question; the socio-technical piece that no one wants to do, the hot potato; and then, “How much is it going to cost?”
Alan Morrison: I would kind of get depressed and discouraged after those kinds of conversations, but you don’t seem to be depressed or discouraged about it. So, you’re managing to have follow-on discussions with these people, and are you bringing some of them along? Some of them are really diving into this, aren’t they?
Jessica Talisman: Yeah, I think so. I approach it as typical: what’s unknown is usually what we’re afraid of. We’re all afraid of the unknown. And so, a big part of it is education. The second part is being able to touch and feel. Without ready demos or ways to show and exhibit what this type of architecture looks like and what it means….
For example, if I were to show you an environment and say, “Okay, your SPARQL query of this ontology can become a RESTful API call. As the shape of the graph and the things within it change, you’re going to get more or fewer results. But you can have stability by querying a knowledge graph, saving that query as a RESTful API call, and then populating an environment, an app, or whatever—even a RAG instance—via this methodology.”
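A minimal sketch of the pattern Talisman describes, assuming Python with rdflib and Flask (none of these choices come from the interview; the ontology file and route are hypothetical): a SPARQL query over a graph, published behind a stable REST endpoint.

```python
# Wrap a SPARQL query over an RDF graph in a stable REST call.
from flask import Flask, jsonify
from rdflib import Graph

app = Flask(__name__)

g = Graph()
g.parse("ontology.ttl", format="turtle")  # hypothetical ontology file

ALIASES_QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?alias
WHERE { ?concept skos:altLabel ?alias . }
"""

@app.route("/api/aliases")
def aliases():
    # Callers never see SPARQL. As the graph changes shape, this same
    # stable call simply returns more or fewer results.
    rows = g.query(ALIASES_QUERY)
    return jsonify([{"concept": str(c), "alias": str(a)} for c, a in rows])

if __name__ == "__main__":
    app.run(port=5000)
```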
A lot of people have not touched and felt these things, have not seen them in practice. It’s interesting how many people… when we start discussing this, I’ll give you a little example of where this happened in real time. Usually I’ll default and say, “Well, Wikidata is a knowledge graph. That’s an RDF knowledge graph. You touch and feel it probably every day.”
My experience at an unnamed company was meeting with the director of a very, very large project where they were trying to create aliases for terms within their agentic workflows and their environments, product- and consumer-facing. Instead of using the Wikidata API, instead of exploring integration with the Wikidata RDF graph to get aliases, they decided to scrape all of Wikidata. Then they stripped all of the semantics, all the RDF, out of the scrape and kept only the groupings of aliases. And those groupings they mapped by hand to what existed in their internal system.
So consider an implementation like that: it’s static, because you’re taking a point-in-time snapshot of what the aliases or alternative labels (synonyms) would be for the things within your system. Then you have the effort of hand-mapping. You have to strip all the data. The cost of that implementation alone was higher than if they had just used the native graph.
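For contrast, here is roughly what the live alternative looks like: fetching aliases on demand from Wikidata’s public SPARQL endpoint instead of scraping and hand-mapping a snapshot. This is a sketch assuming Python and the requests library; the entity queried is just an example.

```python
# Fetch English-language aliases for one entity from Wikidata's live SPARQL
# endpoint. Aliases are exposed as skos:altLabel in the Wikidata RDF graph.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?alias WHERE {
  wd:Q2013 skos:altLabel ?alias .   # Q2013 = the Wikidata project itself
  FILTER(LANG(?alias) = "en")
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "alias-demo/0.1 (example)"},  # Wikidata asks for a UA
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["alias"]["value"])
```

Because the endpoint is queried live, new aliases show up without re-scraping, re-stripping, or re-mapping anything by hand.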
So when we’re actually looking at the cost efficiency of things—and this includes even using the context window—the shape and the structure of your context matter. It’s been shown that your token cost decreases the better your context is structured to convey meaning. So the math is not really justifying the activities within organizations. That raises the question: clearly education is one part, probably one of the most important parts. We need to start educating and upskilling people in place, because if we continue to run into the wall with the same mistakes over and over again, it’s unlikely that those systems are going to improve.
I think that many people have been sitting on their hands and holding their breath hoping that because the models have been improving at such a rapid rate, that one of those updates is going to somehow magically manifest context. But that hasn’t happened yet.
Alan Morrison: No, no. And it really is still a garbage-in, garbage-out scenario where the input is really critical. And you mentioned the cost factor—you’re writing about the financial aspect of these tokens, how they’re quantifying things. You want to talk about the insights that you’ve uncovered in this phase of your writing?
Jessica Talisman: Yes. I’ve been researching and writing about context and context windows. How did we arrive at this term, context? We’re using it as a noun, a verb, and an adjective all at once. It shapeshifts depending on how the word is being used, but ultimately it’s the one thing we all need. And it also happens to be the one thing that’s attached to tokens, or spend, with AI.
So we can’t divorce context from the cost of tokens because that’s how the cost of tokens is estimated or predicted—based on your context and the use of the context window. The truth is that the context window can only hold so much, and the memory of a system is invariably attached to the context window.
So the performance of any AI implementation is then attached to context, which is attached to tokens and spend. But ultimately the onus is on the user or the customer, meaning us, to supply the context so that we have reliable AI.
The problem with that premise is that context itself—the shape, the form, how it needs to be presented, what quality looks like—has not been defined. Right? So we know we all need context. We know it’s this thing, but there’s nothing that says it needs to be in this shape. None of the AI companies are saying, “You need to present it in this format. You need to have it architected structurally so that it presents itself. It’s going to be optimal if you present it as a neuro-symbolic system.”
No one’s saying that explicitly. Context becomes anyone’s guess. Context could be a single word in some instances. It could be a sentence in some instances. It could be a markdown file in some instances. It’s a wild gamut; it’s just who knows?
Alan Morrison: You listen to this from a human and a psychological perspective and you realize the wrong incentives could be easily encouraged with an approach like what you’ve just described. How do we get to the right incentives for users to do the right kinds of things with their information?
Jessica Talisman: I still think it’s a problem in education and upskilling, I really, really do. Because when humans have the information and knowledge, like any other sort of education—if I wanted to study to become a data engineer or data scientist or machine learning engineer, there’s a certain curriculum of knowledge and then I become proficient in that knowledge. It could be on-the-job learning, that’s fine, but the idea is having the foundations of a discipline.
And so, ironically, part of what’s lost—and I think this is leaking into how we value education as a whole as a society—is that we think there’s this concept of ‘remix,’ there’s this concept of immediate information and knowledge that we can get out of a model just by a query, that a model can teach us something on the job, on the spot.
The problem is that there are decades of research. There are decades of papers and implementations of these systems that have been iterative. It’s building on the shoulders of giants. And the irony is that is the nature of knowledge: that it’s iterative and it builds upon itself, but it relies on prior generations. It relies on prior knowledge. That is the nature of knowledge.
So building up human knowledge and reskilling, upskilling people in their seats gives them the opportunity to pivot and to build systems that are intentional, that are really meant to build knowledge infrastructures—not by way of doing a complex SQL query to try to force knowledge out of a relational database, but by intentionally managing your knowledge and building knowledge infrastructures.
Alan Morrison: Yeah, it makes all the sense in the world, but that’s our bias, I’m sure. And so what are you seeing this year that you haven’t seen before that’s encouraging?
Jessica Talisman: I think I was really encouraged teaching the Knowledge Graph Academy, building out that curriculum and teaching it, and having 23 students total go through the program and seeing the light bulbs go off. And realizing that in 42 hours, we were able to get people to a point where each of them could go out into the world and build reliable infrastructures and manage knowledge. So that’s encouraging, and it’s just the fact that humans have an insane capability to learn when given the opportunity. So that’s very encouraging.
Alan Morrison: Well, it just seems like there’s this yawning need at universities to upgrade their curriculums so that there are these courses like the ones that you’re offering. And you could probably count on one hand the number of really strong programs there might be worldwide for this sort of thing. It’s just astounding that there isn’t more.
Jessica Talisman: As an example, I went through a master’s in Information and Library Science, one of the roughly 60 American Library Association-accredited programs. None of them are undergraduate; it’s all graduate curriculum for a reason, because it builds on your prior knowledge within your own domain or discipline. What’s really, really interesting is that each library science program usually has a collection of different focus areas or concentrations. Mine happened to be informatics, and I was fortunate enough to learn Semantic Web techniques. Not all programs have that.
Up until about the ’60s, the field was called information science; the term “library science” didn’t exist. Then once computers started coming online, that’s when the split happened and they said, “Okay, we’re going to call this library science.”
And most programs are still called Library and Information Science, but then information science became its own track or its own domain or discipline. Librarians, if you ask them, will always say, “We’re still information scientists,” because that was the original label for a really long time.
But the point is that these sorts of programs and opportunities exist. Are there enough of them? No. But universities have a huge opportunity, whether it be on a certificate level or baked into curriculum. Having this sort of track, for example, a professor who chooses to integrate knowledge graphs into their curriculum is going to do a huge service for their students. A huge service.
Right now, curriculum is lagging behind at the rate that academia lags behind, which is pretty considerable because it’s really hard to get smart people in a room to make deterministic decisions in the moment. It takes a while.
Alan Morrison: Yeah. Well, this has been a super conversation, Jessica. We covered a lot of ground. Are there things we didn’t cover we should be covering today?
Jessica Talisman: I think something that is important is opening yourself up or understanding things from a systems perspective, not just within your own domain or narrow domain, but actually expanding your toolset is probably one of the biggest opportunities that anyone has at this moment.
And so, studying how library infrastructures work… let me index into that a little bit more. Library infrastructures manage, in real time, incoming new materials and resources and outgoing materials and resources. It’s the largest interconnected network of its kind supported by knowledge graphs worldwide.
If you look up a particular resource or book in any library system, it’s going to serve you results ordered by the closest location and the availability of that resource, which sounds pretty much like decision traces, to a certain extent. So being able to study other systems where this has been done successfully is, I think, one of the best places to start: understanding it by looking at what exists.
Alan Morrison: Yeah, we’re really fortunate that all of this has already been built. And I’m going to say it again: Let’s not reinvent the wheel here. We’ve got these methods. They work. They scale. And we’re not going to get around it. We have to… the elephant’s going to be there, and so we have to deal with the elephant. Thank you so much for walking us through this. It’s been great.