Content provided by Larry Swanson. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Larry Swanson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.
Margaret Warren

As a 10-year-old photographer, Margaret Warren would jot down on the back of each printed photo metadata about who took the picture, who was in it, and where it was taken. Her interest in image metadata continued into her adult life, culminating in the creation of ImageSnippets, a service that lets anyone add linked open data descriptions to their images.

We talked about:
- her work to make images more discoverable with metadata connected via a knowledge graph
- how her early childhood history as a metadata strategist, her background in computing technology, and her personal interest in art and photography show up in her product, ImageSnippets
- her takes on the basics of metadata strategy and practice
- the many types of metadata: descriptive, administrative, technical, etc.
- the role of metadata in the new AI world
- some of the good and bad reasons that social media platforms might remove metadata from images
- privacy implications of metadata in social media
- the linked data principles that she applies in ImageSnippets and how they're managed in the product's workflow
- her wish that CMSs and social media platforms would not strip the metadata from images as they ingest them
- the lightweight image ontology that underlies her ImageSnippets product
- her prediction that the importance of metadata that supports provenance, demonstrates originality, and sets context will continue to grow in the future

Margaret's bio

Margaret Warren is a technologist, researcher, and artist/content creator. She is the founder and CEO of Metadata Authoring Systems, whose mission is to make the most obscure images on the web findable and easily accessible by describing and preserving them in the most precise ways possible. To assist with this mission, she created a system called ImageSnippets, which anyone can use to build linked data descriptions of images into graphs.
She is also a research associate with the Florida Institute for Human and Machine Cognition, one of the primary organizers of a group called The Dataworthy Collective, and a member of the IPTC (International Press Telecommunications Council) photo-metadata working group and the Research Data Alliance charter on Collections as Data. As a researcher, Margaret's primary focus is the intersection of semantics, metadata, knowledge representation, and information science, particularly around visual content, search, and findability. She is deeply interested in how people describe what they experience visually and in how to capture and formalize this knowledge into machine-readable structures. She creates tools and processes for humans, augmented by machine intelligence. Many of these tools are useful for unifying the many types of metadata and descriptions of images - including the very important context element - into ontology-infused knowledge graphs. Her tools can be used for tasks as advanced as complex domain modeling but can also let image content be shared and published while staying linked to its metadata across workflows.

Learn more and connect with Margaret online:
- LinkedIn
- Patreon
- Bluesky
- Substack
- ImageSnippets
- Metadata Authoring Systems
- personal and art site

IPTC links:
- IPTC Photo Metadata
- Software that supports IPTC Photo Metadata
- Get IPTC Photo Metadata
- Browser extensions for IPTC Photo Metadata

Resource not mentioned in podcast (but very useful for examining structured metadata in web pages):
- OpenLink Structured Data Sniffer (OSDS)

Video

Here’s the video version of our conversation: https://youtu.be/pjoAAq5zuRk

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 21. Nowadays, we are all immersed in a deluge of information and media, especially images. The real value of these images is captured in the metadata about them. Without information about the history of an image, its technical details,
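The linked data descriptions Margaret builds boil down to subject-predicate-object triples about an image. A minimal sketch of that idea, in plain Python: the URIs and property names below are illustrative assumptions, not ImageSnippets' actual vocabulary.

```python
# Each fact about an image is a (subject, predicate, object) triple.
# URIs and property names here are invented for illustration.
triples = [
    ("http://example.org/image/42", "dc:creator", "Margaret Warren"),
    ("http://example.org/image/42", "dc:title", "Sunset over the Gulf"),
    ("http://example.org/image/42", "foaf:depicts", "http://example.org/place/pensacola"),
    ("http://example.org/place/pensacola", "rdfs:label", "Pensacola, Florida"),
]

def describe(subject, triples):
    """Collect every fact about one subject into a simple dict."""
    facts = {}
    for s, p, o in triples:
        if s == subject:
            facts.setdefault(p, []).append(o)
    return facts

print(describe("http://example.org/image/42", triples))
```

Because the object of one triple (the place URI) is the subject of another, descriptions link into a graph rather than staying flat per-image records, which is what makes the images discoverable through connected metadata.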
Ole Olesen-Bagneux

In every enterprise, says Ole Olesen-Bagneux, the information you need to understand your organization's metadata is already there. It just needs to be discovered and documented. Ole's Meta Grid can be as simple as a shared, curated collection of documents, diagrams, and data but might also be expressed as a knowledge graph. Ole appreciates "North Star" architectures like microservices and the Data Mesh but presents the Meta Grid as a simpler way to manage enterprise metadata.

We talked about:
- his work as Chief Evangelist at Actian
- his forthcoming book, "Fundamentals of Metadata Management"
- how he defines his Meta Grid: an integration architecture that connects metadata across metadata repositories
- his definition of metadata and its key characteristic, that it's always in two places at once
- how the Meta Grid compares with microservices architectures and organizing concepts like Data Mesh
- the nature of the Meta Grid as a small, simple, and slow architecture that is not technically difficult to achieve
- his assertion that you can't build a Meta Grid because it already exists in every organization
- the elements of the Meta Grid: documents, diagrams or pictures, and examples of data
- how knowledge graphs fit into the Meta Grid
- his appreciation for "North Star" architectures like Data Mesh, but also how he sees the Meta Grid as a more pragmatic approach to enterprise metadata management
- the evolution of his new book from a knowledge graph book to his elaboration on the "slow" nature of the Meta Grid, in particular how its metadata focus contrasts with faster real-time systems like ERPs
- the shape of the team topology that makes Meta Grid work

Ole's bio

Ole Olesen-Bagneux is a globally recognized thought leader in metadata management and enterprise data architecture.
As VP, Chief Evangelist at Actian, he drives industry awareness and adoption of modern approaches to data intelligence, drawing on his extensive expertise in data management, metadata, data catalogs, and decentralized architectures. An accomplished author, Ole wrote The Enterprise Data Catalog (O’Reilly, 2023). He is currently working on Fundamentals of Metadata Management (O’Reilly, 2025), introducing a novel metadata architecture known as the Meta Grid. With a PhD in Library and Information Science from the University of Copenhagen, his unique perspective bridges traditional information science with modern data management. Before joining Actian, Ole served as Chief Evangelist at Zeenea, where he played a key role in shaping and communicating the company’s technology vision. His industry experience includes leadership roles in enterprise architecture and data strategy at major pharmaceutical companies like Novo Nordisk. Ole is passionate about scalable metadata architectures, knowledge graphs, and enabling organizations to make data truly discoverable and usable.

Connect with Ole online:
- LinkedIn
- Substack
- Medium

Resources mentioned in this interview:
- Fundamentals of Metadata Management, Ole's forthcoming book
- Data Management at Scale by Piethein Strengholt
- Fundamentals of Data Engineering by Joe Reis and Matt Housley
- Meta Grid as a Team Topology, Substack article
- Stewart Brand's Pace Layers

Video

Here’s the video version of our conversation: https://youtu.be/t01IZoegKRI

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 28. Every modern enterprise wrestles with the scale, the complexity, and the urgency of understanding their data and metadata. So, by necessity, comprehensive architectural approaches like microservices and the data mesh are complex, big, and fast.
Ole Olesen-Bagneux proposes a simple, small, and slow way for enterprises to cultivate a shared understanding of their enterprise knowledge, a decentralized approach to metadata strategy that he calls the Meta Grid.

Interview transcript

Larry: Hi,…
Andrea Volpini Your organization's brand is what people say about you after you've left the room. It's the memories you create that determine how people think about you later. Andrea Volpini says that the same dynamic applies in marketing to AI systems. Modern brand managers, he argues, need to understand how both human and machine memory work and then use that knowledge to create digital memories that align with how AI systems understand the world. We talked about: his work as CEO at WordLift, a company that builds knowledge graphs to help companies automate SEO and other marketing activities a recent experiment he did during a talk at an AI conference that illustrates the ability of applications like Grok and ChatGPT to build and share information in real time the role of memory in marketing to current AI architectures his discovery of how the agentic approach he was taking to automating marketing tasks was actually creating valuable context for AI systems the mechanisms of memory in AI systems and an analogy to human short- and long-term memory the similarities he sees in how the human neocortex forms memories and how the knowledge about memory is represented in AI systems his practice of representing entities as both triples and vectors in his knowledge graph how he leverages his understanding of the differences in AI models in his work the different types of memory frameworks to account for in both the consumption and creation of AI systems: semantic, episodic, and procedural his new way of thinking about marketing: as a memory-creation process the shift in focus that he thinks marketers need to make, "creating good memories for AI in order to protect their brand values" Andrea's bio Andrea Volpini is the CEO of WordLift and co-founder of Insideout10. With 25 years of experience in semantic web technologies, SEO, and artificial intelligence, he specializes in marketing strategies. 
He is a regular speaker at international conferences, including SXSW, TNW Conference, BrightonSEO, The Knowledge Graph Conference, G50, and Connected Data and AI Festival. Andrea has contributed to industry publications, including the Web Almanac by HTTP Archive. In 2013, he co-founded RedLink GmbH, a commercial spin-off focused on semantic content enrichment, natural language processing, and information extraction.

Connect with Andrea online:
- LinkedIn
- X
- Bluesky
- WordLift

Video

Here’s the video version of our conversation: https://youtu.be/do-Y7w47CZc

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 27. Some experts describe the marketing concept of branding as "what people say about you after you've left the room." It's the memories they form of your company that define your brand. Andrea Volpini sees this same dynamic unfolding as companies turn their attention to AI. To build a memorable brand online, modern marketers need to understand how both human and machine memory work and then focus on creating memories that align with how AI systems understand the world.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 27 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Andrea Volpini. Andrea is the CEO and the founder at WordLift, a company based in Rome. Tell the folks a little bit more about WordLift and what you're up to these days, Andrea.

Andrea: Yep. So we build knowledge graphs to help brands automate their SEO and marketing efforts using large language models and AI in general.

Larry: Nice. Yeah, and you're pretty good at this. You've been doing this a while, and you had a recent success story that really highlights some of your current interests in your current work. Tell me about your talk in Milan and the little demonstration you did with that.

Andrea: Yeah, yeah, so it was last week at AI Festival,…
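Andrea's practice of representing entities as both triples and vectors can be sketched in a few lines. This is a toy illustration under stated assumptions: the entity names, property names, and the three-dimensional embedding are all invented (in a real system the vector would come from an embedding model), and cosine similarity stands in for whatever retrieval the graph actually uses.

```python
import math

# One entity, kept in two forms at once: symbolic triples for the
# knowledge graph, and a numeric vector for similarity search.
# All names and numbers below are illustrative, not WordLift's data.
entity = {
    "id": "brand:WordLift",
    "triples": [
        ("brand:WordLift", "schema:founder", "person:AndreaVolpini"),
        ("brand:WordLift", "schema:location", "place:Rome"),
    ],
    "vector": [0.12, 0.85, 0.31],  # would come from an embedding model
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A query vector close to the entity's embedding scores near 1.0.
query_vector = [0.10, 0.80, 0.35]
print(round(cosine(entity["vector"], query_vector), 3))
```

The triples answer precise questions (who founded it, where it is), while the vector lets an AI system find the entity from fuzzy, paraphrased input; keeping both views in sync is the point of the dual representation.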
Jacobus Geluk

The arrival of AI agents creates urgency around the need to guide and govern them. Drawing on his 15-year history of building reliable AI solutions for banks and other enterprises, Jacobus Geluk sees a standards-based data-product marketplace as the key to creating the thriving data economy that will enable AI agents to succeed at scale. Jacobus launched the effort to create the DPROD data-product description specification, creating the supply side of the data market. He's now forming a working group to document the demand side: a "use-case tree" specification to articulate the business needs that data products address.

We talked about:
- his work as CEO at Agnos.ai, an enterprise knowledge graph and AI consultancy
- the working group he founded in 2023, which resulted in the DPROD specification to describe data products
- an overview of the data-product marketplace and the data economy
- the need to account for the demand side of the data marketplace
- the intent of his current work: to address the disconnect between tech activities and business use cases
- how the capabilities of LLMs and knowledge graphs complement each other
- the origins of his "use-case tree" model in a huge banking enterprise knowledge graph he built ten years ago
- how use-case trees improve LLM-driven multi-agent architectures
- some examples of the persona-driven, tech-agnostic solutions in agent architectures that use-case trees support
- the importance of constraining LLM action with a control layer that governs agent activities, accounting for security, data sourcing, and issues like data lineage and provenance
- the new Use Case Tree Work Group he is forming
- the paradox in the semantic technology industry now: a lack of standards in a field with its roots in W3C standards

Jacobus' bio

Jacobus Geluk is a Dutch Semantic Technology Architect and CEO of agnos.ai, a UK-based consulting firm with a global team of experts specializing in GraphAI — the combination of Enterprise Knowledge Graphs
(EKG) with Generative AI (GenAI). Jacobus has over 20 years of experience in data management and semantic technologies, previously serving as a Senior Data Architect at Bloomberg and Fellow Architect at BNY Mellon, where he led the first large-scale production EKG in the financial industry. As a founding member and current co-chair of the Enterprise Knowledge Graph Forum (EKGF), Jacobus initiated the Data Product Workgroup, which developed the Data Product Ontology (DPROD) — a proposed OMG standard for consistent data product management across platforms. Jacobus can claim to have coined the term "Enterprise Knowledge Graph" (EKG) more than 10 years ago, and his work has been instrumental in advancing semantic technologies in financial services and other information-intensive industries.

Connect with Jacobus online:
- LinkedIn
- Agnos.ai

Resources mentioned in this podcast:
- DPROD specification
- Enterprise Knowledge Graph Forum
- Object Management Group
- Use Case Tree Method for Business Capabilities
- DCAT Data Catalog Vocabulary

Video

Here’s the video version of our conversation: https://youtu.be/J0JXkvizxGo

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 26. In an AI landscape that will soon include huge groups of independent software agents acting on behalf of humans, we'll need solid mechanisms to guide the actions of those agents. Jacobus Geluk looks at this situation from the perspective of the data economy, specifically the data-products marketplace. He helped develop the DPROD specification that describes data products and is now focused on developing use-case trees that describe the business needs that they address.

Interview transcript

Larry: Okay. Hi everyone. Welcome to episode number 26 of the Knowledge Graph Insights podcast. I am really happy today to welcome to the show, Jacobus Geluk. Sorry, I try to speak Dutch, do my best.…
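To make the supply-side/demand-side idea concrete, here is a hedged sketch of the kind of record a data-product description in the spirit of DPROD might carry, with a pointer to the business use case it serves. The field names and URNs are illustrative assumptions; the actual DPROD specification (which builds on the W3C DCAT vocabulary) defines its own terms.

```python
import json

# Illustrative data-product description: input/output ports on the
# supply side, plus a link to the demand side (a business use case).
# Field names and identifiers are made up, not DPROD's actual terms.
data_product = {
    "id": "urn:example:data-product:customer-360",
    "label": "Customer 360",
    "owner": "team-crm",
    "inputPorts": ["urn:example:dataset:crm-contacts"],
    "outputPorts": ["urn:example:dataset:unified-customers"],
    # The demand side Jacobus describes: the use-case tree node
    # that states the business need this product addresses.
    "useCase": "urn:example:use-case:single-customer-view",
}

print(json.dumps(data_product, indent=2))
```

The point of the `useCase` link is exactly the disconnect Jacobus describes: without it, a marketplace can list what data products exist but not why anyone (human or AI agent) should consume them.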
Rebecca Schneider

Skills that Rebecca Schneider learned in library science school - taxonomy, ontology, and semantic modeling - have only become more valuable with the arrival of AI technologies like LLMs and the growing interest in knowledge graphs. Two things have stayed constant across her library and enterprise content strategy work: organizational rigor and the need to always focus on people and their needs.

We talked about:
- her work as Co-Founder and Executive Director at AvenueCX, an enterprise content strategy consultancy
- her background as a "recovering librarian" and her focus on taxonomies, metadata, and structured content
- the importance of structured content in LLMs and other AI applications
- how she balances the capabilities of AI architectures and the needs of the humans that contribute to them
- the need to disambiguate the terms that describe the span of the semantic spectrum
- the crucial role of organization in her work, and how you don't have to have formally studied library science to do it
- the role of a service mentality in knowledge graph work
- how she measures the efficiency and other benefits of well-organized information
- how domain modeling and content modeling work together in her work
- her tech-agnostic approach to consulting
- the role of metadata strategy in her work
- how new AI tools permit easier content tagging and better governance
- the importance of "knowing your collection": not becoming a true subject matter expert, but at least getting familiar with the content you are working with
- the need to clean up your content and data to build successful AI applications

Rebecca's bio

Rebecca is co-founder of AvenueCX, an enterprise content strategy consultancy. Her areas of expertise include content strategy, taxonomy development, and structured content. She has guided content strategy in a variety of industries: automotive, semiconductors, telecommunications, retail, and financial services.
Connect with Rebecca online:
- LinkedIn
- email: rschneider at avenuecx dot com

Video

Here’s the video version of our conversation: https://youtu.be/ex8Z7aXmR0o

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 25. If you've ever visited the reference desk at your local library, you've seen the service mentality that librarians bring to their work. Rebecca Schneider brings that same sensibility to her content and knowledge graph consulting. Like all digital practitioners, her projects now include a lot more AI, but her work remains grounded in the fundamentals she learned studying library science: organizational rigor and a focus on people and their needs.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 25 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Rebecca Schneider. Rebecca is the co-founder and the executive director at AvenueCX, a consultancy in the Boston area. Welcome, Rebecca. Tell the folks a little bit more about what you're up to these days.

Rebecca: Hi, Larry. Thanks for having me on your show. Hello, everyone. My name is Rebecca Schneider. I am a recovering librarian. I was a trained librarian, worked in a library with actual books, but for most of my career I have been focusing on enterprise content strategy. I typically focus on taxonomies, metadata, structured content, and all of that wonderful world that we live in.

Larry: Yeah, and we both come out of that content background and have sort of converged on the knowledge graph background together kind of over the same time period. And it's really interesting, like those skills that you mentioned, the library science skills of taxonomy, metadata, structured content, and then the application of that in structured content in the content world, how, as you've got in more and more into knowledge graph stuff, how has that background, I guess...…
Ashleigh Faith

With her 15-year history in the knowledge graph industry and her popular YouTube channel, Ashleigh Faith has informed and inspired a generation of graph practitioners and enthusiasts. She's an expert on semantic modeling, knowledge graph construction, and AI architectures and talks about those concepts in ways that resonate both with her colleagues and with newcomers to the field.

We talked about:
- her popular IsA DataThing YouTube channel
- the crucial role of accurately modeling actual facts in semantic practice and AI architectures
- her appreciation of the role of knowledge graphs in aligning people in large organizations around concepts and the various words that describe them
- the importance of staying focused on the business case for knowledge graph work, which has become even more important with the arrival of LLMs and generative AI
- the emergence of more intuitive "talk to your graph" interfaces
- some of her checklist items for onboarding aspiring knowledge graph engineers
- how to decide whether to use a property graph or a knowledge graph, or both
- her hope that more RDF graph vendors will offer a free tier so that people can more easily experiment with them
- approaches to AI architecture orchestration
- the enduring importance of understanding how information retrieval works

Ashleigh's bio

Ashleigh Faith has her PhD in Advanced Semantics and over 15 years of experience working on graph solutions across the STEM, government, and finance industries. Outside of her day job, she is the founder and host of the IsA DataThing YouTube channel and podcast, where she tries to demystify the graph space.

Connect with Ashleigh online:
- LinkedIn
- IsA DataThing YouTube channel

Video

Here’s the video version of our conversation: https://youtu.be/eMqLydDu6oY

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 24.
One way to understand the entity resolution capabilities of knowledge graphs is to picture an old-fashioned telephone operator moving plugs around a switchboard to make the right connections. Early in her career, that's one way that Ashleigh Faith saw the power of knowledge graphs. She has since developed sophisticated approaches to knowledge graph construction, semantic modeling, and AI architectures and shares her deeply informed insights on her popular YouTube channel.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 24 of the Knowledge Graph Insights Podcast. I am super extra delighted today to welcome to the show Ashleigh Faith. Ashleigh is the host of the awesome YouTube channel IsA DataThing, which has thousands of subscribers, thousands of monthly views. I think it's many people's entry point into the knowledge graph world. Welcome, Ashleigh. Great to have you here. Tell the folks a little bit more about what you're up to these days.

Ashleigh: Thanks, Larry. I've known you for quite some time. I'm really excited to be here today. What about me? I do a lot of semantic and AI stuff for my day job. But yeah, I think my main passion is also helping others get involved, understand some of the concepts a little bit better for the semantic space and now the neuro-symbolic AI. That's AI and knowledge graphs coming together. That is quite a hot topic right now, so lots and lots of untapped potential in what we can talk about. I do most of that on my channel.

Larry: Yeah. I will refer people to your channel because we've got only a half-hour today. It's ridiculous.

Ashleigh: Yeah.

Larry: We just talked for an hour before we went on the air. It's ridiculous. What I'd really like to focus on today is the first stage in any of this, the first step in any of these knowledge graph implementations or any of this stuff is modeling. I think about it from a designerly perspective.
I do a lot of mental model discernment, user research kind of stuff, and then conceptual modeling to agree on things.…
Panos Alexopoulos

Any knowledge graph or other semantic artifact must be modeled before it's built. Panos Alexopoulos has been building semantic models since 2006. In 2020, O'Reilly published his book on the subject, "Semantic Modeling for Data." The book covers the craft of semantic data modeling, the pitfalls practitioners are likely to encounter, and the dilemmas they'll need to overcome.

We talked about:
- his work as Head of Ontology at Textkernel and his 18-year history working with symbolic AI and semantic modeling
- his definition and description of the practice of semantic modeling and its three main characteristics: accuracy, explicitness, and agreement
- the variety of artifacts that can result from semantic modeling: database schemas, taxonomies, hierarchies, glossaries, thesauri, ontologies, etc.
- the difference between identifying entities with human-understandable descriptions in symbolic AI and numerical encodings in sub-symbolic AI
- the role of semantic modeling in RAG and other hybrid AI architectures
- a brief overview of data modeling as a practice
- how LLMs fit into semantic modeling: as sources of information to populate a knowledge graph, as coding assistants, and in entity and relation extraction
- other techniques besides NLP and LLMs that he uses in his modeling practice: syntactic patterns, heuristics, regular expressions, etc.
- the role of semantic modeling and symbolic AI in emerging hybrid AI architectures
- the importance of defining the notion of "autonomy" as AI agents emerge

Panos' bio

Panos Alexopoulos has been working since 2006 at the intersection of data, semantics, and software, helping build intelligent systems that deliver value to business and society. Born and raised in Athens, Greece, Panos currently works as a principal educator at OWLTECH, developing and delivering training workshops that provide actionable knowledge and insights for data and AI practitioners.
He also works as Head of Ontology at Textkernel BV in Amsterdam, Netherlands, leading a team of data professionals in developing and delivering a large cross-lingual knowledge graph in the HR and recruitment domain. Panos has published several papers in international conferences, journals, and books, and he is a regular speaker in both academic and industry venues. He is also the author of the O’Reilly book “Semantic Modeling for Data – Avoiding Pitfalls and Dilemmas”, a practical and pragmatic field guide for data practitioners who want to learn how semantic data modeling is applied in the real world.

Connect with Panos online:
- LinkedIn

Video

Here’s the video version of our conversation: https://youtu.be/ENothdlfYGA

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 23. In order to build a knowledge graph or any other semantic artifact, you first need to model the concepts you're working with, and that model needs to be accurate, to explicitly represent all of the ideas you're working with, and to capture human agreements about them. Panos Alexopoulos literally wrote the book on semantic modeling for data, covering both the principles of modeling and the pragmatic concerns of real-world modelers.

Interview transcript

Larry: Hi everyone. Welcome to episode number 23 of the Knowledge Graph Insights podcast. I am really excited today to welcome to the show Panos Alexopoulos. Panos is the head of ontology at Textkernel, a company in Amsterdam that works on knowledge graphs for the HR and recruitment world. Welcome, Panos. Tell the folks a little bit more about what you're doing these days.

Panos: Hi Larry. Thank you very much for inviting me to your podcast. I'm really happy to be here. Yeah, so as you said, I'm head of ontology at Textkernel. Actually, I've been working in the field of data semantics, knowledge graphs, and ontologies for almost 18 years now, even before the era of machine learning,
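Among the non-LLM techniques Panos mentions are syntactic patterns, heuristics, and regular expressions. A toy illustration of that style of extraction, assuming a made-up job-ad sentence and an invented "experience in X" heuristic (Textkernel's actual patterns are not public here):

```python
import re

# Invented example sentence from the HR/recruitment domain.
text = "We are looking for candidates with experience in Python and experience in SPARQL."

# Heuristic (an assumption): the phrase "experience in X" often signals
# a skill mention, where X is a capitalized token.
pattern = re.compile(r"experience in ([A-Z][A-Za-z+#]*)")

skills = pattern.findall(text)
print(skills)  # → ['Python', 'SPARQL']
```

Patterns like this are brittle compared with an LLM, but they are cheap, deterministic, and auditable, which is why they remain useful alongside NLP and LLM-based extraction when populating a knowledge graph.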
Mike Pool

Mike Pool sees irony in the fact that semantic-technology practitioners struggle to use the word "semantics" in ways that meaningfully advance conversations about their knowledge-representation work. In a recent LinkedIn post, Mike even proposed a moratorium on the use of the word.

We talked about:
- his multi-decade career in knowledge representation and ontology practice
- his opinion that we might benefit from a moratorium on the term "semantics"
- the challenges in pinning down the exact scope of semantic technology
- how semantic tech permits reusability and enables scalability
- the balance in semantic practice between 1) ascribing meaning in tech architectures independent of its use in applications and 2) considering end-use cases
- the importance of staying domain-focused as you do semantic work
- how to stay pragmatic in your choice of semantic methods
- how reification of objects is not inherently semantic but does create a framework for discovering meaning
- how to understand and capture subtle differences in meaning of seemingly clear terms like "merger" or "customer"
- how LLMs can facilitate capturing meaning

Mike's bio
Michael Pool works in the Office of the CTO at Bloomberg, where he is working on a tool to create and deploy ontologies across the firm. Previously, he was a principal ontologist on the Amazon Product Knowledge team, and has also worked to deploy semantic technologies/approaches and enterprise knowledge graphs at a number of big banks in New York City. Michael also spent a couple of years on the famous Cyc project and has evaluated knowledge representation technologies for DARPA. He has also worked on tooling to integrate probabilistic and semantic models and oversaw development of an ontology to support a consumer-facing semantic search engine. He lives in New York City and loves to run around in circles in Central Park.
Connect with Mike online: LinkedIn

Video
Here's the video version of our conversation: https://youtu.be/JlJjBWGwSDg

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 22. The word "semantics" is often used imprecisely by semantic-technology practitioners. It can describe a wide array of knowledge-representation practices, from simple glossaries and taxonomies to full-blown enterprise ontologies, any of which may be summarized in a conversation as "semantics." Mike Pool thinks that this dynamic - using a word that lacks precise meaning while assuming that it communicates a lot - may justify a moratorium on the use of the term.

Interview transcript
Larry: Hi everyone, welcome to episode number 22 of the Knowledge Graph Insights podcast. I'm really happy today to welcome to the show Mike Pool. Mike is a longtime ontologist, a couple of decades plus. He recently took a position at Bloomberg. But he made this really provocative post on LinkedIn lately that I want to flesh out today, and we'll talk more about that throughout the rest of the show. Welcome, Mike, tell the folks a little bit more about what you're up to these days.

Mike: Hey, thank you, Larry. Yeah. As you noted, I've just taken a position with Bloomberg and for these many years that you alluded to, I've been very heavily focused on building, doing knowledge representation in general. In the last, let's say, decade or so I've been particularly focused on using ontologies and knowledge graphs in large banks, or large organizations at least, to help organize disparate data, to make it more accessible, break down data silos, et cetera. It's particularly relevant in the finance industry where things can be sliced and diced in so many different ways. I find there's a really important use case in the financial space, but in large organizations in general, in my opinion, for using ontology.
So that's a lot of what I've been thinking about, to make that more accessible to the organization and to help them build these ontologies and utilize th...…
Margaret Warren

As a 10-year-old photographer, Margaret Warren would jot down on the back of each printed photo metadata about who took the picture, who was in it, and where it was taken. Her interest in image metadata continued into her adult life, culminating in the creation of ImageSnippets, a service that lets anyone add linked open data descriptions to their images.

We talked about:
- her work to make images more discoverable with metadata connected via a knowledge graph
- how her early childhood history as a metadata strategist, her background in computing technology, and her personal interest in art and photography show up in her product, ImageSnippets
- her takes on the basics of metadata strategy and practice
- the many types of metadata: descriptive, administrative, technical, etc.
- the role of metadata in the new AI world
- some of the good and bad reasons that social media platforms might remove metadata from images
- privacy implications of metadata in social media
- the linked data principles that she applies in ImageSnippets and how they're managed in the product's workflow
- her wish that CMSs and social media platforms would not strip the metadata from images as they ingest them
- the lightweight image ontology that underlies her ImageSnippets product
- her prediction that the importance of metadata that supports provenance, demonstrates originality, and sets context will continue to grow in the future

Margaret's bio
Margaret Warren is a technologist, researcher, and artist/content creator. She is the founder and CEO of Metadata Authoring Systems, whose mission is to make the most obscure images on the web findable and easily accessible by describing and preserving them in the most precise ways possible. To assist with this mission, she is the creator of a system called ImageSnippets, which can be used by anyone to build linked data descriptions of images into graphs.
She is also a research associate with the Florida Institute for Human and Machine Cognition, one of the primary organizers of a group called The Dataworthy Collective, and a member of the IPTC (International Press Telecommunications Council) photo-metadata working group and the Research Data Alliance charter on Collections as Data. As a researcher, Margaret's primary focus is at the intersection of semantics, metadata, knowledge representation, and information science, particularly around visual content, search, and findability. She is deeply interested in how people describe what they experience visually and how to capture and formalize this knowledge into machine-readable structures. She creates tools and processes for humans, augmented by machine intelligence. Many of these tools are useful for unifying the many types of metadata and descriptions of images - including the very important context element - into ontology-infused knowledge graphs. Her tools can be used for tasks as advanced as complex domain modeling but can also facilitate image content being shared and published while staying linked to its metadata across workflows.

Learn more and connect with Margaret online: LinkedIn, Patreon, Bluesky, Substack, ImageSnippets, Metadata Authoring Systems, personal and art site

IPTC links: IPTC Photo Metadata, Software that supports IPTC Photo Metadata, Get IPTC Photo Metadata, Browser extensions for IPTC Photo Metadata

Resource not mentioned in podcast (but very useful for examining structured metadata in web pages): OpenLink Structured Data Sniffer (OSDS)

Video
Here's the video version of our conversation: https://youtu.be/pjoAAq5zuRk

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 21. Nowadays, we are all immersed in a deluge of information and media, especially images. The real value of these images is captured in the metadata about them. Without information about the history of an image, its technical details,
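The linked-data descriptions Margaret builds in ImageSnippets are commonly expressed in formats like JSON-LD, where a small "@context" maps human-friendly keys to shared vocabulary URIs. The sketch below is a hedged illustration of that general pattern, not ImageSnippets' actual output: the vocabularies chosen (Dublin Core, FOAF), the example URL, and all the values are assumptions made for the example.

```python
import json

# A hypothetical linked-data description of a photograph in JSON-LD style.
# The "@context" ties short keys to well-known vocabulary namespaces, so the
# description stays machine-interpretable even when it travels between systems.
image_description = {
    "@context": {
        "dc": "http://purl.org/dc/elements/1.1/",
        "foaf": "http://xmlns.com/foaf/0.1/",
    },
    "@id": "http://example.org/photos/1234",       # invented identifier
    "dc:creator": "Margaret Warren",
    "dc:description": "Two friends on the porch, summer 1975",
    "foaf:depicts": ["http://example.org/people/alice"],
}

doc = json.dumps(image_description, indent=2)
print(doc)
```

Because the description is plain structured data rather than bytes embedded in the image file, it survives exactly the kind of workflow - CMSs and social platforms stripping embedded metadata - that Margaret wishes would preserve it.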
Jans Aasman

Hybrid AI architectures get more complex every day. For Jans Aasman, large language models and generative AI are just the newest additions to his toolkit. Jans has been building advanced hybrid AI systems for more than 15 years, using knowledge graphs, symbolic logic, and machine learning - and now LLMs and gen AI - to build advanced AI systems for Fortune 500 companies.

We talked about:
- his knowledge graph and neuro-symbolic work as the CEO of Franz
- the crucial role of a visionary knowledge graph champion in KG adoption in enterprises
- the two types of KG champions he has encountered: the magic-seeking, forward-looking technologist and the more pragmatic IT leader trying to better organize their operation
- the AI architectural patterns and themes he has seen emerge over the past 25 years: logic, reasoning, event-based KGs, machine learning, and of course gen AI and LLMs
- how gen AI lets him do things he couldn't have imagined five years ago
- the enduring importance of enterprise taxonomies, especially in RAG architectures
- which business entities need to be understood to answer complex business questions
- his approach to neuro-symbolic AI, seeing it as a "fluid interplay between a knowledge graph, symbolic logic, machine learning, and generative AI"
- the power of "magic predicates"
- a common combination of AI technologies and human interactions that can improve medical diagnosis and care decisions
- his strong belief in keeping humans in the loop in AI systems
- his observation that technology and business leaders are seeing the need for "a symbolic approach next to generative AI"
- his take on the development of reasoning capabilities of LLMs
- how the code-generation capabilities of LLMs are more beneficial to senior programmers and may even impede the work of less experienced coders

Jans' bio
Jans Aasman is a Ph.D.
psychologist and expert in Cognitive Science - as well as CEO of Franz Inc., an early innovator in Artificial Intelligence and provider of Knowledge Graph solutions based on AllegroGraph. As both a scientist and CEO, Dr. Aasman continues to break ground in the areas of Artificial Intelligence and Knowledge Graphs as he works hand-in-hand with numerous Fortune 500 organizations as well as government entities worldwide.

Connect with Jans online: LinkedIn; email: ja at franz dot com

Video
Here's the video version of our conversation: https://www.youtube.com/watch?v=SZBZxC8S1Uk

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 20. The mix of technologies in hybrid artificial intelligence systems just keeps getting more interesting. This might seem like a new phenomenon, but long before our LinkedIn feeds were clogged with posts about retrieval augmented generation and neuro-symbolic architectures, Jans Aasman was building AI systems that combined knowledge graphs, symbolic logic, and machine learning. Large language models and generative AI are just the newest technologies in his AI toolkit.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 20 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Jans Aasman. Jans originally started out as a psychologist and got into cognitive science. For the past 20 years, he's run a company called Franz, where he's the CEO doing neuro-symbolic AI. So welcome, Jans. Tell the folks a little bit more about what you're doing these days.

Jans: We help companies build knowledge graphs, but with the special angle that we now offer neuro-symbolic AI so that we, in a very fluid way, mix traditional symbolic logic and traditional machine learning with the new generative AI. We do this in every possible combination that you could think of.

Larry: Who?

Jans: These applications might be in healthcare or in call centers or in publishing.
It's many, many, many different domains it applies to.

Larry:…
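The neuro-symbolic pattern Jans describes - generative AI interleaved with symbolic logic and a knowledge graph, with humans in the loop - can be sketched at its simplest as a model proposal checked against explicit knowledge before anything reaches the user. Everything below is invented for illustration: `fake_llm` stands in for a real model call, and the tiny "knowledge graph" of drug interactions is a placeholder, not medical data.

```python
# Toy symbolic knowledge: pairs of drugs known (hypothetically) to interact,
# stored in sorted order so lookups are order-independent.
KNOWN_INTERACTIONS = {("aspirin", "warfarin")}

def fake_llm(question):
    # Placeholder for a generative model; always suggests the same drug here.
    return "aspirin"

def symbolic_check(current_drug, suggested_drug):
    # Symbolic layer: reject suggestions that conflict with explicit knowledge.
    pair = tuple(sorted((current_drug, suggested_drug)))
    return pair not in KNOWN_INTERACTIONS

def answer(question, current_drug):
    suggestion = fake_llm(question)
    if symbolic_check(current_drug, suggestion):
        return suggestion
    # Keep a human in the loop rather than letting the model's answer through.
    return "flag for human review"

print(answer("What can I take for a headache?", "warfarin"))
# The interaction check blocks the suggestion and escalates to a human.
```

The design choice mirrors Jans's point about medical diagnosis and care decisions: the generative component proposes, the symbolic component disposes, and uncertain cases are routed to people instead of being answered automatically.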
Juan Sequeda

Knowledge graph technology has been around for decades, but its benefits have so far accrued to only a few big enterprises and tech companies. Juan Sequeda sees large language models as a critical enabler for the broader adoption of KGs. With their capacity to accelerate the acquisition and use of valuable business knowledge, LLMs offer a path to a better return on your enterprise's investment in semantics.

We talked about:
- his work at data.world as principal scientist and head of the AI lab
- the new discovery and knowledge-acquisition capabilities that LLMs give knowledge engineers
- a variety of business benefits that unfold from these new capabilities
- the payoff of investing in semantics and knowledge: "one plus one is greater than two"
- how semantic understanding and the move from a data-first world to a knowledge-first world helps businesses make better decisions and become more efficient
- the pendulum swings in the history of the development of AI and knowledge systems
- his research with Dean Allemang on how knowledge graphs can help LLMs improve the accuracy of answers to questions posed to enterprise relational databases
- the role of industry benchmarks in understanding the return on your investment in semantics
- the importance of treating semantics as a first-class citizen
- how business leaders can recognize and take advantage of the semantics and knowledge work that is already happening in their organizations

Juan's bio
Juan Sequeda is the Principal Scientist and Head of the AI Lab at data.world. He holds a PhD in Computer Science from The University of Texas at Austin. Juan's research and industry work has been at the intersection of data and AI, with the goal of reliably creating knowledge from inscrutable data, specifically designing and building knowledge graphs for enterprise data and metadata management.
Juan is the co-author of the book "Designing and Building Enterprise Knowledge Graphs" and the co-host of Catalog & Cocktails, an honest, no-BS, non-salesy data podcast.

Connect with Juan online: LinkedIn; Catalog & Cocktails podcast

Video
Here's the video version of our conversation: https://youtu.be/xZq12K7GvB8

Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 19. The AI pendulum has been swinging back and forth for many decades. Juan Sequeda argues that we're now at a point in the advancement of AI technology where businesses can fully reap its long-promised benefits. The key is a semantic understanding of your business, captured in a knowledge graph. Juan sees large language models as a critical enabler of this capability, in particular the ability of LLMs to accelerate the acquisition and use of valuable business knowledge.

Interview transcript
Larry: Hi, everyone. Welcome to episode number 19 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Juan Sequeda. Juan is the principal scientist and the head of the AI lab at data.world. He's also the co-host of the really good, popular podcast Catalog & Cocktails. So welcome, Juan. Tell the folks a little bit more about what you're up to these days.

Juan: Hey, very great. Thank you so much for having me. Great to chat with you. So what am I up to now these days? Obviously, knowledge graphs is something that is my entire life of what I've been doing. This was before it was called knowledge graphs. I would say that the last year, year-and-a-half, almost two years now, has been understanding the relationship between knowledge graphs and LLMs.
If people have been following our work, what we've been doing a lot has been on understanding how to use knowledge graphs to increase the accuracy of your chat-with-your-data system, to be able to do question answering over your structured SQL databases and how knowledge graphs increase the accuracy of that. So we can chat about that.

Juan:…
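One simplified way to picture how a knowledge layer improves question answering over SQL databases, in the spirit of Juan's research with Dean Allemang: a small semantic mapping translates business vocabulary into the cryptic physical schema, so generated queries hit the right columns. The table, column names, and mapping below are all invented for this sketch; a real system would derive the mapping from an ontology or knowledge graph rather than a hand-written dict.

```python
import sqlite3

# A toy "enterprise" table with the kind of cryptic names real schemas have.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_nm TEXT, sal_amt REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("Ada", 120.0), ("Lin", 95.0)])

# The "semantic layer": business terms mapped to physical table/column names.
# Without this, a text-to-SQL model has to guess what emp_nm or sal_amt mean.
SEMANTICS = {
    "employee name": ("emp", "emp_nm"),
    "salary": ("emp", "sal_amt"),
}

def query_concept(concept):
    # Resolve the business term first, then generate SQL against the schema.
    table, column = SEMANTICS[concept]
    cur = conn.execute(f"SELECT {column} FROM {table} ORDER BY {column}")
    return [row[0] for row in cur]

print(query_concept("employee name"))  # -> ['Ada', 'Lin']
```

The point of the sketch is the indirection: the question-answering layer never guesses at column meanings, which is the kind of accuracy gain the research measures when a knowledge graph sits between the LLM and the relational database.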