Kevin Cohen on Neum AI - Weaviate Podcast #70!
Manage episode 382325910 series 3524543
Hey everyone! Thank you so much for watching the 70th episode of the Weaviate Podcast with Neum AI CTO and Co-Founder Kevin Cohen! I first met Kevin when he was debugging an issue with his distributed node utilization, and I have since learned so much from him about how he sees the space of data ingestion, also commonly referred to as ETL for LLMs!

There are so many interesting parts to this, from the general flow of data connectors, chunkers and metadata extractors, and embedding inference, to the last mile of importing the vectors into a vector DB such as Weaviate! I really loved how Kevin broke down the distributed messaging queue and the system design for orchestrating data ingestion at massive scale, such as dealing with failures and optimizing the infrastructure-as-code setup. We also discussed new use cases with quadrillion-scale vector indexes and the role of knowledge graphs in all this!

I really hope you enjoy the podcast! Please check out this amazing article from Neum AI: https://medium.com/@neum_ai/retrieval-augmented-generation-at-scale-building-a-distributed-system-for-synchronizing-and-eaa29162521

Chapters
0:00 Check this out!
1:18 Welcome Kevin!
1:58 Founding Neum AI
6:55 Data Ingestion, End-to-End Overview
9:10 Chunking and Metadata Extraction
14:20 Embedding Cache
16:57 Distributed Messaging Queues
22:15 Embeddings Cache ELI5
25:30 Customizing Weaviate Kubernetes
38:10 Multi-Tenancy and Resource Allocation
39:20 Billion-Scale Vector Search
45:05 Knowledge Graphs
52:10 Y Combinator Experience
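To make the ingestion flow discussed in the episode concrete, here is a minimal sketch of the stages mentioned above: connector, chunker, metadata extractor, embedding inference, and the final import step. This is an illustrative toy, not Neum AI's or Weaviate's actual API; all function names are hypothetical, and the "embedding" is a stand-in for a real model call.

```python
# Toy ingestion pipeline: connector -> chunker -> metadata -> embed -> import.
# All names are hypothetical; embed() stands in for real model inference.

def connect(source_docs):
    """Data connector: yield raw documents from a source."""
    yield from source_docs

def chunk(text, size=40):
    """Chunker: split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def extract_metadata(doc_id, chunk_index):
    """Metadata extractor: attach provenance to each chunk."""
    return {"doc_id": doc_id, "chunk": chunk_index}

def embed(text):
    """Stand-in for embedding inference; a real system calls a model here."""
    return [float(ord(c)) for c in text[:4]]

def ingest(source_docs):
    """Run the full pipeline; returns objects ready for a vector DB import."""
    objects = []
    for doc_id, text in enumerate(connect(source_docs)):
        for i, piece in enumerate(chunk(text)):
            objects.append({
                "text": piece,
                "metadata": extract_metadata(doc_id, i),
                "vector": embed(piece),
            })
    return objects
```

In a production setup like the one Kevin describes, each stage would instead publish to a distributed messaging queue so that failures in one stage can be retried without re-running the whole pipeline.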