Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs
Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.
In this episode, we dive into:
• Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.
• How to build generative AI systems that are scalable, reliable, and aligned with user needs.
• Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations, an example of delivering tangible business value through AI.
• The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.
We also include a live demo where Ravin uses Notebook LM to analyze my website, extract insights, and even generate a podcast-style conversation about me. While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.
LINKS
- The livestream on YouTube
- Google Labs
- Ravin's GenAI Handbook
- Breadboard: A library for prototyping generative AI applications
As mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q&As, and guest lectures from industry experts.
Listeners of Vanishing Gradients can get 25% off the course using this special link or by applying the code VG25 at checkout.
All episodes
Episode 47: The Great Pacific Garbage Patch of Code Slop with Joe Reis 1:19:12
Episode 46: Software Composition Is the New Vibe Coding 1:08:57
Episode 45: Your AI application is broken. Here’s what to do about it. 1:17:30
Episode 44: The Future of AI Coding Assistants: Who’s Really in Control? 1:34:11
Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production 1:01:03
Episode 42: Learning, Teaching, and Building in the Age of AI 1:20:03
Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals? 43:51
Episode 40: What Every LLM Developer Needs to Know About GPUs 1:43:34
Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs 1:43:28
Episode 38: The Art of Freelance AI Consulting and Products: Data, Dollars, and Deliverables 1:23:47
Episode 37: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 2 50:36
Episode 36: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 1 1:03:46
Episode 35: Open Science at NASA -- Measuring Impact and the Future of AI 58:13
Episode 34: The AI Revolution Will Not Be Monopolized 1:42:51
Episode 33: What We Learned Teaching LLMs to 1,000s of Data Scientists 1:25:10
Episode 32: Building Reliable and Robust ML/AI Pipelines 1:15:10
Episode 31: Rethinking Data Science, Machine Learning, and AI 1:36:04
Episode 30: Lessons from a Year of Building with LLMs (Part 2) 1:15:23
Episode 29: Lessons from a Year of Building with LLMs (Part 1) 1:30:21
Episode 28: Beyond Supervised Learning: The Rise of In-Context Learning with LLMs 1:05:38
Episode 27: How to Build Terrible AI Systems 1:32:24
Episode 26: Developing and Training LLMs From Scratch 1:51:35
Episode 25: Fully Reproducible ML & AI Workflows 1:20:38
Episode 24: LLM and GenAI Accessibility 1:30:03
Episode 23: Statistical and Algorithmic Thinking in the AI Age 1:20:37
Episode 22: LLMs, OpenAI, and the Existential Crisis for Machine Learning Engineering 1:20:07
Episode 21: Deploying LLMs in Production: Lessons Learned 1:08:11
Episode 20: Data Science: Past, Present, and Future 1:26:39
Episode 19: Privacy and Security in Data Science and Machine Learning 1:23:19
Episode 18: Research Data Science in Biotech 1:12:42
Episode 17: End-to-End Data Science 1:16:04
Episode 16: Data Science and Decision Making Under Uncertainty 1:23:15
Episode 15: Uncertainty, Risk, and Simulation in Data Science 53:30
Episode 14: Decision Science, MLOps, and Machine Learning Everywhere 1:09:01
Episode 13: The Data Science Skills Gap, Economics, and Public Health 1:22:41
Episode 12: Data Science for Social Media: Twitter and Reddit 1:32:45
Episode 11: Data Science: The Great Stagnation 1:45:38
Episode 10: Investing in Machine Learning 1:26:33
Episode 9: AutoML, Literate Programming, and Data Tooling Cargo Cults 1:41:42
Episode 8: The Open Source Cybernetic Revolution 1:05:57
Episode 7: The Evolution of Python for Data Science 1:02:31
Episode 6: Bullshit Jobs in Data Science (and what to do about them) 1:27:01
Episode 5: Executive Data Science 1:48:14
Episode 4: Machine Learning at T-Mobile 1:44:10
Episode 3: Language Tech For All 1:32:33