What is the Future of Streaming Data?
What’s the next big thing in the future of streaming data? In this episode, Greg DeMichillie (VP of Product and Solutions Marketing, Confluent) talks to Kris about the future of stream processing in environments where the value of data lies in an organization’s ability to intercept and interpret it.
Greg explains that organizations typically focus on the infrastructure containers themselves, and not on the thousands of data connections that form within them. When they finally realize that they don't have a way to manage the complexity of these connections, a new problem arises: how do they approach managing such complexity? That’s where Confluent and Apache Kafka® come into play: they offer a consistent way to organize this seemingly endless web of data, so organizations don't have to face the daunting task of figuring out how to connect their shopping portals or jump through hoops trying different ETL tools on various systems.
As more companies seek ways to manage this data, they are asking some basic questions:
- How do we do it?
- Do best practices exist?
- How can we get help?
The next question, for companies that have already adopted Kafka, is a bit more complex: "What about my partners?" For example, companies with inventory management systems use supply chain systems to track product creation and shipping. As a result, they need to decide which emails to update, whether they need to write custom REST APIs to sit in front of Kafka topics, and so on. Advanced use cases like these raise additional questions about data governance, security, data policy, and PII, forcing companies to think differently about data.
Greg predicts this is the next big frontier as more companies adopt Kafka internally. And because they will have to think less about where data is stored and more about how data moves, they will have to solve problems that make managing all that data easier. If you're an enthusiast of real-time data streaming, Greg invites you to attend Kafka Summit (London) in May and Current (Austin, TX) for a deeper dive into the world of Apache Kafka-related topics, now and beyond.
EPISODE LINKS
- What’s Ahead of the Future of Data Streaming?
- If Streaming Is the Answer, Why Are We Still Doing Batch?
- All Current 2022 sessions and slides
- Kafka Summit London 2023
- Current 2023
- Watch the video version of this podcast
- Kris Jenkins’ Twitter
- Streaming Audio Playlist
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Chapters
1. Intro (00:00:00)
2. How did Greg get started with event streaming? (00:07:11)
3. What is the value of data streaming in Apache Kafka? (00:13:22)
4. Event logs vs REST APIs (00:18:45)
5. What are the stages of Kafka adoption? (00:21:44)
6. What is the next big frontier in Kafka adoption? (00:25:41)
7. How do we get to the next stage of streaming data faster? (00:33:01)
8. It's a wrap! (00:39:56)