#284 Breaking Down the Monolith - Incentivizing Good Choices - Interview w/ Frederik Nielsen
Please Rate and Review us on your podcast app of choice!
Get involved with Data Mesh Understanding's free community roundtables and introductions: https://landing.datameshunderstanding.com/
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding. Get in touch with Scott on LinkedIn.
Transcript for this episode (link) provided by Starburst. You can download their Data Products for Dummies e-book (info-gated) here and their Data Mesh for Dummies e-book (info-gated) here.
Frederik's LinkedIn: https://www.linkedin.com/in/frederikgnielsen/
In this episode, Scott interviewed Frederik Nielsen, Engineering Manager at Pandora (the jewelry one, not the music one 😅).
Some key takeaways/thoughts from Frederik's point of view:
- Your data technology and architecture choices incentivize certain behaviors. Consider what behaviors you want before you lock yourself into anything.
- Advice to past data mesh self: "construct a data architecture and platform that can adapt to the business requirements and wishes [which] will change over time." Build a composable platform as it's "easier to adapt to changing business requirements." Focus on decentralization features and make it decoupled and composable.
- Trying to go too wide at the start of your data mesh implementation, covering all your domains at once, makes it harder to really find your groove and build momentum.
- Cost transparency can be a big driver for data mesh adoption. Teams want to understand their costs and many organizations are driving cost cutting initiatives. Decomposing the monolithic approach to data means better understanding the cost of individual pieces of data work.
- Relatedly, when teams are responsible for their own costs, it's easier to spot when someone is making cost-related tradeoffs. The tradeoff becomes more tangible, and taking on tech debt becomes a conscious decision rather than an accident.
- When taking a concept like data mesh to the highest levels in the organization, attach it to tangible use cases. Make it worth their while; the 'juice must be worth the squeeze'. Focus on the strategic business goals and priorities.
- It's okay to leverage management consultants. But your data ownership should very clearly be internal - external parties should not own any aspects if you want long-term success. Regarding consultants: "you would rather be driving them than them driving you."
- It's absolutely normal for some teams to be more data mature than others. If teams raise their hands saying they need help with their data work, that's a sign your culture is mature enough for teams to ask for help - and they should get it where possible.
- ?Controversial?: It's potentially better to focus on your more data mature teams first when going with data mesh so you can move faster early.
- If possible, create golden paths or pre-configured approaches for less data mature teams to be able to still create data products.
- It can be hard to show domains why they should move to data mesh. Focusing on use cases is probably the best approach, but finding use cases enticing enough for each domain can be a challenge 😅
- Tying your data initiatives to the company strategic priorities is crucial to get buy-in. E.g. personalization and omni-channel experience - how do you tie your use cases back to what is most important to the business?
- At the heart of it, data mesh should be about driving business outcomes - especially the ones people really care about. Focus on that and you will have a far higher chance of success and getting/maintaining buy-in.
- Make sure you build your data products in a scalable way. That means understanding when you need to put information into separate data products instead of trying to combine it all into one - that is just a mini enterprise data warehouse / microlith.
- If your data team remains quite productive but the backlog keeps growing, as does the time between request and delivery, then your central data team might be a bottleneck. Consider addressing that with something like data mesh.
- ?Controversial?: Less mature domains can get a more "watered down" version of data mesh as they learn to actually manage and own their data. You don't need to start with the most complicated aspects and use cases first. Scott note: this can be a slippery slope
- When mapping out potential use cases, ask how much effort it would take - if it's even possible - to execute them in your existing (non-data mesh) architecture. If it's not possible, data mesh can mean far more data capability for the organization, which can be a great selling point.
- A decentralized architecture can mean cost savings by getting far more fine grained, e.g. shutting off test environments over the weekend or at night. You can find places to be more efficient far more easily.
Frederik started with a bit of background on their initial data mesh journey - and it wasn't great 😅 It was led by management consultants and was focused on real-time data with a very tangible use case. However, two things came from it: 1) a better understanding of what data mesh should actually be used for and 2) buy-in around a very specific use case at the highest levels. So while data mesh was misinterpreted and the use case wasn't the best fit, there was still excitement internally about the term - and, to some extent, about its actual meaning. Making it tangible got people to see the potential benefits.
Cost transparency has been a major driver for data mesh internally according to Frederik. Because the costs in a large monolithic stack are very opaque, decomposing the architecture has led to a far better understanding of the cost of individual pieces of work. Because inflation concerns were a big factor for retail in 2023, there was a bigger focus on cost reductions. Being able to give teams the freedom to take different approaches but making them responsible for the costs has led to better cost efficiency - teams can choose more costly methods but those decisions are more exposed. Also, because you have much finer-grained control, there are far more levers to pull when it comes to cost savings, e.g. shutting off test and dev environments at night or scaling up and down dynamically.
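As a rough illustration of the kind of fine-grained cost lever described above, here is a minimal sketch of a scheduled job that stops non-production compute outside business hours. It assumes AWS and the boto3 client purely for illustration; the episode does not say which cloud, tooling, or tagging scheme is actually in use.

```python
# Minimal sketch (assumed AWS/boto3): stop instances tagged as dev/test
# so they don't run overnight or over the weekend. The tag names and
# region are illustrative assumptions, not details from the episode.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def stop_nonprod_instances() -> list[str]:
    # Find running instances tagged as non-production environments.
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        # A scheduler (e.g. a nightly cron or EventBridge rule) would
        # invoke this outside business hours; a matching job restarts
        # the instances in the morning.
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    stopped = stop_nonprod_instances()
    print(f"Stopped {len(stopped)} non-production instance(s)")
```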
Frederik talked about a common pattern when moving to data mesh: some teams are more data mature than others. There will be plenty of teams that need help when it comes to data mesh, especially with building good data products. They are considering creating a sort of golden path or easy-button approach for less mature teams, making things relatively pre-configured so those teams don't have to make many complex decisions.
When driving buy-in at the wider level for data mesh, Frederik talked about pitching data mesh as an entire organization transformation versus pitching it use case by use case. He believes it's probably better to focus on the use cases, but it can be hard to keep the complete picture of everything you need in view when you are concentrating on specific use cases. It's always a balance between what is needed only for the use case and what is good for the overall company approach to data.
For Frederik, there are two big company strategic priorities: personalization and omni-channel experience (experience across in-store and online). Much of what they have been focusing on is finding use cases that tie into at least one of these priorities, because then there will be executive support. Constantly tying the data work back to what people care about shows an understanding of the business instead of doing data work for the sake of data work. However, these are very big challenges spanning many domains and teams, so doing things in a scalable way and finding the right balance between separate data products while maintaining high interoperability is crucial.
When discussing bottlenecks, Frederik talked about how the sign that the centralized data team had become a bottleneck was the expanding time between a data request and the actual delivery. The backlog was ballooning even though the data team was quite productive. Many people will feel the pain of the increasing time to delivery; leverage that while still showing the team is productive. If you are executing well but aren't succeeding, you need a new strategy.
Frederik talked about the fact that your data technology and architecture decisions will incentivize certain behaviors. A monolithic platform incentivizes monolithic ownership and handing off work, responsibilities, etc. When they introduced Kafka, it enabled them to push ownership upstream to data producers because the new technology allowed data producers to more easily own their data. It's of course difficult to incentivize your desired behaviors, but always think about what you want to happen and try to make that the easy/happy path.
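To make the 'ownership pushed upstream' point a bit more concrete, here is a minimal sketch of a producing domain publishing its own events to a domain-owned Kafka topic using the confluent-kafka Python client. The topic name, event fields, and broker address are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: a producing domain publishes its own events to a
# Kafka topic it owns, rather than handing data off to a central team.
# Topic name, schema, and broker address are illustrative assumptions.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def publish_order_event(order_id: str, status: str) -> None:
    event = {"order_id": order_id, "status": status}
    # The producing team owns the topic and its schema; downstream
    # consumers read from it instead of filing requests centrally.
    producer.produce(
        topic="orders.domain.events",
        key=order_id,
        value=json.dumps(event).encode("utf-8"),
    )
    producer.flush()

publish_order_event("order-123", "shipped")
```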
When it comes to ownership of data, Frederik thinks maturity really matters. When you want to go down the path of data mesh, trying to get every domain to really be advanced with data is just not that realistic. Some teams simply don't see data as their focus, so if they won't leverage much data for analytical or ML/AI use cases, they are less likely to want to own their data - and, quite frankly, less capable of doing so.
Circling back to tangible use cases, Frederik talked about one key use case that saw a big uptake and that they couldn't really have accomplished before going the data mesh route. Being able to tie something to actual impact - whether a business capability or a direct effect on a business metric - really helped people get more interested. Similarly, when trying to find new use cases, the team did a lot of user journey mapping. The data for a user journey lives in many systems, so you need lots of teams participating to make the data available, but it can have a big impact on the business. Many companies probably can't do something that complex in their existing architecture, and you can use that inability to do amazing things in your existing architecture as a selling point.
Learn more about Data Mesh Understanding: https://datameshunderstanding.com/about
Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
All music used in this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf