Building internal expertise around big data in a large organization is a major competitive advantage. However, it can be a difficult process due to compliance requirements and the necessity of operating at global scale from day one. In this episode Jesper Søgaard and Keld Antonsen share the story of starting and growing the big data group at LEGO. They discuss the challenges of being at global scale from the start, hiring and training talented engineers, prototyping and deploying new systems in the cloud, and what they have learned in the process. This is a useful conversation for engineers, managers, and leadership who are interested in building enterprise big data systems.
Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at promo.linode.com/dataengineeringpodcast or use the code dataengineering2018 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning-fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- To help other people find the show please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Keld Antonsen and Jesper Søgaard about the data infrastructure and analytics that powers LEGO
- How did you get involved in the area of data management?
- My understanding is that the big data group at LEGO is a fairly recent development. Can you share the story of how it got started?
- What kinds of data practices were in place prior to starting a dedicated group for managing the organization’s data?
- What was the transition process like, migrating data silos into a uniformly managed platform?
- What are the biggest data challenges that you face at LEGO?
- What are some of the most critical sources and types of data that you are managing?
- What are the main components of the data infrastructure that you have built to support the organization’s analytical needs?
- What are some of the technologies that you have found to be most useful?
- Which have been the most problematic?
- What does the team structure look like for the data services at LEGO?
- Is that reflected in the types and number of systems that you support?
- What types of testing, monitoring, and metrics do you use to ensure the health of the systems you support?
- What have been some of the most interesting, challenging, or useful lessons that you have learned while building and maintaining the data platforms at LEGO?
- How have the data systems at LEGO evolved over recent years as new technologies and techniques have been developed?
- How does the global nature of the LEGO business influence the design strategies and technology choices for your platform?
- What are you most excited for in the coming year?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- LEGO Group
- ERP (Enterprise Resource Planning)
- Predictive Analytics
- Prescriptive Analytics
- Center Of Excellence
- Continuous Integration
- Apache NiFi