Data is a critical element of every role in an organization, which is also what makes managing it so challenging. With so many different opinions about which pieces of information are most important, how they need to be accessed, and what to do with them, many data projects are doomed to failure. In this episode Chris Bergh explains how taking an agile approach to delivering value can drive down the complexity that grows out of the varied needs of the business. Building a DataOps workflow that incorporates fast delivery of well-defined projects, continuous testing, and open lines of communication is a proven path to success.
DataKitchen offers the first end-to-end DataOps Platform that empowers teams to reclaim control of their data pipelines and deliver business value instantly, without errors. The platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing and monitoring to development and deployment. It’s DataOps Delivered.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- If DataOps sounds like the perfect antidote to your pipeline woes, DataKitchen is here to help. DataKitchen’s DataOps Platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing and monitoring to development and deployment. In no time, you’ll reclaim control of your data pipelines so you can start delivering business value instantly, without errors. Go to dataengineeringpodcast.com/datakitchen today to learn more and thank them for supporting the show!
- Your host is Tobias Macey and today I’m welcoming back Chris Bergh to talk about ways that DataOps principles can help to reduce organizational complexity
- How did you get involved in the area of data management?
- How are typical data and analytics teams organized? What are their roles and structure?
- Can you start by giving an outline of the ways that complexity can manifest in a data organization?
- What are some of the contributing factors that generate this complexity?
- How does the size or scale of an organization and their data needs impact the segmentation of responsibilities and roles?
- How does this organizational complexity play out within a single team? For example, between data engineers, data scientists, and production/operations?
- How do you approach the definition of useful interfaces between different roles or groups within an organization?
- What are your thoughts on the relationship between the multivariate complexities of data and analytics workflows and the software trend toward microservices as a means of addressing the challenges of organizational communication patterns in the software lifecycle?
- How does this organizational complexity play out between multiple teams?
- For example, between a centralized data team and line-of-business self-service teams?
- Isn’t organizational complexity just ‘the way it is’? Is there any hope of getting out of meetings and inter-team conflict?
- What are some of the technical elements that are most impactful in reducing the time to delivery for different roles?
- What are some strategies that you have found to be useful for maintaining a connection to the business need throughout the different stages of the data lifecycle?
- What are some of the signs or symptoms of problematic complexity that individuals and organizations should keep an eye out for?
- What role can automated testing play in improving this process?
- How do the current set of tools contribute to the fragmentation of data workflows?
- Which set of technologies are most valuable in reducing complexity and fragmentation?
- What advice do you have for data engineers to help with addressing complexity in the data organization and the problems that it contributes to?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
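One of the questions above asks about the role of automated testing in the data lifecycle; tools like Great Expectations (linked below) formalize this practice. As a rough illustration of the idea, here is a minimal, hand-rolled data quality check in Python that could run as a pipeline step before publishing a batch downstream. The record schema and validation rules are hypothetical assumptions, not anything specific to DataKitchen's platform:

```python
# Hypothetical automated data test: validate a batch of order records
# before they flow to downstream consumers. In a DataOps workflow this
# kind of check would run on every pipeline execution, like a unit test.

def validate_orders(rows):
    """Return a list of failed checks for a batch of order records."""
    failures = []
    if not rows:
        failures.append("batch is empty")
        return failures
    for i, row in enumerate(rows):
        # Every record needs a primary key.
        if row.get("order_id") is None:
            failures.append(f"row {i}: missing order_id")
        # Amounts must be present and non-negative.
        amount = row.get("amount")
        if amount is None or amount < 0:
            failures.append(f"row {i}: invalid amount {amount!r}")
    # Primary keys must be unique within the batch.
    ids = [r["order_id"] for r in rows if r.get("order_id") is not None]
    if len(ids) != len(set(ids)):
        failures.append("duplicate order_id values in batch")
    return failures

batch = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": 2, "amount": -5.00},  # bad amount: should be flagged
    {"order_id": 2, "amount": 3.50},   # duplicate id: should be flagged
]
problems = validate_orders(batch)
for p in problems:
    print(p)
```

In a continuous-integration setup (also linked below), a non-empty list of failures would fail the pipeline run and alert the team, rather than letting bad data reach the business.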
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- NASA Ames Research Center
- Conway’s Law
- Random Forest
- K-Means Clustering
- Intuit Superglue
- Master Data Management
- Great Expectations
- Continuous Integration
- Continuous Delivery
- W. Edwards Deming
- The Joel Test
- Joel Spolsky
- DataOps Blog