With the proliferation of data sources giving a more comprehensive view of the information critical to your business, it is even more important to have a canonical view of the entities that you care about. Is customer number 342 in your ERP the same as Bob Smith on Twitter? Using master data management to build a data catalog helps you answer these questions reliably and simplifies the process of building your business intelligence reports. In this episode the head of product at Tamr, Mark Marinelli, discusses the challenges of building a master data set, why you should have one, and some of the techniques that modern platforms and systems provide for maintaining it.
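The "customer 342 vs. Bob Smith" question above is an instance of entity resolution: deciding whether records from different systems refer to the same real-world entity. A minimal sketch of one common building block, fuzzy name matching, using only the Python standard library (the records, field names, and 0.7 threshold are hypothetical illustrations, not anything from the episode):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lowercase and drop punctuation so "Bob Smith" and "bob smith." compare equal.
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def name_similarity(a: str, b: str) -> float:
    # Similarity ratio in [0, 1]; 1.0 means an exact match after normalization.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Hypothetical records from two disparate systems.
erp_customer = {"id": 342, "name": "Robert Smith"}
twitter_profile = {"handle": "@bsmith", "display_name": "Bob Smith"}

score = name_similarity(erp_customer["name"], twitter_profile["display_name"])
is_candidate_match = score >= 0.7  # threshold is a tunable assumption
```

Production master data management platforms go far beyond this, combining many signals (addresses, identifiers, transaction history) and, as discussed in the episode, machine learning, but the core idea of scoring candidate record pairs against a threshold is the same.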
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- You work hard to make sure that your data is reliable and accurate, but can you say the same about the deployment of your machine learning models? The Skafos platform from Metis Machine was built to give your data scientists the end-to-end support that they need throughout the machine learning lifecycle. Skafos maximizes interoperability with your existing tools and platforms, and offers real-time insights and the ability to be up and running with cloud-based production scale infrastructure instantaneously. Request a demo at dataengineeringpodcast.com/metis-machine to learn more about how Metis Machine is operationalizing data science.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Mark Marinelli about data mastering for modern platforms
- How did you get involved in the area of data management?
- Can you start by establishing a definition of data mastering that we can work from?
- How does the master data set get used within the overall analytical and processing systems of an organization?
- What is the traditional workflow for creating a master data set?
- What has changed in the current landscape of businesses and technology platforms that makes that approach impractical?
- What are the steps that an organization can take to evolve toward an agile approach to data mastering?
- At what scale of company or project does it make sense to start building a master data set?
- What are the limitations of using ML/AI to merge data sets?
- What are the limitations of a golden master data set in practice?
- Are there particular formats of data or types of entities that pose a greater challenge when creating a canonical format for them?
- Are there specific problem domains that are more likely to benefit from a master data set?
- Once a golden master has been established, how are changes to that information handled in practice? (e.g. versioning of the data)
- What storage mechanisms are typically used for managing a master data set?
- Are there particular security, auditing, or access concerns that engineers should be considering when managing their golden master that go beyond the rest of their data infrastructure?
- How do you manage latency issues when trying to reference the same entities from multiple disparate systems?
- What have you found to be the most common stumbling blocks for a group that is implementing a master data platform?
- What suggestions do you have to help prevent such a project from being derailed?
- What resources do you recommend for someone looking to learn more about the theoretical and practical aspects of data mastering for their organization?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Multi-Dimensional Database
- Master Data Management
- EDW (Enterprise Data Warehouse)
- Waterfall Development Method
- Agile Development Method
- Feature Engineering
- Data Catalog
- RDBMS (Relational Database Management System)