The most complicated part of data engineering is making raw data fit the narrative of the business. Master Data Management (MDM) is the process of building consensus around what information actually means in the context of the business, and then shaping the data to match those semantics. In this episode Malcolm Hawker shares his years of experience working in this domain to explore the combination of technical and social skills necessary to make an MDM project successful, both at the outset and over the long term.
Ascend.io, the Data Automation Cloud, provides the most advanced automation for data and analytics engineering workloads. Ascend.io unifies the core capabilities of data engineering—data ingestion, transformation, delivery, orchestration, and observability—into a single platform so that data teams deliver 10x faster. With 95% of data teams already at or over capacity, engineering productivity is a top priority for enterprises. Ascend’s Flex-code user interface empowers any member of the data team—from data engineers to data scientists to data analysts—to quickly and easily build and deliver on the data and analytics workloads they need. And with Ascend’s DataAware™ intelligence, data teams no longer spend hours carefully orchestrating brittle data workloads and instead rely on advanced automation to optimize the entire data lifecycle. Ascend.io runs natively on data lakes and warehouses and in AWS, Google Cloud and Microsoft Azure.
Go to dataengineeringpodcast.com/ascend to find out more.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
Tonic.ai matches development and staging environments to production by rapidly equipping teams with high-quality data at scale. With regulations and breaches on the rise, production data is no longer safe (or legal) for developers to use, but creating test data in-house is a complex chore that eats into valuable engineering resources. With Tonic, teams no longer need to choose between productivity and security—they get both rapidly and with ease. Shorten your development cycle, eliminate the need for cumbersome data pipeline work, and mathematically guarantee the privacy of your data. Through its data de-identification, advanced subsetting, and synthetic scaling technologies, Tonic makes it possible to create a true mirror of production in the safety of a developer landscape so you can work on real product and steer clear of surprises at release time.
Go to dataengineeringpodcast.com/tonic to sign up for a free 2-week sandbox account and give Tonic.ai a try!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
- Random data doesn’t do it — and production data is not safe (or legal) for developers to use. What if you could mimic your entire production database to create a realistic dataset with zero sensitive data? Tonic.ai does exactly that. With Tonic, you can generate fake data that looks, acts, and behaves like production because it’s made from production. Using universal data connectors and a flexible API, Tonic integrates seamlessly into your existing pipelines and allows you to shape and size your data to the scale, realism, and degree of privacy that you need. The platform offers advanced subsetting, secure de-identification, and ML-driven data synthesis to create targeted test data for all of your pre-production environments. Your newly mimicked datasets are safe to share with developers, QA, data scientists—heck, even distributed teams around the world. Shorten development cycles, eliminate the need for cumbersome data pipeline work, and mathematically guarantee the privacy of your data, with Tonic.ai. Data Engineering Podcast listeners can sign up for a free 2-week sandbox account, go to dataengineeringpodcast.com/tonic today to give it a try!
- RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.
- Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.
- Your host is Tobias Macey and today I’m interviewing Malcolm Hawker about master data management strategies for the enterprise
- How did you get involved in the area of data management?
- Can you start by giving your definition of what MDM is and the scope of activities/functions that it includes?
- How have evolutions in the data landscape shifted the conversation around MDM?
- Can you describe what Profisee is and the story behind it?
- What was your path to joining Profisee and what is your role in the business?
- Who are the target customers for Profisee?
- What are the challenges that they typically experience that leads them to MDM as a solution for their problems?
- How does the narrative around data observability/data quality from tools such as Great Expectations, Monte Carlo, etc. differ from the data quality benefits of an MDM strategy?
- How do recent conversations around semantic/metrics layers compare to the way that MDM approaches the problem of domain modeling?
- What are the steps to defining an MDM strategy for an organization or business unit?
- Once there is a strategy, what are the tactical elements of the implementation?
- What is the role of the toolchain in that implementation? (e.g. Spark, dbt, Airflow, etc.)
- Can you describe how Profisee is implemented?
- How does the customer base inform the architectural approach that Profisee has taken?
- Can you describe the adoption process for an organization that is using Profisee for their MDM?
- Once an organization has defined and adopted an MDM strategy, what are the ongoing maintenance tasks related to the domain models?
- What are the most interesting, innovative, or unexpected ways that you have seen MDM used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working in MDM?
- When is Profisee the wrong choice?
- What do you have planned for the future of Profisee?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers
- MDM == Master Data Management
- CRM == Customer Relationship Management
- ERP == Enterprise Resource Planning
- Levenshtein Distance Algorithm (edit distance between two strings, commonly used for fuzzy record matching)
- CDP == Customer Data Platform
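The Levenshtein distance mentioned above is a common building block for the fuzzy record matching that MDM tools rely on when deduplicating entities like customer names. As a minimal sketch (the function name and example records are illustrative, not from the episode), the classic dynamic-programming formulation looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j];
    # we keep only one previous row to stay in O(len(b)) memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Two CRM records that likely refer to the same customer:
print(levenshtein("Jon Smith", "John Smyth"))  # → 2
```

In practice an MDM matching engine would normalize the strings first (casing, whitespace, nicknames) and convert the raw distance into a similarity score with a threshold for merge decisions.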