Data lineage is the common thread that ties together all of your data pipelines, workflows, and systems. To get a holistic understanding of your data quality, where errors are occurring, or how a report was constructed, you need to track the lineage of the data from beginning to end. The complicating factor is that every framework, platform, and product has its own concept of how to store, represent, and expose that information. To eliminate the wasted effort of building custom integrations every time you want to combine lineage information across systems, Julien Le Dem introduced the OpenLineage specification. In this episode he explains his motivations for starting the effort, the far-reaching benefits that it can provide to the industry, and how you can start integrating it into your data platform today. This is an excellent conversation about how competing companies can still find mutual benefit in cooperating on open standards.
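As a rough illustration of the kind of payload the specification standardizes, here is a minimal sketch of an OpenLineage-style run event assembled as a plain Python dictionary. The field names follow the spec's RunEvent structure, but the job, dataset, and producer names here are made up for the example; consult the OpenLineage JSON Schema for the authoritative shape.

```python
# Minimal sketch of an OpenLineage-style run event (illustrative names only).
import json
import uuid
from datetime import datetime, timezone

event = {
    "eventType": "COMPLETE",  # lifecycle state: START, COMPLETE, ABORT, or FAIL
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid.uuid4())},  # unique identifier for this job run
    "job": {"namespace": "my-pipeline", "name": "daily_orders_load"},
    "inputs": [{"namespace": "warehouse", "name": "raw.orders"}],
    "outputs": [{"namespace": "warehouse", "name": "analytics.orders_daily"}],
    # producer identifies the tool that emitted the event (hypothetical URL)
    "producer": "https://example.com/my-scheduler",
}

print(json.dumps(event, indent=2))
```

Because the event is just structured JSON, any scheduler, warehouse, or catalog can emit or consume it without a custom point-to-point integration.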
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
- When it comes to serving data for AI and ML projects, do you feel like you have to rebuild the plane while you’re flying it across the ocean? Molecula is an enterprise feature store that operationalizes advanced analytics and AI in a format designed for massive machine-scale projects without having to manage endless one-off information requests. With Molecula, data engineers manage one single feature store that serves the entire organization with millisecond query performance whether in the cloud or at your data center. And since it is implemented as an overlay, Molecula doesn’t disrupt legacy systems. High-growth startups use Molecula’s feature store because of its unprecedented speed, cost savings, and simplified access to all enterprise data. From feature extraction to model training to production, the Molecula feature store provides continuously updated feature access, reuse, and sharing without the need to pre-process data. If you need to deliver unprecedented speed, cost savings, and simplified access to large scale, real-time data, visit dataengineeringpodcast.com/molecula and request a demo. Mention that you’re a Data Engineering Podcast listener, and they’ll send you a free t-shirt.
- Your host is Tobias Macey and today I’m interviewing Julien Le Dem about OpenLineage, a new standard for structuring metadata to enable interoperability across the ecosystem of data management tools.
- How did you get involved in the area of data management?
- Can you start by giving an overview of what the OpenLineage project is and the story behind it?
- What is the current state of the ecosystem for generating and sharing metadata between systems?
- What are your goals for the OpenLineage effort?
- What are the biggest conceptual or consistency challenges that you are facing in defining a metadata model that is broad and flexible enough to be widely used while still being prescriptive enough to be useful?
- What is the current state of the project? (e.g. code available, maturity of the specification, etc.)
- What are some of the ideas or assumptions that you had at the beginning of this project that have had to be revisited as you iterate on the definition and implementation?
- What are some of the projects/organizations/etc. that have committed to supporting or adopting OpenLineage?
- What problem domain(s) are best suited to adopting OpenLineage?
- What are some of the problems or use cases that you are explicitly not including in scope for OpenLineage?
- For someone who already has a lineage and/or metadata catalog, what is involved in evolving that system to work well with OpenLineage?
- What are some of the downstream/long-term impacts that you anticipate or hope that this standardization effort will generate?
- What are some of the most interesting, unexpected, or challenging lessons that you have learned while working on the OpenLineage effort?
- What do you have planned for the future of the project?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email email@example.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Apache Parquet
- Doug Cutting
- Apache Arrow
- Service Oriented Architecture
- Data Lineage
- Apache Atlas
- Apache Spark
- JSON Schema
- Great Expectations
- Data Mesh
- The map is not the territory
- Apache Flink
- Apache Storm
- Kafka Streams
- Stone Soup
- Apache Beam
- Linux Foundation AI & Data
Support Data Engineering Podcast