Reflections On Designing A Data Platform From Scratch

February 27th, 2022

40 mins 21 secs

About this Episode

Summary

Building a data platform is a complex undertaking that requires significant planning to do well. It requires knowledge of the available technologies, the requirements of the operating environment, and the expectations of the stakeholders. In this episode Tobias Macey, the host of the show, reflects on his plans for building a data platform and how what he has learned from running the podcast is influencing his choices.

Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker, and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
  • TimescaleDB, from your friends at Timescale, is the leading open-source relational database with support for time-series data. Time-series data is time-stamped so you can measure how a system is changing. Time-series data is relentless and requires a database like TimescaleDB with speed and petabyte scale. Understand the past, monitor the present, and predict the future. That’s Timescale. Visit them today at dataengineeringpodcast.com/timescale.
  • RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.
  • I’m your host, Tobias Macey, and today I’m sharing the approach that I’m taking while designing a data platform

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • What are the components that need to be considered when designing a solution?
    • Data integration (extract and load)
      • What are your data sources?
      • Batch or streaming (acceptable latencies)
    • Data storage (lake or warehouse)
      • How is the data going to be used?
      • What other tools/systems will need to integrate with it?
      • The warehouse (BigQuery, Snowflake, Redshift) has become the focal point of the "modern data stack"
    • Data orchestration
      • Who will be managing the workflow logic?
    • Metadata repository
      • Types of metadata (catalog, lineage, access, queries, etc.)
    • Semantic layer/reporting
    • Data applications
  • Implementation phases
    • Build a single end-to-end workflow of a data application using a single category of data across sources (see the sketch after this outline)
    • Validate the ability for an analyst/data scientist to self-serve a notebook-powered analysis
    • Iterate
  • Risks/unknowns
    • Data modeling requirements
    • Specific implementation details as integrations across components are built
    • When to use a vendor and risk lock-in vs. spend engineering time
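
As a rough illustration of the first implementation phase above, here is a minimal sketch of a single end-to-end batch workflow: extract one category of data from one source, land it raw, and derive a small reporting table from it. Everything in it is an assumption made for the example rather than the platform discussed in the episode: the orders.csv source file, the table and column names, and SQLite standing in for the warehouse.

    # Minimal end-to-end sketch of one batch workflow: extract a single
    # category of data from one source, load it raw, then transform it.
    # SQLite stands in for the warehouse; the CSV source is hypothetical
    # and is expected to have order_id, amount, placed_at columns.
    import csv
    import sqlite3

    SOURCE_FILE = "orders.csv"               # hypothetical source export
    WAREHOUSE = sqlite3.connect("warehouse.db")


    def extract(path):
        """Read raw rows from the source system."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))


    def load(rows):
        """Land the records untouched so downstream transformations stay replayable."""
        WAREHOUSE.execute(
            "CREATE TABLE IF NOT EXISTS raw_orders "
            "(order_id TEXT, amount REAL, placed_at TEXT)"
        )
        WAREHOUSE.executemany(
            "INSERT INTO raw_orders VALUES (:order_id, :amount, :placed_at)", rows
        )
        WAREHOUSE.commit()


    def transform():
        """Build a small reporting table on top of the raw layer."""
        WAREHOUSE.execute(
            "CREATE TABLE IF NOT EXISTS daily_revenue AS "
            "SELECT date(placed_at) AS day, SUM(amount) AS revenue "
            "FROM raw_orders GROUP BY date(placed_at)"
        )
        WAREHOUSE.commit()


    if __name__ == "__main__":
        load(extract(SOURCE_FILE))
        transform()

Once one such loop works for a single data category, the extract, load, and transform seams are the natural places to attach an orchestrator, the metadata repository, and the semantic layer from the outline, and then to iterate across additional sources.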

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast