Real World Change Data Capture At Datacoral

March 22nd, 2021

49 mins 58 secs

About this Episode

Summary

The world of business is becoming increasingly dependent on information that is accurate up to the minute. For analytical systems, the only way to provide this reliably is by implementing change data capture (CDC). Unfortunately, this is a non-trivial undertaking, particularly for teams that don’t have extensive experience working with streaming data and complex distributed systems. In this episode Raghu Murthy, founder and CEO of Datacoral, does a deep dive on how he and his team manage change data capture pipelines in production.
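
For a sense of what log-based CDC looks like in practice, here is a minimal sketch of tailing a Postgres write-ahead log using psycopg2's logical replication support and the wal2json output plugin. It assumes a source configured with wal_level = logical and a role with the REPLICATION attribute; the slot name, connection string, and handler are hypothetical, and this illustrates the general technique discussed in the episode rather than Datacoral's implementation.

```python
import json

import psycopg2
import psycopg2.extras

# Hypothetical connection string; the role needs the REPLICATION attribute
# and the server must run with wal_level = logical.
conn = psycopg2.connect(
    "dbname=app user=replicator",
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()

# Create the slot once: Postgres retains WAL from this point until the
# consumer confirms each change has been flushed downstream.
cur.create_replication_slot("cdc_demo_slot", output_plugin="wal2json")
cur.start_replication(slot_name="cdc_demo_slot", decode=True)

def handle(msg):
    # wal2json emits one JSON document per transaction, with a "change"
    # array describing the inserts/updates/deletes it contained.
    for change in json.loads(msg.payload).get("change", []):
        print(change["kind"], change["table"], change.get("columnvalues"))
    # Acknowledge the LSN so the server can recycle WAL; forgetting this
    # is a classic cause of unbounded disk growth on the source database.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(handle)  # blocks, calling handle() once per message
```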

Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
  • RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
  • Your host is Tobias Macey and today I’m interviewing Raghu Murthy about his recent work of making change data capture more accessible and maintainable

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by giving an overview of what CDC is and when it is useful?
  • What are the alternatives to CDC?
    • What are the cases where a more batch-oriented approach would be preferable?
  • What are the factors that you need to consider when deciding whether to implement a CDC system for a given data integration?
    • What are the barriers to entry?
  • What are some of the common mistakes or misconceptions about CDC that you have encountered in your own work and while working with customers?
  • How does CDC fit into a broader data platform, particularly where there are likely to be other data integration pipelines in operation? (e.g. Fivetran/Airbyte/Meltano/custom scripts)
  • What are the moving pieces in a CDC workflow that need to be considered as you are designing the system?
    • What are some examples of the configuration changes necessary in source systems to provide the needed log data?
  • How would you characterize the current landscape of tools available off the shelf for building a CDC pipeline?
    • What are your predictions about the potential for a unified abstraction layer for log-based CDC across databases?
  • What are some of the potential performance/uptime impacts on source databases, both during the initial historical sync and once you hit a steady state?
    • How can you mitigate the impacts of the CDC pipeline on the source databases?
  • What are some of the implementation details that application developers and DBAs need to be aware of for data modeling in the source systems to allow for proper replication via CDC?
  • Are there any performance challenges that need to be addressed in the consumers or destination systems? (e.g. parallelism)
  • Can you describe the technical implementation and architecture that you use for implementing CDC?
    • How has the design evolved as you have grown the scale and sophistication of your system?
  • In the destination system, what data modeling decisions need to be made to ensure that the replicated information is usable for analytics?
    • What additional attributes need to be added to track things like row modifications, deletions, schema changes, etc.? (a minimal sketch of one approach follows this list)
    • How do you approach treatment of data copies in the DWH? (e.g. ELT: keep all source tables and use dbt to convert relevant tables into star/snowflake/data vault/wide tables)
  • What are your thoughts on the viability of a data lake as the destination system? (e.g. S3/Parquet or Trino/Drill/etc.)
  • CDC is a topic that is generally reserved for conversations about databases, but what are some of the other systems where we could think about implementing CDC? (e.g. APIs and third-party data sources)
  • How can we integrate CDC into metadata/lineage tooling?
  • How do you handle observability of CDC flows?
    • What is involved in debugging a replication flow?
  • How can we build data quality checks into CDC workflows?
  • What are some of the most interesting, innovative, or unexpected ways that you have seen CDC used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned from digging deep into CDC implementation?
  • When is CDC the wrong choice?
  • What are some of the industry or technology trends around CDC that you are most excited by?
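
On the destination-side questions above (metadata attributes, deletions, event ordering), here is a toy sketch of applying change events idempotently, using sqlite3 as a stand-in for a warehouse. The _cdc_* column names and the event shape are hypothetical conventions for illustration, not Datacoral's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT,
        _cdc_op TEXT,                    -- last operation applied
        _cdc_lsn INTEGER,                -- log position, for ordering/replays
        _cdc_deleted INTEGER DEFAULT 0   -- soft-delete flag
    )
""")

def apply_event(event):
    """Apply one change event idempotently, discarding stale replays."""
    if event["op"] == "delete":
        # Soft-delete so the row remains visible to analytical queries.
        conn.execute(
            "UPDATE users SET _cdc_op = 'delete', _cdc_deleted = 1, _cdc_lsn = ? "
            "WHERE id = ? AND _cdc_lsn < ?",
            (event["lsn"], event["id"], event["lsn"]),
        )
    else:
        # Upsert, but only if this event is newer than what is stored.
        conn.execute(
            "INSERT INTO users (id, email, _cdc_op, _cdc_lsn, _cdc_deleted) "
            "VALUES (?, ?, ?, ?, 0) "
            "ON CONFLICT(id) DO UPDATE SET "
            "email = excluded.email, _cdc_op = excluded._cdc_op, "
            "_cdc_lsn = excluded._cdc_lsn, _cdc_deleted = 0 "
            "WHERE excluded._cdc_lsn > users._cdc_lsn",
            (event["id"], event["email"], event["op"], event["lsn"]),
        )

# Replaying the same events, in order or again, converges to the same state.
apply_event({"op": "insert", "id": 1, "email": "a@example.com", "lsn": 10})
apply_event({"op": "update", "id": 1, "email": "b@example.com", "lsn": 11})
apply_event({"op": "delete", "id": 1, "lsn": 12})
```

In a real warehouse the same logic would typically run as a periodic MERGE from a staging table of raw change records, which also preserves the full change history for auditing.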

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
  • Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast