Streaming

Confluent Schema Registry with Ewen Cheslack-Postava - Episode 10

Summary

To process your data you need to know what shape it has, which is why schemas are important. When you are processing that data in multiple systems, it can be difficult to ensure that they all have an accurate representation of that schema, which is why Confluent has built a schema registry that plugs into Kafka. In this episode Ewen Cheslack-Postava explains what the schema registry is, how it can be used, and how they built it. He also discusses how it can be extended for other deployment targets and use cases, and additional features that are planned for future releases.
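
As a rough companion to that overview, here is a minimal sketch of talking to a Schema Registry instance over its REST API: registering an Avro schema under a subject and fetching it back by ID. The localhost address and the user-events-value subject are placeholder assumptions for the example, not details from the episode.

```python
import json
import requests

# Placeholder assumptions: a registry listening on localhost:8081
# and a subject named "user-events-value".
REGISTRY_URL = "http://localhost:8081"
SUBJECT = "user-events-value"

# A minimal Avro record schema; the registry stores it and assigns a global ID.
schema = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "event_type", "type": "string"},
    ],
}

# Register (or look up) the schema under the subject.
resp = requests.post(
    f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(schema)}),
)
resp.raise_for_status()
schema_id = resp.json()["id"]
print(f"schema registered with id {schema_id}")

# Any consumer can later fetch the schema by ID to decode messages.
fetched = requests.get(f"{REGISTRY_URL}/schemas/ids/{schema_id}").json()
print(fetched["schema"])
```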

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
  • When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
  • Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
  • You can help support the show by checking out the Patreon page which is linked from the site.
  • To help other people find the show you can leave a review on iTunes or Google Play Music and tell your friends and co-workers.
  • Your host is Tobias Macey and today I’m interviewing Ewen Cheslack-Postava about the Confluent Schema Registry

Interview

  • Introduction
  • How did you get involved in the area of data engineering?
  • What is the schema registry and what was the motivating factor for building it?
  • If you are using Avro, what benefits does the schema registry provide over and above the capabilities of Avro’s built-in schemas? (A rough sketch of the kind of compatibility check this enables follows this list.)
  • How did you settle on Avro as the format to support and what would be involved in expanding that support to other serialization options?
  • Conversely, what would be involved in using a storage backend other than Kafka?
  • What are some of the alternative technologies available for people who aren’t using Kafka in their infrastructure?
  • What are some of the biggest challenges that you faced while designing and building the schema registry?
  • What is the tipping point in terms of system scale or complexity when it makes sense to invest in a shared schema registry and what are the alternatives for smaller organizations?
  • What are some of the features or enhancements that you have in mind for future work?
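
To make the comparison with plain Avro a little more concrete, here is a hedged sketch of the centralized compatibility check that a registry adds on top of Avro's own schema handling: before a producer starts writing with an evolved schema, it can ask the registry whether that schema is compatible with the latest registered version. The registry address, subject name, and evolved schema are illustrative assumptions that build on the earlier sketch.

```python
import json
import requests

# Placeholder assumptions: same registry and subject as the earlier sketch.
REGISTRY_URL = "http://localhost:8081"
SUBJECT = "user-events-value"

# An evolved version of the schema: a new optional field with a default,
# which Avro treats as a backward-compatible change.
evolved_schema = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "event_type", "type": "string"},
        {"name": "referrer", "type": ["null", "string"], "default": None},
    ],
}

# Ask the registry whether the new schema is compatible with the latest
# registered version before any producer starts writing with it.
resp = requests.post(
    f"{REGISTRY_URL}/compatibility/subjects/{SUBJECT}/versions/latest",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(evolved_schema)}),
)
resp.raise_for_status()
print("compatible:", resp.json()["is_compatible"])
```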

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Buzzfeed Data Infrastructure with Walter Menendez - Episode 7

Summary

Buzzfeed needs to understand how its users are interacting with the myriad articles, videos, and other content that it publishes, so that it can continue to produce content that is well received. Surfacing the insights needed to grow the business requires a robust data infrastructure that reliably captures all of those interactions. Walter Menendez is a data engineer on the infrastructure team, and in this episode he describes how they manage data ingestion from a wide array of sources and create an interface for their data scientists to produce valuable conclusions.

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
  • Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
  • You can help support the show by checking out the Patreon page which is linked from the site.
  • To help other people find the show you can leave a review on iTunes or Google Play Music and tell your friends and co-workers.
  • Your host is Tobias Macey and today I’m interviewing Walter Menendez about the data engineering platform at Buzzfeed

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • How is the data engineering team at Buzzfeed structured and what kinds of projects are you responsible for?
  • What are some of the types of data inputs and outputs that you work with at Buzzfeed?
  • Is the core of your system using a real-time streaming approach or is it primarily batch-oriented, and what are the business needs that drive that decision?
  • What does the architecture of your data platform look like and what are some of the most significant areas of technical debt?
  • Which platforms and languages are most widely leveraged in your team and what are some of the outliers?
  • What are some of the most significant challenges that you face, both technically and organizationally?
  • What are some of the dead ends that you have run into or failed projects that you have tried?
  • What has been the most successful project that you have completed and how do you measure that success?

Contact Info

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Dask with Matthew Rocklin - Episode 2

Summary

There is a vast constellation of tools and platforms for processing and analyzing your data. In this episode Matthew Rocklin talks about how Dask fills the gap between a task-oriented workflow tool and an in-memory processing framework, and how it brings the power of Python to bear on the problem of big data.
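
As a loose illustration of that gap-filling role, the sketch below uses Dask's pandas-like dataframe API to build a lazy task graph over a collection of CSV files and then execute it in parallel. The file paths and column names are assumptions invented for the example.

```python
import dask.dataframe as dd

# Placeholder assumptions: a directory of CSV logs with "user_id" and
# "duration" columns; the glob pattern is illustrative only.
df = dd.read_csv("logs/2017-*.csv")

# Each operation extends a lazy task graph rather than executing immediately.
mean_duration = df.groupby("user_id")["duration"].mean()

# compute() walks the graph, processing the per-file chunks in parallel
# and returning an ordinary pandas Series.
print(mean_duration.compute())
```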

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
  • You can help support the show by checking out the Patreon page which is linked from the site.
  • To help other people find the show you can leave a review on iTunes or Google Play Music and tell your friends and co-workers.
  • Your host is Tobias Macey and today I’m interviewing Matthew Rocklin about Dask and the Blaze ecosystem.

Interview with Matthew Rocklin

  • Introduction
  • How did you get involved in the area of data engineering?
  • Dask began its life as part of the Blaze project. Can you start by describing what Dask is and how it originated?
  • There are a vast number of tools in the field of data analytics. What are some of the specific use cases that Dask was built for that couldn’t be solved by the existing options?
  • One of the compelling features of Dask is the fact that it is a Python library that allows for distributed computation at a scale that has largely been the exclusive domain of tools in the Hadoop ecosystem. Why do you think that the JVM has been the reigning platform in the data analytics space for so long?
  • Do you consider Dask, along with the larger Blaze ecosystem, to be a competitor to the Hadoop ecosystem, either now or in the future?
  • Are you seeing many Hadoop or Spark solutions being migrated to Dask? If so, what are the common reasons?
  • There is a strong focus for using Dask as a tool for interactive exploration of data. How does it compare to something like Apache Drill?
  • For anyone looking to integrate Dask into an existing code base that is already using NumPy or Pandas, what does that process look like? (A rough sketch follows this list.)
  • How do the task graph capabilities compare to something like Airflow or Luigi?
  • Looking through the documentation for the graph specification in Dask, it appears that there is the potential to introduce cycles or other bugs into a large or complex task chain. Is there any built-in tooling to check for that before submitting the graph for execution?
  • What are some of the most interesting or unexpected projects that you have seen Dask used for?
  • What do you perceive as being the most relevant aspects of Dask for data engineering/data infrastructure practitioners, as compared to the end users of the systems that they support?
  • What are some of the most significant problems that you have been faced with, and which still need to be overcome in the Dask project?
  • I know that the work on Dask is largely performed under the umbrella of PyData and sponsored by Continuum Analytics. What are your thoughts on the financial landscape for open source data analytics and distributed computation frameworks as compared to the broader world of open source projects?
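
For the integration question above, here is a minimal, hypothetical sketch of one common path: wrapping existing pandas code with dask.delayed so that the unchanged functions become nodes in a task graph. The clean() helper, file names, and column names are invented for illustration rather than taken from the episode.

```python
import dask
import pandas as pd

# Hypothetical existing pandas function; the file names, column names,
# and filtering logic are illustrative assumptions.
def clean(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    return df[df["status"] == "ok"]

paths = ["events-1.csv", "events-2.csv", "events-3.csv"]

# dask.delayed wraps the unmodified pandas function; each call becomes a
# node in a task graph instead of running right away.
lazy_frames = [dask.delayed(clean)(path) for path in paths]
row_counts = [frame.shape[0] for frame in lazy_frames]

# Summing the Delayed values adds one final node; compute() executes the
# graph, running the independent clean() calls in parallel.
total_rows = dask.delayed(sum)(row_counts)
print(total_rows.compute())
```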

Keep in touch

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA