Improving The Performance Of Cloud-Native Big Data At Netflix Using The Iceberg Table Format with Ryan Blue - Episode 52


October 14th, 2018

53 mins 45 secs

About this Episode

Summary

With the growth of the Hadoop ecosystem came a proliferation of implementations of the Hive table format. Unfortunately, with no formal specification, each project works slightly differently, which increases the difficulty of integration across systems. The Hive format is also built on the assumptions of a local filesystem, which results in painful edge cases when leveraging cloud object storage for a data lake. In this episode Ryan Blue explains how his work on the Iceberg table format specification and reference implementation has allowed Netflix to improve the performance and simplify operations for their S3 data lake. This is a highly detailed and technical exploration of how a well-engineered metadata layer can improve the speed, accuracy, and utility of large scale, multi-tenant, cloud-native data platforms.

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
  • Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
  • Your host is Tobias Macey and today I’m interviewing Ryan Blue about Iceberg, a Netflix project to implement a high performance table format for batch workloads

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by explaining what Iceberg is and the motivation for creating it?
    • Was the project built with open-source in mind or was it necessary to refactor it from an internal project for public use?


  • How has the use of Iceberg simplified your work at Netflix?

  • How is the reference implementation architected and how has it evolved since you first began work on it?

    • What is involved in deploying it to a user’s environment?


  • For someone who is interested in using Iceberg within their own environments, what is involved in integrating it with their existing query engine? (See the Spark read sketch after this list.)

    • Is there a migration path for pre-existing tables into the Iceberg format?


  • How is schema evolution managed at the file level? (See the schema evolution sketch after this list.)

    • How do you handle files on disk that don’t contain all of the fields specified in a table definition?


  • One of the complicated problems in data modeling is managing table partitions. How does Iceberg help in that regard? (See the partitioning sketch after this list.)

  • What are the unique challenges posed by using S3 as the basis for a data lake?

    • What are the benefits that outweigh the difficulties?


  • What have been some of the most challenging or contentious details of the specification to define?

    • What are some things that you have explicitly left out of the specification?


  • What are your long-term goals for the Iceberg specification?

    • Do you anticipate the reference implementation continuing to be used and maintained?


Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast