Performing Fast Data Analytics Using Apache Kudu - Episode 64


January 6th, 2019

50 mins 46 secs

About this Episode

Summary

The Hadoop platform is purpose-built for processing large volumes of slow-moving data in long-running batch jobs. As the ecosystem around it has grown, so has the need for fast analytics on fast-moving data. To fill this need, the Kudu project was created with a column-oriented table format tuned for high volumes of writes and rapid query execution across those tables. For a perfect pairing, it was made easy to connect to the Impala SQL engine. In this episode Brock Noland and Jordan Birdsell from phData explain how Kudu is architected, how it compares to other storage systems in the Hadoop orbit, and how to start integrating it into your analytics pipeline.
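For a concrete sense of how that integration can look, below is a minimal sketch using the kudu-python client. The master hostname, table name, and column names are hypothetical placeholders rather than anything taken from the episode; a table created this way can also be registered with Impala and queried over SQL, which is the pairing mentioned above.

```python
# Minimal sketch: create, write to, and scan a Kudu table with the
# kudu-python client. Hostname, table, and columns are illustrative only.
import kudu
from kudu.client import Partitioning

# Connect to a Kudu master (adjust host/port for your cluster).
client = kudu.connect(host='kudu-master.example.com', port=7051)

# Define a simple schema with a non-nullable primary key column.
builder = kudu.schema_builder()
builder.add_column('id').type(kudu.int64).nullable(False).primary_key()
builder.add_column('metric').type(kudu.double)
schema = builder.build()

# Hash-partition rows across tablets by the primary key.
partitioning = Partitioning().add_hash_partitions(column_names=['id'],
                                                  num_buckets=4)

if not client.table_exists('events'):
    client.create_table('events', schema, partitioning)

table = client.table('events')

# Writes are applied through a session and flushed in batches.
session = client.new_session()
session.apply(table.new_insert({'id': 1, 'metric': 42.0}))
session.flush()

# Scan the rows back out.
scanner = table.scanner().open()
print(scanner.read_all_tuples())
```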

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
  • To help other people find the show please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.
  • Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
  • Your host is Tobias Macey and today I’m interviewing Brock Noland and Jordan Birdsell about Apache Kudu and how it is able to provide fast analytics on fast data in the Hadoop ecosystem

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by explaining what Kudu is and the motivation for building it?
    • How does it fit into the Hadoop ecosystem?
    • How does it compare to the work being done on the Iceberg table format?


  • What are some of the common application and system design patterns that Kudu supports?

  • How is Kudu architected and how has it evolved over the life of the project?

  • There are many projects in and around the Hadoop ecosystem that rely on Zookeeper as a building block for consensus. What was the reasoning for using Raft in Kudu?

  • How does the storage layer in Kudu differ from what would be found in systems like Hive or HBase?

    • What are the implementation details in the Kudu storage interface that have had the greatest impact on its overall speed and performance?


  • A number of the projects built for large scale data processing were not initially built with a focus on operational simplicity. What are the features of Kudu that simplify deployment and management of production infrastructure?

  • What was the motivation for choosing C++ as the implementation language for Kudu?

    • If you were to start the project over today what would you do differently?


  • What are some situations where you would advise against using Kudu?

  • What have you found to be the most interesting/unexpected/challenging lessons learned in the process of building and maintaining Kudu?

  • What are you most excited about for the future of Kudu?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast