Putting Apache Spark Into Action with Jean Georges Perrin - Episode 60

December 9th, 2018

50 mins 31 secs

About this Episode

Summary

Apache Spark is a popular and widely used tool for a variety of data-oriented projects. Given the breadth of its capabilities and the complexity of the underlying system, it can be difficult to know where to start. Jean Georges Perrin has been so impressed by the versatility of Spark that he is writing a book to help data engineers hit the ground running. In this episode he helps to make sense of what Spark is, how it works, and the various ways that you can use it. He also discusses what you need to know to get it deployed and keep it running in a production environment, and how it fits into the overall data ecosystem.

Preamble

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
  • Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
  • Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
  • Your host is Tobias Macey and today I’m interviewing Jean Georges Perrin, author of the upcoming Manning book Spark In Action 2nd Edition, about the ways that Spark is used and how it fits into the data landscape

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by explaining what Spark is?
    • What are some of the main use cases for Spark?
    • What are some of the problems that Spark is uniquely suited to address?
    • Who uses Spark?


  • What are the tools offered to Spark users?

  • How does it compare to some of the other streaming frameworks such as Flink, Kafka, or Storm?

  • For someone building on top of Spark what are the main software design paradigms?

    • How does the design of an application change as you go from a local development environment to a production cluster? (A minimal sketch follows this list.)


  • Once your application is written, what is involved in deploying it to a production environment?

  • What are some of the most useful strategies that you have seen for improving the efficiency and performance of a processing pipeline?

  • What are some of the edge cases and architectural considerations that engineers should be considering as they begin to scale their deployments?

  • What are some of the common ways that Spark is deployed, in terms of the cluster topology and the supporting technologies?

  • What are the limitations of the Spark programming model?

    • What are the cases where Spark is the wrong choice?


  • What was your motivation for writing a book about Spark?

    • Who is the target audience?


  • What have been some of the most interesting or useful lessons that you have learned in the process of writing a book about Spark?

  • What advice do you have for anyone who is considering or currently using Spark?
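
To make the local-versus-cluster question above concrete, here is a minimal sketch of a PySpark job (not taken from the episode or from the book). The only pieces that tie it to a local development environment are the master("local[*]") setting and the hypothetical input.txt path; when the same script is submitted to a cluster with spark-submit, the master URL comes from the cluster manager and the transformation logic is unchanged.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Build a session. The master("local[*]") line is only for local development;
    # omit it and let spark-submit / the cluster manager supply the master URL
    # when deploying to a cluster.
    spark = (
        SparkSession.builder
        .appName("word-count-sketch")
        .master("local[*]")
        .getOrCreate()
    )

    # Hypothetical input path, used only for illustration.
    lines = spark.read.text("input.txt")

    # Split each line into words, then count occurrences of each word.
    counts = (
        lines
        .select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
        .groupBy("word")
        .count()
        .orderBy(F.col("count").desc())
    )

    counts.show(10)
    spark.stop()

The same pattern applies to heavier jobs: the transformation logic stays the same across environments, and only the session configuration and data sources change between local development and the production cluster.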

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Book Discount

  • Use the code poddataeng18 to get 40% off all of Manning’s products at manning.com

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast