Apache Spark is a popular and widely used tool for a variety of data-oriented projects. Given its large array of capabilities and the complexity of the underlying system, it can be difficult to know how to get started with it. Jean Georges Perrin has been so impressed by the versatility of Spark that he is writing a book to help data engineers hit the ground running. In this episode he helps to make sense of what Spark is, how it works, and the various ways that you can use it. He also discusses what you need to know to get it deployed and keep it running in a production environment, and how it fits into the overall data ecosystem.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Jean Georges Perrin, author of the upcoming Manning book Spark In Action 2nd Edition, about the ways that Spark is used and how it fits into the data landscape
- How did you get involved in the area of data management?
- Can you start by explaining what Spark is?
- What are some of the main use cases for Spark?
- What are some of the problems that Spark is uniquely suited to address?
- Who uses Spark?
- What are the tools offered to Spark users?
- How does it compare to some of the other streaming frameworks such as Flink, Kafka Streams, or Storm?
- For someone building on top of Spark what are the main software design paradigms?
- How does the design of an application change as you go from a local development environment to a production cluster?
- Once your application is written, what is involved in deploying it to a production environment?
- What are some of the most useful strategies that you have seen for improving the efficiency and performance of a processing pipeline?
- What are some of the edge cases and architectural considerations that engineers should be considering as they begin to scale their deployments?
- What are some of the common ways that Spark is deployed, in terms of the cluster topology and the supporting technologies?
- What are the limitations of the Spark programming model?
- What are the cases where Spark is the wrong choice?
- What was your motivation for writing a book about Spark?
- Who is the target audience?
- What have been some of the most interesting or useful lessons that you have learned in the process of writing a book about Spark?
- What advice do you have for anyone who is considering or currently using Spark?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Use the code poddataeng18 to get 40% off of all of Manning’s products at manning.com
- Apache Spark
- Spark In Action
- Book code examples in GitHub
- International Informix Users Group
- Microsoft SQL Server
- ETL (Extract, Transform, Load)
- Spark SQL and Spark In Action’s chapter 11
- Spark ML and Spark In Action’s chapter 18
- Spark Streaming (structured) and Spark In Action’s chapter 10
- Spark GraphX
- IBM Watson Studio
- AWS Kinesis
- Spark Catalyst
- Spark Tungsten
- Spark UDF
- AWS EMR