Jupyter notebooks have gained popularity among data scientists as an easy way to do exploratory analysis and build interactive reports. However, moving a data scientist's work into a more standard production environment can be difficult because of the translation effort required. At Netflix they had the radical idea that perhaps that last step isn't necessary, and that production workflows can just run the notebooks directly. Matthew Seal is one of the primary engineers tasked with building the tools and practices that allow the various data-oriented roles to unify their work around notebooks. In this episode he explains the rationale for the effort, the challenges that it has posed, the development that has been done to make it work, and the benefits that it provides to the Netflix data platform teams.
Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at promo.linode.com/dataengineeringpodcast or use the code dataengineering2018 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Matthew Seal about the ways that Netflix is using Jupyter notebooks to bridge the gap between data roles
- How did you get involved in the area of data management?
- Can you start by outlining the motivation for choosing Jupyter notebooks as the core interface for your data teams?
- Where are you using notebooks and where are you not?
- What is the technical infrastructure that you have built to support that design choice?
- Which team was driving the effort?
- Was it difficult to get buy-in across teams?
- How much shared code have you been able to consolidate or reuse across teams/roles?
- Have you investigated the use of any of the other notebook platforms for similar workflows?
- What are some of the notebook anti-patterns that you have encountered and what conventions or tooling have you established to discourage them?
- What are some of the limitations of the notebook environment for the work that you are doing?
- What have been some of the most challenging aspects of building production workflows on top of Jupyter notebooks?
- What are some of the projects that are ongoing or planned for the future that you are most excited by?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Netflix Notebook Blog Posts
- nteract Tooling
- Project Jupyter
- Zeppelin Notebooks