Jupyter notebooks have gained popularity among data scientists as an easy way to do exploratory analysis and build interactive reports. However, moving a data scientist's work into a standard production environment can be difficult because of the translation effort involved. At Netflix, the team asked whether that last step is necessary at all, and whether production workflows could simply run the notebooks directly. Matthew Seal is one of the primary engineers tasked with building the tools and practices that allow the various data-oriented roles to unify their work around notebooks. In this episode he explains the rationale for the effort, the challenges it has posed, the development that has been done to make it work, and the benefits it provides to the Netflix data platform teams.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $60 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Matthew Seal about the ways that Netflix is using Jupyter notebooks to bridge the gap between data roles
- How did you get involved in the area of data management?
- Can you start by outlining the motivation for choosing Jupyter notebooks as the core interface for your data teams?
- Where are you using notebooks and where are you not?
- What is the technical infrastructure that you have built to support that design choice?
- Which team was driving the effort?
- Was it difficult to get buy-in across teams?
- How much shared code have you been able to consolidate or reuse across teams/roles?
- Have you investigated the use of any of the other notebook platforms for similar workflows?
- What are some of the notebook anti-patterns that you have encountered and what conventions or tooling have you established to discourage them?
- What are some of the limitations of the notebook environment for the work that you are doing?
- What have been some of the most challenging aspects of building production workflows on top of Jupyter notebooks?
- What are some of the projects that are ongoing or planned for the future that you are most excited by?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Netflix Notebook Blog Posts
- Nteract Tooling
- Project Jupyter
- Zeppelin Notebooks