The theory behind how a tool is supposed to work and the realities of putting it into practice are often at odds with each other. Learning the pitfalls and best practices from someone who has gained that knowledge the hard way can save you from wasted time and frustration. In this episode James Meickle discusses his recent experience building a new installation of Airflow. He points out the strengths, design flaws, and areas of improvement for the framework. He also describes the design patterns and workflows that his team has built to allow them to use Airflow as the basis of their data science platform.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage, you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $60 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there, don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing James Meickle about his experiences building a new Airflow installation
- How did you get involved in the area of data management?
- What was your initial project requirement?
- What tooling did you consider in addition to Airflow?
- What aspects of the Airflow platform led you to choose it as your implementation target?
- Can you describe your current deployment architecture?
- How many engineers are involved in writing tasks for your Airflow installation?
- What resources were the most helpful while learning about Airflow design patterns?
- How have you architected your DAGs for deployment and extensibility?
- What kinds of tests and automation have you put in place to support the ongoing stability of your deployment?
- What are some of the dead-ends or other pitfalls that you encountered during the course of this project?
- What aspects of Airflow have you found to be lacking that you would like to see improved?
- What did you wish someone had told you before you started work on your Airflow installation?
- If you were to start over, would you make the same choice?
- If Airflow wasn’t available, what would be your second choice?
- What are your next steps for improvements and fixes?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Harvard Brain Science Initiative
- DevOps Days Boston
- Google Maps API
- ETL (Extract, Transform, Load)
- AWS Glue
- REST (Representational State Transfer)
- SAML (Security Assertion Markup Language)
- RBAC (Role-Based Access Control)
- Maxime Beauchemin
- Jupyter Notebook
- Airflow Improvement Proposals
- Python Enhancement Proposals (PEP)