The theory of how a tool is supposed to work and the realities of putting it into practice are often at odds with each other. Learning the pitfalls and best practices from someone who gained that knowledge the hard way can save you wasted time and frustration. In this episode James Meickle discusses his recent experience building a new installation of Airflow. He points out the framework's strengths and design flaws, along with areas for improvement. He also describes the design patterns and workflows that his team has built to allow them to use Airflow as the basis of their data science platform.
Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at promo.linode.com/dataengineeringpodcast or use the code dataengineering2018 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning-fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing James Meickle about his experiences building a new Airflow installation
- How did you get involved in the area of data management?
- What was your initial project requirement?
- What tooling did you consider in addition to Airflow?
- What aspects of the Airflow platform led you to choose it as your implementation target?
- Can you describe your current deployment architecture?
- How many engineers are involved in writing tasks for your Airflow installation?
- What resources were the most helpful while learning about Airflow design patterns?
- How have you architected your DAGs for deployment and extensibility?
- What kinds of tests and automation have you put in place to support the ongoing stability of your deployment?
- What are some of the dead ends or other pitfalls that you encountered during this project?
- What aspects of Airflow have you found to be lacking that you would like to see improved?
- What do you wish someone had told you before you started work on your Airflow installation?
- If you were to start over would you make the same choice?
- If Airflow wasn’t available what would be your second choice?
- What are your next steps for improvements and fixes?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Harvard Brain Science Initiative
- DevOps Days Boston
- Google Maps API
- ETL (Extract, Transform, Load)
- AWS Glue
- REST (Representational State Transfer)
- SAML (Security Assertion Markup Language)
- RBAC (Role-Based Access Control)
- Maxime Beauchemin
- Jupyter Notebook
- Airflow Improvement Proposals
- Python Enhancement Proposals (PEP)