In recent years the traditional approach to building data warehouses has shifted from transforming records before loading to transforming them after they land in the warehouse. As a result, the tooling for those transformations needs to be reimagined. The data build tool (dbt) is designed to bring battle-tested engineering practices to your analytics pipelines. By providing an opinionated set of best practices, it simplifies collaboration and boosts confidence within your data team. In this episode Drew Banin, creator of dbt, explains how it got started, how it is designed, and how you can start using it today to create reliable and well-tested reports in your favorite data warehouse.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
Segment provides the reliable data infrastructure companies need to easily collect, clean, and control their customer data. Once you try it, you’ll understand why Segment is one of the hottest companies coming out of Silicon Valley. Segment recently launched a Startup Program so that early-stage startups can get a Segment account totally free up to $25k, plus exclusive deals from some favorite vendors and other resources to become data experts. Go to dataengineeringpodcast.com/segmentio today and see if you or a startup you know qualify for the program.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Drew Banin about dbt, the data build tool, a toolkit for building analytics the way that developers build applications
- How did you get involved in the area of data management?
- Can you start by explaining what dbt is and your motivation for creating it?
- Where does it fit in the overall landscape of data tools and the lifecycle of data in an analytics pipeline?
- Can you talk through the workflow for someone using dbt?
- One of the useful features of dbt for stability of analytics is the ability to write and execute tests. Can you explain how those are implemented?
- The packaging capabilities are beneficial for enabling collaboration. Can you talk through how the packaging system is implemented?
- Are these packages driven by Fishtown Analytics or the dbt community?
- What are the limitations of modeling everything as a SELECT statement?
- Making SQL code reusable is notoriously difficult. How does the Jinja templating of dbt address this issue and what are the shortcomings?
- What are your thoughts on higher-level approaches to SQL that compile down to the specific statements?
- Can you explain how dbt is implemented and how the design has evolved since you first began working on it?
- What are some of the features of dbt that are often overlooked which you find particularly useful?
- What are some of the most interesting/unexpected/innovative ways that you have seen dbt used?
- What are the additional features that the commercial version of dbt provides?
- What are some of the most useful or challenging lessons that you have learned in the process of building and maintaining dbt?
- When is it the wrong choice?
- What do you have planned for the future of dbt?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
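For listeners unfamiliar with the ideas behind the questions above: a dbt model is a single SELECT statement, templated with Jinja so that references to other models can be resolved at compile time, which is also how dbt infers the dependency graph between models. The sketch below mimics that compilation step with a plain regex substitution rather than a real Jinja environment; the model SQL, the `compile_model` helper, and the table mapping are illustrative assumptions, not dbt's actual implementation.

```python
import re

# A dbt-style model: one SELECT statement, with {{ ref('...') }}
# marking a dependency on another model.
MODEL_SQL = """\
select
    order_id,
    sum(amount) as total_amount
from {{ ref('raw_orders') }}
group by order_id
"""

# Hypothetical mapping from model names to warehouse relations.
RELATIONS = {"raw_orders": "analytics.stg.raw_orders"}

def compile_model(sql: str) -> str:
    """Replace each {{ ref('name') }} with its schema-qualified
    table name, loosely mimicking dbt's Jinja compilation step."""
    return re.sub(
        r"\{\{\s*ref\('([^']+)'\)\s*\}\}",
        lambda m: RELATIONS[m.group(1)],
        sql,
    )

print(compile_model(MODEL_SQL))
```

The compiled output is ordinary SQL that can be run as-is against the warehouse, which is why the SELECT-only constraint and the templating layer go hand in hand.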
- Fishtown Analytics
- 8Tracks Internet Radio
- Stitch Data
- Business Intelligence
- Jinja template language
- Version Control
- Continuous Integration
- Test Driven Development
- Snowplow Analytics
- We Can Do Better Than SQL blog post from EdgeDB
- Looker LookML
- Presto DB
- Spark SQL
- Azure SQL Data Warehouse
- Data Warehouse
- Data Lake
- Data Council Conference
- Slowly Changing Dimensions
- dbt Archival
- Mode Analytics
- Periscope BI
- dbt docs
- dbt repository