When your data lives in multiple locations, belonging to at least as many applications, it is exceedingly difficult to ask complex questions of it. The default way to manage this situation is to craft pipelines that extract the data from source systems and load it into a data lake or data warehouse. To make this situation more manageable, and to allow everyone in the business to gain value from the data, the folks at Dremio built a self-service data platform. In this episode Tomer Shiran, CEO and co-founder of Dremio, explains how it fits into the modern data landscape, how it works under the hood, and how you can start using it today to make your life easier.
Your data platform needs to be scalable, fault-tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage, you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $60 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there, don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bulletproof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Tomer Shiran about Dremio, the open source data-as-a-service platform
- How did you get involved in the area of data management?
- Can you start by explaining what Dremio is and how the project and business got started?
- What was the motivation for keeping your primary product open source?
- What is the governance model for the project?
- How does Dremio fit in the current landscape of data tools?
- What are some use cases that Dremio is uniquely equipped to support?
- Do you think that Dremio obviates the need for a data warehouse or large scale data lake?
- How is Dremio architected internally?
- How has that architecture evolved from when it was first built?
- There are a large array of components (e.g. governance, lineage, catalog) built into Dremio that are often found in dedicated products. What are some of the strategies that you have as a business and development team to manage and integrate the complexity of the product?
- What are the benefits of integrating all of those capabilities into a single system?
- What are the drawbacks?
- One of Dremio’s most useful features is its granular access controls. Can you discuss how those are implemented and managed?
- For someone who is interested in deploying Dremio to their environment, what is involved in getting it installed?
- What are the scaling factors?
- What are some of the most exciting features that have been added in recent releases?
- When is Dremio the wrong choice?
- What have been some of the most challenging aspects of building, maintaining, and growing the technical and business platform of Dremio?
- What do you have planned for the future of Dremio?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Business Intelligence
- Power BI
- OLAP Cube
- Apache Foundation
- Nikon DSLR
- ETL (Extract, Transform, Load)
- Gandiva Initiative for Apache Arrow