While the concept of timeseries data is uniform, its usage patterns and applications vary widely. One of the most demanding applications of timeseries data is application and server monitoring, due to the problem of high cardinality. In his quest to build a generalized platform for managing timeseries data, Paul Dix keeps getting pulled back into the monitoring arena. In this episode he shares the history of the InfluxDB project, the business that he has helped build around it, and the architectural aspects of the engine that allow for its flexibility in managing various forms of timeseries data. This is a fascinating exploration of the technical and organizational evolution of the Influx Data platform, with some promising glimpses of where they are headed in the near future.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
- We’ve all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way. With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to dataengineeringpodcast.com/census today to get a free 14-day trial.
- Your host is Tobias Macey and today I’m interviewing Paul Dix about Influx Data and the different facets of the market for timeseries databases
- How did you get involved in the area of data management?
- Can you describe what you are building at Influx Data and the story behind it?
- Timeseries data is a fairly broad category with many variations in terms of storage volume, frequency, processing requirements, etc. This has led to an explosion of database engines and related tools to address these different needs. How do you think about your position and role in the ecosystem?
- Who are your target customers and how does that focus inform your product and feature priorities?
- What are the use cases that Influx is best suited for?
- Can you give an overview of the different projects, tools, and services that comprise your platform?
- How is InfluxDB architected?
- How have the design and implementation of the DB engine changed or evolved since you first began working on it?
- What are you optimizing for on the consistency vs. availability spectrum of CAP?
- What is your approach to clustering/data distribution beyond a single node?
- For the interface to your database engine you developed a custom query language. What was your process for deciding what syntax to use and how to structure the programmatic interface?
- How do you handle the lifecycle of data in an Influx deployment? (e.g. aging out old data, periodic compactions/rollups, etc.)
- With your strong focus on monitoring use cases, how do you handle the challenge of high cardinality in the data being stored?
- What are some of the data modeling considerations that users should be aware of as they are designing a deployment of Influx?
- What is the role of open source in your product strategy?
- What are the most interesting, innovative, or unexpected ways that you have seen the Influx platform used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Influx?
- When is InfluxDB and/or the associated tools the wrong choice?
- What do you have planned for the future of Influx Data?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Influx Data
- InfluxDB
- Search and Information Retrieval
- New Relic
- Latent Semantic Indexing
- TICK Stack
- ELK Stack
- TSM storage engine
- TSI storage engine
- Rust Language
- Raft Protocol
- Flux Language
- Apache Arrow
- Apache Parquet