The past year has been an active one for the timeseries market. New products have been launched, more businesses have moved to streaming analytics, and the team at Timescale has been keeping busy. In this episode Timescale CEO Ajay Kulkarni and CTO Michael Freedman stop by to talk about their 1.0 release, how the use cases for timeseries data have proliferated, and how they are continuing to simplify the task of processing your time-oriented events.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m welcoming Ajay Kulkarni and Mike Freedman back to talk about how TimescaleDB has grown and changed over the past year
- How did you get involved in the area of data management?
- Can you refresh our memory about what TimescaleDB is?
- How has the market for timeseries databases changed since we last spoke?
- What has changed in the focus and features of the TimescaleDB project and company?
- Toward the end of 2018 you launched the 1.0 release of Timescale. What were your criteria for establishing that milestone?
- What were the most challenging aspects of reaching that goal?
- In terms of timeseries workloads, what are some of the factors that differ across varying use cases?
- How do those differences impact the ways in which Timescale is used by the end user, and built by your team?
- What are some of the initial assumptions that you made while first launching Timescale that have held true, and which have been disproven?
- How have the improvements and new features in the recent releases of PostgreSQL impacted the Timescale product?
- Have you been able to leverage some of the native improvements to simplify your implementation?
- Are there any use cases that were previously impractical in vanilla PostgreSQL, and that recent releases now make reasonable even without the help of Timescale?
- What is in store for the future of the Timescale product and organization?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Original Appearance on the Data Engineering Podcast
- 1.0 Release Blog Post
- IoT (Internet of Things)
- AWS Timestream
- OLTP (Online Transaction Processing)
- Oracle DB
- Data Lake