Making Analytical APIs Fast With Tinybird - Episode 185

Summary

Building an API for real-time data is a challenging project. Making it robust, scalable, and fast is a full-time job. The team at Tinybird wants to make it easy to turn a continuous stream of data into a production-ready API or data product. In this episode, CEO Jorge Sancha explains how they have architected their system to handle high data throughput and fast response times, and why they have invested heavily in ClickHouse as the core of their platform. This is a great conversation about the challenges of building a maintainable business from a technical and product perspective.

Ascend.io, the data engineering company, provides the flex-code data platform for autonomous pipelines that frees data teams to spend more time innovating. Data pipelines are the backbone of modern data systems. However, data engineers are overburdened with building and maintaining brittle pipelines, which creates a backlog that prevents data analysts and data scientists from accessing critical information. The Ascend Unified Data Engineering Platform removes these bottlenecks and enables teams to create self-service data pipelines that dynamically adapt to changes in data, code, and environment.

In a radical departure from data orchestration solutions that require excessive coding, Ascend democratizes data engineering with 10x faster build velocity, automated maintenance, and 95% less code. DataAware™ intelligence understands and tracks every piece of data, enabling data pipelines to run at optimal efficiency with integrated lineage tracking, auditability, and governance. The cloud-native platform is available fully hosted, as well as for private cloud deployments on Amazon Web Services, Microsoft Azure, or Google Cloud Platform. Ascend.io accelerates the journey from prototype to production and helps leading organizations achieve faster time to value. 


RudderStack is the smart customer data pipeline. It takes the toil out of building data pipelines that connect your whole customer data stack. Its easy-to-use SDKs and source integrations, Cloud Extract integrations, transformations, and expansive library of destination and warehouse integrations make building customer data pipelines for both event streaming and cloud-to-warehouse ELT simple. RudderStack’s warehouse-first approach and Warehouse Actions functionality make your customer data stack smarter by enabling analysis and modeling in your data warehouse to trigger enrichment and activation in all of your customer tools. Start building smarter customer data pipelines today with RudderStack. Visit dataengineeringpodcast.com/rudder to learn more and sign up for our no credit card required, no time limit free tier.


Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!


Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and Zendesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
  • Ascend.io — recognized as a 2021 Gartner Cool Vendor in Enterprise AI Operationalization and Engineering—empowers data teams to build, scale, and operate declarative data pipelines with 95% less code and zero maintenance. Connect to any data source using Ascend’s new flex-code data connectors, rapidly iterate on transformations, and send data to any destination in a fraction of the time it traditionally takes—just ask companies like Harry’s, HNI, and Mayvenn. Sound exciting? Come join the team! We’re hiring data engineers, so head on over to dataengineeringpodcast.com/ascend and check out our careers page to learn more.
  • Your host is Tobias Macey and today I’m interviewing Jorge Sancha about Tinybird, a platform to easily build analytical APIs for real-time data

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by describing what you are building at Tinybird and the story behind it?
  • What are some of the types of use cases that your customers are focused on?
  • What are the areas of complexity that come up when building analytical APIs that are often overlooked when first designing a system to operate on and expose real-time data?
    • What are the supporting systems that are necessary and useful for operating this kind of system which contribute to the overall time and engineering cost beyond the baseline functionality?
  • How is the Tinybird platform architected?
    • How have the goals and implementation of Tinybird changed or evolved since you first began building it?
  • What were your criteria for selecting the core building block of your platform, and how did that lead to your choice to build on top of ClickHouse?
  • What are some of the sharp edges that you have run into while operating ClickHouse?
    • What are some of the custom tools or systems that you have built to help deal with them?
  • What are some of the performance challenges that an API built with Tinybird might run into?
    • What are the considerations that users should be aware of to avoid introducing performance issues?
  • How do you handle multi-tenancy in your platform? (e.g. separate clusters, in-database quotas, etc.)
  • For users of Tinybird, can you talk through the workflow of getting it integrated into their platform and designing an API from their data?
  • What are some of the most interesting, innovative, or unexpected ways that you have seen Tinybird used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing Tinybird?
  • When is Tinybird the wrong choice?
  • What do you have planned for the future of the product and business?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Liked it? Take a second to support the Data Engineering Podcast on Patreon!