Web and mobile analytics are an important part of any business, and they are difficult to get right. The most frustrating part is realizing that you haven’t been tracking a key interaction, having to write custom logic to add that event, and then waiting to collect data. Heap is a platform that automatically tracks every event, so you can retroactively decide which actions are important to your business and easily build reports with or without SQL. In this episode Dan Robinson, CTO of Heap, describes how they have architected their data infrastructure, how they build their tracking agents, and the data virtualization layer that enables users to define their own labels.
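The capture-everything model described above can be sketched in a few lines. This is a toy illustration of the idea, not Heap's actual implementation; all names and structures here are hypothetical:

```python
# Sketch of autocapture + retroactive event definition: record every raw
# interaction up front, then define "virtual" events later as predicates
# over the full history, so new definitions need no new tracking code.
from dataclasses import dataclass, field

@dataclass
class RawEvent:
    kind: str       # e.g. "click", "pageview"
    selector: str   # CSS selector of the element interacted with
    page: str       # page the event occurred on

@dataclass
class EventStore:
    events: list = field(default_factory=list)

    def capture(self, event: RawEvent) -> None:
        # Autocapture: store everything, no up-front instrumentation.
        self.events.append(event)

    def query(self, definition) -> list:
        # A "virtual event" is just a filter applied retroactively.
        return [e for e in self.events if definition(e)]

store = EventStore()
store.capture(RawEvent("click", "button.signup", "/pricing"))
store.capture(RawEvent("click", "a.docs", "/home"))
store.capture(RawEvent("pageview", "", "/pricing"))

# Defined *after* the data was collected, yet it matches historical events:
signup_clicks = store.query(
    lambda e: e.kind == "click" and e.selector == "button.signup"
)
```

The key property is that `signup_clicks` is populated even though the definition was written after the events were recorded, which is what makes retroactive analysis possible.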
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- For complete visibility into the health of your pipeline, including deployment tracking and powerful alerting driven by machine learning, DataDog has got you covered. With their monitoring, metrics, and log collection agent, including extensive integrations and distributed tracing, you’ll have everything you need to find and fix performance bottlenecks in no time. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial and get a sweet new t-shirt.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Your host is Tobias Macey and today I’m interviewing Dan Robinson about Heap and their approach to collecting, storing, and analyzing large volumes of data
- How did you get involved in the area of data management?
- Can you start by giving a brief overview of Heap?
- One of your differentiating features is the fact that you capture every interaction on web and mobile platforms for your customers. How do you prevent the user experience from suffering as a result of network congestion, while ensuring the reliable delivery of that data?
- Can you walk through the lifecycle of a single event from source to destination and the infrastructure components that it traverses to get there?
- Data collected in a user’s browser can often be messy due to various browser plugins, variations in runtime capabilities, etc. How do you ensure the integrity and accuracy of that information?
- What are some of the difficulties that you have faced in establishing a representation of events that allows for uniform processing and storage?
- What is your approach for merging and enriching event data with the information that you retrieve from your supported integrations?
- What challenges does that pose in your processing architecture?
- What are some of the problems that you have had to deal with to allow for processing and storing such large volumes of data?
- How has that architecture changed or evolved over the life of the company?
- What are some changes that you are anticipating in the near future?
- Can you describe your approach for synchronizing customer data with their individual Redshift instances and the difficulties that entails?
- What are some of the most interesting challenges that you have faced while building the technical and business aspects of Heap?
- What changes have been necessary as a result of GDPR?
- What are your plans for the future of Heap?
- @danlovesproofs on Twitter
- @drob on GitHub
- heapanalytics.com / @heap on Twitter
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- User Analytics
- Google Analytics
- Chaos Engineering
- Heap SQL
- Data Virtualization