Gain Visibility Into Your Entire Machine Learning System Using Data Logging With WhyLogs


April 24th, 2022

59 mins 3 secs

Your Host

Tobias Macey

About this Episode

Summary

There are very few tools that are equally useful for data engineers, data scientists, and machine learning engineers. whylogs is a powerful library for flexibly instrumenting all of your data systems so that you can understand the entire lifecycle of your data, from source to productionized model. In this episode Andy Dang explains why the project was created, how you can apply it to your existing data systems, and how it provides the detailed context you need to gain insight into all of your data processes.
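To make the idea of "data logging" concrete, here is a minimal sketch of what profiling a batch of data with whylogs can look like in Python. The `why.log()`, `view()`, and `to_pandas()` calls reflect the v1-style API; exact names may differ between whylogs versions, and the column names are purely illustrative.

```python
import pandas as pd
import whylogs as why

# Any tabular batch: a training set, a scoring batch, or a pipeline output.
df = pd.DataFrame({
    "age": [34, 52, 29, 41],
    "income": [72000.0, 88000.0, None, 61000.0],
    "segment": ["a", "b", "a", "c"],
})

# Unlike text logging, why.log() builds a statistical profile of the batch
# (counts, inferred types, distributions, missing values) rather than
# recording individual rows or messages.
results = why.log(df)
profile_view = results.view()

# The profile can be inspected locally, or persisted and merged with
# profiles from other batches for drift and data-quality analysis.
print(profile_view.to_pandas())
```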

Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga and others. Acryl Data provides DataHub as an easy to consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl
  • RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.
  • The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality. PostHog is your all-in-one product analytics suite including product analysis, user funnels, feature flags, experimentation, and it’s open source so you can host it yourself or let them do it for you! You have full control over your data and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog
  • Your host is Tobias Macey and today I’m interviewing Andy Dang about powering observability of AI systems with the whylogs data logging library

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what WhyLabs is and the story behind it?
  • How is "data logging" differentiated from logging for the purpose of debugging and observability of software logic?
  • What are the use cases that you are aiming to support with whylogs?
    • How does it compare to libraries and services like Great Expectations, Monte Carlo, Soda Data, Datafold, etc.?
  • Can you describe how whylogs is implemented?
    • How have the design and goals of the project changed or evolved since you started working on it?
  • How do you maintain feature parity between the Python and Java integrations?
  • How do you structure the log events and metadata to provide detail and context for data applications?
    • How does that structure support aggregation and interpretation/analysis of the log information?
  • What is the process for integrating whylogs into an existing project?
    • Once you have the code instrumented with log events, what is the workflow for using whylogs to debug and maintain a data application? (A hypothetical integration sketch follows this list.)
  • What have you found to be useful heuristics for identifying what to log?
  • What are some of the strategies that teams can use to maintain a balance of signal vs. noise in the events that they are logging?
  • How is the whylogs governance set up and how are you approaching sustainability of the open source project?
  • What are the additional utilities and services that you anticipate layering on top of/integrating with whylogs?
  • What are the most interesting, innovative, or unexpected ways that you have seen whylogs used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on WhyLabs?
  • When is whylogs/WhyLabs the wrong choice?
  • What do you have planned for the future of WhyLabs?
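As a companion to the integration questions above, this is a hypothetical sketch of what instrumenting an existing pipeline step with whylogs might look like. The `score_batch` function, the column names, and the `writer("local")` call are assumptions for illustration; the actual integration path and writer targets depend on your pipeline and whylogs version.

```python
import pandas as pd
import whylogs as why

def score_batch(df: pd.DataFrame) -> pd.DataFrame:
    # Placeholder for existing transformation or model-scoring logic.
    return df

batch = pd.DataFrame({"feature": [1.2, 3.4, 5.6], "label": [0, 1, 0]})

# Profile the batch alongside the normal pipeline work.
results = why.log(batch)

# Persist the profile next to the pipeline run so it can be aggregated or
# compared against earlier runs when debugging drift or data-quality issues.
results.writer("local").write()

scored = score_batch(batch)
```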

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast