Data Collection And Management To Power Sound Recognition At Audio Analytic


June 29th, 2020

57 mins 28 secs

Your Host

About this Episode


We have machines that can listen to and process human speech in a variety of languages, but dealing with unstructured sounds in our environment is a much greater challenge. The team at Audio Analytic is working to impart a sense of hearing to our myriad devices with their sound recognition technology. In this episode Dr. Chris Mitchell and Dr. Thomas le Cornu describe the challenges they face in the collection and labelling of high quality data to make this possible, including the lack of a publicly available collection of audio samples to work from, the need for custom metadata throughout the processing pipeline, and the necessity of customized data processing tools for working with sound data. This was a great conversation about the complexities of working in a niche domain of data analysis and how to build a pipeline of high quality data from collection to analysis.


  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to to add your voice and share your hard-earned expertise.
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to to check out the upcoming events being offered by our partners and get registered today!
  • Your host is Tobias Macey and today I’m interviewing Dr. Chris Mitchell and Dr. Thomas le Cornu about Audio Analytic, a company that is building sound recognition technology that is giving machines a sense of hearing beyond speech and music


  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by describing what you are building at Audio Analytic?
    • What was your motivation for building an AI platform for sound recognition?
  • What are some of the ways that your platform is being used?
  • What are the unique challenges that you have faced in working with arbitrary sound data?
  • How do you handle the collection and labelling of the source data that you rely on for building your models?
    • Beyond just collection and storage, what is your process for defining a taxonomy of the audio data that you are working with?
    • How has the taxonomy had to evolve, and what assumptions have had to change, as you progressed in building the data set and the resulting models?
  • What are the challenges of building an embeddable AI model?
    • How do you manage the update cycle for models that are deployed on devices?
  • How do you identify relevant audio and deal with literal noise in the input data?
  • What rights and ownership challenges do you face in the collection of source data?
  • What was your design process for constructing a pipeline for the audio data that you need to process?
  • Can you describe how your overall data management system is architected?
    • How has that architecture evolved since you first began building and using it?
  • A majority of data tools are oriented around, and optimized for, collection and processing of textual data. How much off-the-shelf technology have you been able to use for working with audio?
  • What are some of the assumptions that you made at the start which have been shown to be inaccurate or in need of reconsidering?
  • How do you address variability in the duration of source samples in the processing pipeline?
  • How much of an issue do you face as a result of the variable quality of microphones in the embedded devices where the model is being run?
  • What are the limitations of the model in dealing with complex and layered audio environments?
    • How has the testing and evaluation of your model fed back into your strategies for collecting source data?
  • What are some of the weirdest or most unusual sounds that you have worked with?
  • What have been the most interesting, unexpected, or challenging lessons that you have learned in the process of building the technology and business of Audio Analytic?
  • What do you have planned for the future of the company?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
  • Join the community in the new Zulip chat workspace at


The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast