Data is often messy or incomplete, requiring human intervention to make sense of it before it can be used as input to machine learning projects. This becomes problematic when the volume scales beyond a handful of records. In this episode Dr. Cheryl Martin, Chief Data Scientist for Alegion, discusses the importance of properly labeled information for machine learning and artificial intelligence projects, the systems her team has built to scale the process of incorporating human intelligence into data preparation, and the challenges inherent to such an endeavor.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Your host is Tobias Macey and today I’m interviewing Cheryl Martin, Chief Data Scientist at Alegion, about data labeling at scale
- How did you get involved in the area of data management?
- To start, can you explain the problem space that Alegion is targeting and how you operate?
- When is it necessary to include human intelligence as part of the data lifecycle for ML/AI projects?
- What are some of the biggest challenges associated with managing human input to data sets intended for machine usage?
- For someone who is acting as a human-intelligence provider as part of the workforce, what does their workflow look like?
- What tools and processes do you have in place to ensure the accuracy of their inputs?
- How do you prevent bad actors from contributing data that would compromise the trained model?
- What are the limitations of crowd-sourced data labels?
- When is it beneficial to incorporate domain experts in the process?
- When doing data collection from various sources, how do you ensure that intellectual property rights are respected?
- How do you determine the taxonomies to be used for structuring data sets that are collected, labeled, or enriched for your customers?
- What kinds of metadata do you track and how is that recorded/transmitted?
- Do you think that human intelligence will be a necessary piece of ML/AI forever?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- University of Texas at Austin
- Cognitive Science
- Labeled Data
- Mechanical Turk
- Computer Vision
- Sentiment Analysis
- Speech Recognition
- Feature Engineering