There are countless sources of publicly available data, but combining those sources and making them useful in aggregate is a time-consuming and challenging process. The team at Enigma is building a knowledge graph from public data that you can use in your own data projects. In this episode Chris Groskopf explains the platform they have built to consume large varieties and volumes of public data and construct a graph that they serve to their customers. He discusses the challenges they face in scaling the platform and their engineering processes, as well as the workflow they have established for testing their ETL jobs. This is a great episode to listen to for ideas on how to structure a data engineering organization.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- You work hard to make sure that your data is reliable and accurate, but can you say the same about the deployment of your machine learning models? The Skafos platform from Metis Machine was built to give your data scientists the end-to-end support that they need throughout the machine learning lifecycle. Skafos maximizes interoperability with your existing tools and platforms, and offers real-time insights and the ability to be up and running with cloud-based production scale infrastructure instantaneously. Request a demo at dataengineeringpodcast.com/metis-machine to learn more about how Metis Machine is operationalizing data science.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Chris Groskopf about Enigma and how they are using public data sources to build a knowledge graph
- How did you get involved in the area of data management?
- Can you give a brief overview of what Enigma has built and what the motivation was for starting the company?
- How do you define the concept of a knowledge graph?
- What are the processes involved in constructing a knowledge graph? (see the triple sketch after this question list)
- Can you describe the overall architecture of your data platform and the systems that you use for storing and serving your knowledge graph? (a query sketch follows the list)
- What are the most challenging or unexpected aspects of building the knowledge graph that you have encountered?
- How do you manage the software lifecycle for your ETL code?
- What kinds of unit, integration, or acceptance tests do you run to ensure that you don’t introduce regressions in your processing logic? (an example test follows the list)
- What are the current challenges that you are facing in building and scaling your data infrastructure?
- How does the fact that your data sources are primarily public influence your pipeline design, and what challenges does it pose?
- What techniques are you using to manage accuracy and consistency in the data that you ingest?
- Can you walk through the lifecycle of the data that you process from acquisition through to delivery to your customers?
- What are the weak spots in your platform that you are planning to address in upcoming projects?
- If you were to start from scratch today, what would you have done differently?
- What are some of the most interesting or unexpected uses of your product that you have seen?
- What is in store for the future of Enigma?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
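For listeners who want a concrete picture of the triple-based structure discussed in the knowledge graph questions above, here is a minimal, illustrative sketch in Python. The entities, predicates, and datasets are hypothetical stand-ins; Enigma's actual platform is far more involved.

```python
# Minimal sketch of a knowledge graph as subject-predicate-object triples.
# All entities and facts below are invented examples, not Enigma data.
from collections import defaultdict

# Each triple links two nodes (or a node and a literal) with a named relationship.
triples = [
    ("acme_corp", "registered_in", "delaware"),            # e.g. a corporate registry
    ("acme_corp", "has_permit", "permit_1234"),            # e.g. a permits dataset
    ("permit_1234", "issued_by", "nyc_dept_of_buildings"),
    ("jane_doe", "officer_of", "acme_corp"),               # e.g. an officers filing
]

# Index triples by subject so we can walk outgoing edges from any entity.
graph = defaultdict(list)
for subject, predicate, obj in triples:
    graph[subject].append((predicate, obj))

# Answering "what do we know about acme_corp?" becomes a lookup plus traversal.
for predicate, obj in graph["acme_corp"]:
    print(f"acme_corp --{predicate}--> {obj}")
```

In a real pipeline each node would typically also carry provenance metadata pointing back to the public source it was extracted from, which is part of what makes cross-dataset entity resolution both valuable and hard.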
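Since AWS Neptune comes up in the episode links, here is a generic sketch of what serving queries against a property graph can look like from Python, using the standard TinkerPop gremlinpython driver. The endpoint, labels, and properties are placeholders; the episode does not describe Enigma's actual schema or queries.

```python
# Hypothetical sketch of querying a graph stored in AWS Neptune via Gremlin.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Neptune exposes a WebSocket Gremlin endpoint; replace with your cluster's.
conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Find entities connected to a (hypothetical) company node, one hop out.
neighbors = (
    g.V().has("company", "name", "Acme Corp")
         .out()           # follow any outgoing relationship
         .valueMap(True)  # include id and label alongside properties
         .limit(10)
         .toList()
)
print(neighbors)

conn.close()
```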
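As a companion to the testing question above, here is a sketch of what a unit test for a single ETL transform might look like with pytest. The `normalize_company_name` function and its normalization rules are invented for illustration, not Enigma's actual code.

```python
# Hypothetical example of unit-testing one pure ETL transform with pytest.
import re


def normalize_company_name(raw: str) -> str:
    """Strip punctuation, collapse whitespace, and expand a common suffix."""
    name = re.sub(r"[.,]", "", raw.strip().upper())
    name = re.sub(r"\s+", " ", name)
    return re.sub(r"\bINC\b", "INCORPORATED", name)


def test_normalize_company_name_handles_messy_public_data():
    # Public records are inconsistent; the transform should converge on one form,
    # and re-running it on already-clean input should change nothing.
    assert normalize_company_name("  Acme Corp., Inc. ") == "ACME CORP INCORPORATED"
    assert normalize_company_name("ACME CORP INCORPORATED") == "ACME CORP INCORPORATED"
```

Keeping transforms pure like this is what makes them cheap to regression-test without standing up the full pipeline.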
- Chicago Tribune
- Knowledge Graph
- Data Lake
- AWS Neptune
- AWS Batch
- Money Laundering
- Jupyter Notebook
- Cauldron: The Un-Notebook