The conventional approach to analytics involves collecting large volumes of data, cleaning it, and then analyzing and interpreting it in a separate step. Unfortunately, this strategy is not viable for real-time, real-world use cases such as traffic management or supply chain logistics. In this episode Simon Crosby, CTO of Swim Inc., explains how the SwimOS kernel and the enterprise data fabric built on top of it enable brand new use cases for instant insights. This was an eye-opening conversation about how stateful computation over data streams from edge devices can reduce cost and complexity compared to batch-oriented workflows.
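To make the contrast concrete, here is a minimal sketch of the stateful-streaming idea discussed in the episode. This is illustrative only and does not use the SwimOS API: each edge entity gets a small stateful "digital twin" object that updates its own summary as events arrive, so insights are available per event and no raw history needs to be stored for later batch analysis. All names (`DeviceTwin`, `on_event`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DeviceTwin:
    """Keeps a running summary for one device; no raw history is stored."""
    count: int = 0
    mean: float = 0.0

    def observe(self, value: float) -> None:
        # Incremental mean update: O(1) state per device, updated per event.
        self.count += 1
        self.mean += (value - self.mean) / self.count

# One stateful twin per edge device, keyed by device id.
twins: dict[str, DeviceTwin] = {}

def on_event(device_id: str, value: float) -> DeviceTwin:
    twin = twins.setdefault(device_id, DeviceTwin())
    twin.observe(value)
    return twin  # the insight is available immediately, not after a batch job

# Events are processed as they arrive rather than landed and reprocessed later.
for dev, v in [("sensor-1", 10.0), ("sensor-1", 20.0), ("sensor-2", 5.0)]:
    on_event(dev, v)
```

The design point is that each twin holds only constant-size state, so the memory and compute cost scales with the number of entities rather than with the volume of raw data, which is the trade-off the episode contrasts with batch pipelines.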
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- Listen, I’m sure you work for a ‘data driven’ company – who doesn’t these days? Does your company use Amazon Redshift? Have you ever groaned over slow queries or are just afraid that Amazon Redshift is gonna fall over at some point? Well, you’ve got to talk to the folks over at intermix.io. They have built the “missing” Amazon Redshift console – it’s an amazing analytics product for data engineers to find and re-write slow queries and gives actionable recommendations to optimize data pipelines. WeWork, Postmates, and Medium are just a few of their customers. Go to dataengineeringpodcast.com/intermix today and use promo code DEP at sign up to get a $50 discount!
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers, you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, and Data Council. Upcoming events include the O’Reilly AI conference, the Strata Data conference, the combined events of the Data Architecture Summit and Graphorum, and Data Council in Barcelona. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
- Your host is Tobias Macey and today I’m interviewing Simon Crosby about Swim.ai, a data fabric for the distributed enterprise
- How did you get involved in the area of data management?
- Can you start by explaining what Swim.ai is and how the project and business got started?
- Can you explain the differentiating factors between the SwimOS and Data Fabric platforms that you offer?
- What are some of the use cases that are enabled by the Swim platform that would otherwise be impractical or intractable?
- How does Swim help alleviate the challenges of working with sensor oriented applications or edge computing platforms?
- Can you describe a typical design for an application or system being built on top of the Swim platform?
- What does the developer workflow look like?
- What kind of tooling do you have for diagnosing and debugging errors in an application built on top of Swim?
- Can you describe the internal design for the SwimOS and how it has evolved since you first began working on it?
- For such widely distributed applications, efficient discovery and communication is essential. How does Swim handle that functionality?
- What mechanisms are in place to account for network failures?
- Since the application nodes are explicitly stateful, how do you handle scaling as compared to a stateless web application?
- Since there is no explicit data layer, how is data redundancy handled by Swim applications?
- What are some of the most interesting/unexpected/innovative ways that you have seen the Swim technology used?
- What have you found to be the most challenging aspects of building the Swim platform?
- What are some of the assumptions that you had going into the creation of SwimOS and how have they been challenged or updated?
- What do you have planned for the future of the technical and business aspects of Swim.ai?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Streaming Data
- Apache Flink
- Apache Kafka
- Digital Twin
- Swim Concepts Documentation
- RFID == Radio Frequency IDentification
- PCB == Printed Circuit Board
- Graal VM
- Azure IoT Edge Framework
- Azure DLS (Data Lake Storage)
- Power BI
- WARP Protocol
Support Data Engineering Podcast