Navigating Boundless Data Streams With The Swim Kernel - Episode 98

Summary

The conventional approach to analytics involves collecting large amounts of data that can be cleaned, followed by a separate step for analysis and interpretation. Unfortunately this strategy is not viable for handling real-time, real-world use cases such as traffic management or supply chain logistics. In this episode Simon Crosby, CTO of Swim Inc., explains how the SwimOS kernel and the enterprise data fabric built on top of it enable brand new use cases for instant insights. This was an eye opening conversation about how stateful computation of data streams from edge devices can reduce cost and complexity as compared to batch oriented workflows.

Listen, I’m sure you work for a ‘data driven’ company – who doesn’t these days? Does your company use Amazon Redshift? Have you ever groaned over slow queries or are just afraid that Amazon Redshift is gonna fall over at some point?

Well, you GOTTA talk to the folks over at intermix.io. They have built the “missing” Amazon Redshift console – it’s an amazing analytics product for data engineers to find and re-write slow queries and gives actionable recommendations to optimize data pipelines. WeWork, Postmates, and Medium are just a few of their customers.

DEP listeners get a $50 discount! Just go to dataengineeringpodcast.com/intermix and use promo code DEP at sign up.


Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at linode.com/dataengineeringpodcast or use the code dataengineering2019 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.


Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
  • Listen, I’m sure you work for a ‘data driven’ company – who doesn’t these days? Does your company use Amazon Redshift? Have you ever groaned over slow queries or are just afraid that Amazon Redshift is gonna fall over at some point? Well, you’ve got to talk to the folks over at intermix.io. They have built the “missing” Amazon Redshift console – it’s an amazing analytics product for data engineers to find and re-write slow queries and gives actionable recommendations to optimize data pipelines. WeWork, Postmates, and Medium are just a few of their customers. Go to dataengineeringpodcast.com/intermix today and use promo code DEP at sign up to get a $50 discount!
  • You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management.For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, and Data Council. Upcoming events include the O’Reilly AI conference, the Strata Data conference, the combined events of the Data Architecture Summit and Graphorum, and Data Council in Barcelona. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
  • Your host is Tobias Macey and today I’m interviewing Simon Crosby about Swim.ai, a data fabric for the distributed enterprise

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by explaining what Swim.ai is and how the project and business got started?
    • Can you explain the differentiating factors between the SwimOS and Data Fabric platforms that you offer?
  • What are some of the use cases that are enabled by the Swim platform that would otherwise be impractical or intractable?
  • How does Swim help alleviate the challenges of working with sensor oriented applications or edge computing platforms?
  • Can you describe a typical design for an application or system being built on top of the Swim platform?
    • What does the developer workflow look like?
      • What kind of tooling do you have for diagnosing and debugging errors in an application built on top of Swim?
  • Can you describe the internal design for the SwimOS and how it has evolved since you first began working on it?
  • For such widely distributed applications, efficient discovery and communication is essential. How does Swim handle that functionality?
    • What mechanisms are in place to account for network failures?
  • Since the application nodes are explicitly stateful, how do you handle scaling as compared to a stateless web application?
  • Since there is no explicit data layer, how is data redundancy handled by Swim applications?
  • What are some of the most interesting/unexpected/innovative ways that you have seen the Swim technology used?
  • What have you found to be the most challenging aspects of building the Swim platform?
  • What are some of the assumptions that you had going into the creation of SwimOS and how have they been challenged or updated?
  • What do you have planned for the future of the technical and business aspects of Swim.ai?

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
  • Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Raw Transcript
Tobias Macey
0:00:10
Hello, and welcome to the Data Engineering Podcast, the show about modern data management. When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it. So check out our friends over at Linode. With 200 gigabit private networking, scalable shared block storage and a 40 gigabit public network, you've got everything you need to run a fast, reliable and bulletproof data platform. If you need global distribution, they've got that covered too with worldwide data centers, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode, that's L-I-N-O-D-E, today to get a $20 credit and launch a new server in under a minute. And don't forget to thank them for their continued support of this show. And listen, I'm sure you work for a data driven company. Who doesn't these days? Does your company use Amazon Redshift? Have you ever groaned over slow queries, or are you just afraid that Amazon Redshift is going to fall over at some point? Well, you've got to talk to the folks over at intermix.io. They have built the missing Amazon Redshift console. It's an amazing analytics product for data engineers to find and rewrite slow queries, and it gives actionable recommendations to optimize data pipelines. WeWork, Postmates, and Medium are just a few of their customers. Go to dataengineeringpodcast.com/intermix today and use promo code DEP at sign up to get a $50 discount. And you listen to this show to learn and stay up to date with what's happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers, you don't want to miss out on this year's conference season. We have partnered with organizations such as O'Reilly Media, Dataversity, Corinium Global Intelligence, and Data Council. Upcoming events include the O'Reilly AI Conference, the Strata Data Conference, the combined events of the Data Architecture Summit and Graphorum, and Data Council in Barcelona. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and to take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey, and today I'm interviewing Simon Crosby about Swim.ai, the data fabric for the distributed enterprise. So Simon, can you start by introducing yourself?
Simon Crosby
0:02:28
Hi, I'm Simon Crosby. I am the CTO, I guess, of long duration. I've been around for a long time. And it's a privilege to be with the Swim folks, who have been building this fabulous platform for streaming data for about five years.
Tobias Macey
0:02:49
And do you remember how you first got involved in the area of data management?
Simon Crosby
0:02:53
Well, I have a PhD in applied mathematics and probability, so I am kind of not a data management guy, I'm an analysis guy. I like what comes out of, you know, streams of data and what inference you can draw from it. So my background is more on the analytical side. And then along the way, I began to see how to build big infrastructure for it.
Tobias Macey
0:03:22
And now you have taken up the position as CTO for Swim.ai. I'm wondering if you can explain a bit about what the platform is and how the overall project and business got started?
Simon Crosby
0:03:33
Sure. So here's the problem. We're all reading all the time about these wonderful things that you can do with machine learning, and streaming data, and so on; it all involves cloud and other magical things. And in general, most organizations just don't know how to make head or tail of that, for a bunch of reasons; it's just too hard to get there. So if you're an organization with assets that are chipping out lots of data, and that could be a bunch of different types, you know, you probably don't have the skill set in house to deal with a vast amount of information. And we're talking about boundless data sources here, things that never stop. And so to deal with the data flow pipelines, to deal with the data itself, to deal with the learning and inferences you might draw from that, and so on, enterprises have a huge skill set challenge. There is also a cost challenge, because today's techniques for drawing inference from data in general revolve around, you know, large, expensive data lakes, either in house or perhaps in the cloud. And then finally, there's a challenge with the timeliness within which you can draw an insight. Most folks today believe that you store data, and then you think about it in some magical way, and you draw inference from that. And we're all suffering from the Hadoop, Cloudera, I guess, after-effects. Really, this notion of storing and then analyzing needs to be dispensed with for fast data; certainly for boundless data sources that will never stop, it's really inappropriate. So when I talk about boundless data today, we're going to talk about data streams that just never stop, and about the need to derive insights from that data on the fly, because if you don't, something will go wrong. So it's of the type that would stop your car before you hit the pedestrian in the crosswalk, that kind of stuff. So for that kind of data, there's just no chance to, you know, store it down to hard disk first.
Tobias Macey
0:06:16
And how would you differentiate the work that you're doing with the Swim.ai platform and the SwimOS kernel from things that are being done with tools such as Flink, or other streaming systems such as Kafka, which has now got capabilities for being able to do some limited streaming analysis on the data as it flows through, or also platforms such as Wallaroo that are built for being able to do stateful computations on data streams?
Simon Crosby
0:06:44
So first of all, there have been some major steps forward, and anything we do, we stand on the shoulders of giants. Let's start off with distinguishing between the large enterprise skill set that's out there, and the cloud world. And all the things you mentioned live in the cloud world. So that's the first distinction: most people in the enterprise, when you said Flink, wouldn't know what the hell you're talking about. Okay, similarly Wallaroo or anything else, they just wouldn't know what you're talking about. And so there is a major problem with the tools and technologies that are built for the cloud, really for cloud native applications, and the majority of enterprises, who are just stuck with legacy IT and an application skill set, and are still coming up to speed with the right thing to do. And to be honest, they're getting over the headache of Hadoop. So then, if we talk about the cloud native world, there is a fascinating distinction between all the various projects which have started to tackle streaming data. And there has been some major progress made there, I'd be delighted to point out, Swim being one of them, and we can go into each one of those projects in detail as we go forward. The key point being that, first and foremost, the large majority of enterprises just don't know what to do.
Tobias Macey
0:08:22
And then within your specific offerings, there is the Data Fabric platform, which you're targeting for enterprise consumers, and then there's also the open source kernel of that in the form of SwimOS. I'm wondering if you can provide some explanation as to what are the differentiating factors between those two products, and the sort of decision points for when somebody might want to use one versus the other?
Simon Crosby
0:08:50
Yeah, let's look first at the distinction between the application layer and the infrastructure needed to run a large distributed data flow pipeline. And so for Swim, all of the application layer stuff, everything you need to build an app, is entirely open source. Some of the capabilities that you want to run a large distributed data pipeline are proprietary. And that's really just because, you know, we're building a business around this; we plan to open source more and more features over time.
Tobias Macey
0:09:29
And then as far as the primary use cases that you are enabling with the Swim platform, and some of the different ways that enterprise organizations are implementing it, what are some of the cases where using something other than Swim, either the OS or the Data Fabric layer, would be either impractical or intractable, if they were trying to use more traditional approaches such as Hadoop, as you mentioned, or a data warehouse and more batch oriented workflows?
Simon Crosby
0:09:58
So let's start off describing what Swim does, can I do that? That might help. In our view, it's our job to build the pipeline, and indeed the model, from the data. Okay, so Swim just wants data, and from the data we will automatically build this typical data flow pipeline. And indeed, from that, we will build a model of arbitrarily interesting complexity, which allows us to solve some very interesting problems. Okay. So the Swim perspective starts with data, because that's where our customers' journey starts. They have lots and lots of data, and they don't know what to do with it. And so the approach we take in Swim is to allow the data to build the model. Now, you would naturally say that's impossible in general, but what it requires is some ontology at the edge, which describes the data. You could think of it as a schema, in fact, basically to describe what data items mean, in some sort of useful sense to us as modelers. But then, given data, Swim will build that model. So let me give you an example. Given a relatively simple ontology for traffic and traffic equipment, so the lights, the loops in the road, and so on, Swim will build a model which has a stateful digital twin for every sensor, for every source of data, which is running concurrently in some distributed fabric, and processes its own raw data and statefully evolves, okay. So simply given that ontology, Swim knows how to build stateful, concurrent little things we call web agents, actually, and yeah, I'm using that term,
0:12:18
I guess the same as digital twin.
0:12:21
And these are concurrent things which are going to statefully process raw data and represent it in a meaningful way. And the cool thing about that is that each one of these little digital twins exists in a context, a real world context, that Swim is going to discover for us. So for example, an intersection might have 60 to 80 sensors, so there's this notion of containment, but also, intersections are adjacent to other intersections in the real world map, and so on. That notion of adjacency is also a real world relationship. And in Swim, this notion of a link allows us to express the real world relationships between these little digital twins. And linking in Swim has this wonderful additional property, which is to allow us to express, essentially, a sub: in Swim, there is never a pub, but there is a sub. And if something links to something else, so if I link to you, then it's like LinkedIn for things: I get to see the real time updates of the in-memory state held by that digital twin. So digital twins link to digital twins courtesy of real world relationships, such as containment or proximity. We can even do other relationships, like correlation,
0:14:05
so they're also linked to each other, which allows them to share data.
0:14:09
And sharing data allows interesting computational properties to be derived. For example, we can learn and predict. Okay, so job one is to define the ontology. Swim goes and builds a graph, a graph of digital twins, which is constructed entirely from the data, and then the linking happens as part of that. And that allows us to then construct interesting computations.
0:14:45
Is that useful?
Tobias Macey
0:14:46
Yes, that's definitely helpful to get an idea of some of the use cases, and some of the ways that the different concepts within Swim work together, and to be able to build out what a sort of conceptual architecture would be for an application that would utilize Swim.
Simon Crosby
0:15:03
So the key thing here is, I'm talking about an application. Let's say the application is to predict the future, the future traffic in a city, or what's going to happen in the traffic grid right now. Now, I could do that for a bunch of different cities; what I can tell you is I need a model for each city. And there are two ways to build a model. One way is I get a data scientist and have them build it, or maybe they train it, and a whole bunch of other things, and I'm going to have to do this for every single city where I want to use this application. The other way to do it is to build the model from the data. And that's the approach. So what Swim does is, simply given the ontology, build these little digital twins, which are representatives of the real world things, get them to statefully evolve, and then link them to other things, you know, to represent real world relationships. And then suddenly, hey presto, you have built a large graph, which is effectively the model that you would otherwise have had to have a human build, right? So it's constructed, in the sense that in any new city you go to, this thing is just going to unbundle, and just given a stream of data, it will build a model which represents the things that are the sources of data and their physical relationships. Does that make sense?
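To make that concrete, here is a minimal plain-Java sketch of the idea being described: a stateful digital twin per data source that evolves its in-memory state on every raw sample and holds links to related twins. The class, field, and method names are illustrative assumptions, not the actual SwimOS API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a stateful "digital twin" per data source: each twin
// keeps its own in-memory state, evolves it on every raw sample, and holds
// links to related twins (containment, adjacency). Plain Java, for
// illustration only; not the actual SwimOS web agent API.
public class SensorTwin {
    private final String uri;                 // identity of the real-world source
    private double lastValue;                 // in-memory state, evolved per sample
    private long lastUpdatedMillis;
    private final List<SensorTwin> links = new ArrayList<>(); // real-world relationships

    public SensorTwin(String uri) {
        this.uri = uri;
    }

    // Called for every raw message from the source; no store-then-analyze step.
    public void onRawSample(double value, long timestampMillis) {
        this.lastValue = value;
        this.lastUpdatedMillis = timestampMillis;
        // Linked twins see the update immediately, like a subscription.
        for (SensorTwin neighbor : links) {
            neighbor.onNeighborUpdate(this);
        }
    }

    // A linked twin can react to its neighbors' state changes.
    protected void onNeighborUpdate(SensorTwin neighbor) {
        // e.g. recompute a local aggregate from neighbor.currentValue()
    }

    public void linkTo(SensorTwin other) {
        links.add(other);
    }

    public double currentValue() {
        return lastValue;
    }

    public String uri() {
        return uri;
    }

    public static void main(String[] args) {
        SensorTwin loop = new SensorTwin("/sensor/loop-42");
        SensorTwin intersection = new SensorTwin("/intersection/7");
        loop.linkTo(intersection);            // containment: loop belongs to intersection
        loop.onRawSample(1.0, System.currentTimeMillis());
        System.out.println(loop.uri() + " = " + loop.currentValue());
    }
}
```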
Tobias Macey
0:16:38
Yeah, and I'm wondering if you can expand upon that in terms of the type of workflow that a developer who is building an application on top of Swim would go through, as far as identifying what those ontologies are and defining how the links will occur as the data streams into the different nodes in the Swim graph.
Simon Crosby
0:17:01
So the key point here is that we think we can build like 80% of an app, okay, from the data. That is, we can find all of the big structural properties of relevance in the data, and then let the application builder drop in what they want to compute. And so let me try and express it slightly differently. Job one, we believe, is to build a model of the stateful digital twins, which almost mirror their real world counterparts. So at all points in time, their job is to represent the real world as faithfully, and as close to real time, as they can, in a stateful way which is relevant to the problem at hand. Okay, so the light turned red, so I'm going to have a red light, okay, something like that. And the first problem is to build these essential digital twins, which are interlinked, which represent the real world things, okay. And it's important to separate that from the application layer component of what you want to compute from that. So frequently we see people making the wrong decision, that is, hard coupling the notion of prediction, or learning, or any other form of analysis into the application, in such a way that any change requires programming. And we think that that's wrong. So job one is to have this faithful representation of the real time world, in which everything evolves its own state whenever its real world twin evolves, and evolves statefully. And then the second component, which we do on a separate timescale, is to inject operators which are going to then compute on the states of those things at the edge, right. So we have a model which represents the relationships between things in the real world. It's attempting to evolve as close as possible to real time in relationship to its real world twin, and it's reflecting its links and so on. But the notion of what you want to compute from it is separate from that and decoupled. And so the second step, which is building an application right here, right now, is to drop in an operator which is going to compute a thing from that. So you might say, cool, I want every intersection to compute, you know, to be able to learn from its own behavior and predict. That's one thing. Or we might say, I want to compute the average wait time of every car in the city. That's another thing. So the key point here is that computing from these rapidly evolving worldviews is decoupled from the actual model of what's going on in that world at any point in time. So Swim reflects that decoupling by allowing you to bind operators to the model whenever you want.
0:20:45
Okay,
0:20:46
By whenever you want, I mean you can write them in code, in bits of Java or whatever, but also, you can write them in blobs of JavaScript or Python, and dynamically insert them into a running model. Okay, so let me make that one concrete for you. I could have a deployed system, which is a model, a deployed graph of digital twins, which are currently mirroring the state of Las Vegas. And then dynamically, a data scientist says, let me compute the average wait time of red cars at these intersections, and drops that in as a blob of JavaScript attached to every digital twin for an intersection. That is what I mean by an application. And so we want to get to this point where the notion of an application is not something deeply hidden in somebody's, you know, Jupyter notebook, or in some programmer's brain, and they quit and wandered off to the next startup 10 months ago; an application is what anyone can, right now, drop into a running model.
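As an illustration of that decoupling, here is a minimal plain-Java sketch in which the operator (here, the average wait time of red cars) is just a function bound to the current states of the twins after the model already exists. The types and names are hypothetical, not Swim's operator API.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

// Hypothetical sketch of binding an operator to a running model after the
// fact. The "model" here is just a list of intersection states; the operator
// is a function dropped in later, decoupled from how the states evolve.
public class OperatorBindingSketch {

    // Snapshot of a digital twin's current in-memory state.
    record IntersectionState(String id, double redCarWaitSeconds) {}

    // Apply an operator to whatever the current states are and return the result.
    static double averageOver(List<IntersectionState> current,
                              ToDoubleFunction<IntersectionState> operator) {
        return current.stream().mapToDouble(operator).average().orElse(Double.NaN);
    }

    public static void main(String[] args) {
        List<IntersectionState> model = List.of(
                new IntersectionState("vegas/123", 42.0),
                new IntersectionState("vegas/124", 55.0));

        // The "application": an operator added to the running model at any time.
        ToDoubleFunction<IntersectionState> redCarWait = IntersectionState::redCarWaitSeconds;

        System.out.println("avg red-car wait = " + averageOver(model, redCarWait));
    }
}
```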
Tobias Macey
0:22:02
So the way that sounds to me is that Swim essentially acts as the infrastructure layer you deploy to ingest the data feeds from the sets of sensors, and then it will automatically create these digital twin objects to be able to have some digital manifestation of the real world, so that you have a continuous stream of data and how it's all interrelated. And then it sort of flips the order of operations in terms of how the data engineer and the data scientist might work together. In the way that most people are used to, you will ingest the data from these different sensors, bundle it up, and then hand it off to a data scientist to be able to do their analyses; they generate a model and then hand it back to the data engineer to say, okay, go ahead and deploy this and then see what the outputs are. Where instead, the Swim platform essentially acts as the delivery mechanism and the interactive environment for the data scientist to be able to experiment with the data, build a model, and then get it deployed on top of the continuously updating live stream of data, and then be able to have some real world interaction with those sensors in real time as they're doing that, to be able to feed that back to say, okay, red cars are waiting 15% longer than other cars at these two intersections, and I want to be able to optimize our overall grid, and that will then feed back into the rest of the network to have some physical manifestation of the analysis that they're trying to perform, to try and maybe optimize overall traffic.
Simon Crosby
0:23:39
So there are some consequences to that. First of all, every algorithm has to compute stuff on the fly. So if you look at, you know, the kind of store-and-then-analyze approach to big data type learning, or training, or anything else, you have all the data there; here, you don't. And so every algorithm that is part of Swim is coded in such a way as to continually process data. And that's fundamentally different to most frameworks. Okay, so for example,
0:24:21
the learn and predict cycle is, you know, you mentioned training and so on, and that's very interesting. But the notion of training implies that I collect and store some training data, and that it's complete and useful enough to train the model and then hand it back. You know, what if it isn't? And so in Swim we don't do that. I mean, we can if you want; if you have a model, it's no problem for us to use that too. But instead, in Swim, the input vector, say to a prediction, say to a DNN, is precisely the current state of the digital twins for some bunch of things, right? Maybe the set of sensors in the neighborhood of an urban intersection. And so this is a continuously varying, real world triggered scenario in which real data is fed through the algorithm, but is not stored anywhere. So everything is fundamentally streaming. So we assume that data streams continually, and indeed, the output of every algorithm streams continually. So what you see when you compute an average is the current average. Okay, and when you're looking for heavy hitters, what you see is the current heavy hitters. All right. And so every algorithm has its streaming twin, I guess. And part of the art in the Swim context is reformulating the notion of analysis into a streaming context, so that you never expect a complete answer, because there isn't one; it's just what I've seen until now. Okay, and what I've seen until now has been fed through the algorithm, and this is the current answer. And so every algorithm computes and streams. And so the notion of linking, which I described earlier for Swim between digital twins, say, applies also to these operators, which effectively link to the things they want to compute from, and then they stream their results. Okay, so if you link in, you see continual updates. And for example, that stream could be used to feed a Kafka implementation, which would serve a bunch of applications; you know, the notion of streaming is pretty well understood, so we can feed other bits of the infrastructure very well. But fundamentally, everything is designed to stream.
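Here is a minimal Java sketch of what that streaming reformulation could look like for the two analyses mentioned, an average and heavy hitters: each new observation is folded into in-memory state, and the current answer is always available. Exact counts are used here for clarity; a real deployment would more likely use a bounded sketch, and none of these names come from Swim itself.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "streaming twin" of two batch analyses: a running average and
// running heavy-hitter counts. Each update folds one observation into state;
// nothing is stored and re-scanned, and the current answer is always readable.
public class StreamingOperators {
    private long n = 0;
    private double mean = 0.0;                                 // current streaming average
    private final Map<String, Long> counts = new HashMap<>();  // current heavy hitters

    // Fold one observation into the state; the "answer" is whatever is current.
    public void observe(String key, double value) {
        n++;
        mean += (value - mean) / n;                            // incremental mean update
        counts.merge(key, 1L, Long::sum);
    }

    public double currentAverage() {
        return mean;
    }

    public String currentHeaviestKey() {
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    public static void main(String[] args) {
        StreamingOperators op = new StreamingOperators();
        op.observe("red", 30.0);
        op.observe("blue", 10.0);
        op.observe("red", 50.0);
        System.out.println("avg=" + op.currentAverage()
                + " heaviest=" + op.currentHeaviestKey());
    }
}
```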
Tobias Macey
0:27:21
It's definitely an interesting approach to the overall workflow of how these analyses work. And one thing that I'm curious about is how data scientists and analysts have found working with this platform, in terms of the ways that they might be used to working; you know, I'm interested in what data scientists would think, or how they view this.
Simon Crosby
0:27:45
To be honest, in general, with surprise.
0:27:50
Our experience today has been largely with people who don't know what the heck they're doing in terms of data science. So they're trying to run an oil rig more efficiently; they have, what, about 10,000 sensors, and they want to make sure this thing isn't going to blow up, okay? So they tend to be heavily operationally focused folks. They're not data scientists, they never could afford one, and they don't understand the language of data science, or have the ability to build cloud based pipelines that you and I might be familiar with. So these are folks who effectively just want to do a better job given this enormous stream of data they have; they believe they have something in the data, they don't know what that might be, but they're keen to go and see. Okay. And so those are the folks we've spent most of our time with. I'll give you a funny example, if you'd like.
Tobias Macey
0:29:00
Sure, that would be illustrative.
Simon Crosby
0:29:02
we work with a manufacturer of aircraft.
0:29:05
And they have a very large number of RFID tagged parts, and equipment too. And if you know anything about RFID, you know it's pretty useless stuff; it's built from technology of about 10 or 20 years ago. And so what they were getting, from about 2,000 readers, was about 10,000 reads a second, and each one of these reads is simply being written into an Oracle database; at the end of the day they try and reconcile the whole thing with whatever parts they have, and where everything is, and so on. The Swim solution to this is entirely different, and it gives you a good idea of why we care about modeling data, or thinking about data differently. We simply built a digital twin for every tag; the first time it's seen, we create one, and then if they haven't been seen for a long time, they just expire. And whenever a reader sees a tag, it simply says, hey, I saw you, and this was the signal strength. Now, because tags get seen by multiple readers, each digital twin of a tag does the obvious thing: it triangulates from the readers. Okay, so it learns the attenuation in different parts of the plant. It's very simple; actually, the word "learn" there is rather a stretch, it's a pretty straightforward calculation, and then suddenly it can work out where it is in 3-space. So instead of an Oracle database full of tag reads, and lots and lots of post processing, you know, you've got a couple of Raspberry Pis, and each one of these Raspberry Pis, you know, has millions of these tags running, and then you can ask any one of them where it is. Okay, and then you can do even more, you can say, hey, show me all the things within three meters of this tag, okay? And that allows you to see components being put together into real physical objects, right? So as a fuselage gets built up, or the engine, or whatever it is. And so a problem which was tons of infrastructure and tons of tag reads got turned into a couple of Raspberry Pis, with stuff which kind of self organized into a form which could feed real time visualization and control around where bits of infrastructure were.
0:31:52
Okay. Now, that
0:31:54
was transformative for this outfit, which quite literally had no better way of tackling the problem. Does that make sense?
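Here is a minimal Java sketch of the kind of calculation a per-tag digital twin could do when several readers report "I saw you, at this signal strength": estimate the tag's position as a signal-strength-weighted centroid of the readers' known positions. This is a stand-in illustration under assumed names; it is not the vendor's actual triangulation or learned-attenuation model.

```java
import java.util.List;

// Sketch of a per-tag position estimate from multiple reader sightings, using
// a simple signal-strength-weighted centroid of known reader positions.
public class TagLocator {

    record Reading(double readerX, double readerY, double readerZ, double strength) {}

    static double[] estimatePosition(List<Reading> readings) {
        double wx = 0, wy = 0, wz = 0, wSum = 0;
        for (Reading r : readings) {
            wx += r.readerX() * r.strength();
            wy += r.readerY() * r.strength();
            wz += r.readerZ() * r.strength();
            wSum += r.strength();
        }
        return new double[] { wx / wSum, wy / wSum, wz / wSum };
    }

    public static void main(String[] args) {
        List<Reading> seenBy = List.of(
                new Reading(0, 0, 3, 0.8),    // strongest reader pulls the estimate closest
                new Reading(10, 0, 3, 0.3),
                new Reading(0, 10, 3, 0.1));
        double[] p = estimatePosition(seenBy);
        System.out.printf("tag at (%.1f, %.1f, %.1f)%n", p[0], p[1], p[2]);
    }
}
```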
Tobias Macey
0:32:02
Yeah, that's definitely a very useful example of how this technology can flip the overall order of operations, and just the overall capabilities of an organization to be able to answer useful questions. And the idea of going from, as you said, an Oracle database full of just rows and rows of records of "this tag read at this point in this location", and then being able to actually get something meaningful out of it, as far as "this part is in this location in the overall reference space of the warehouse", is definitely transformative, and probably gave them weeks or months worth of additional lead time for being able to predict problems or identify areas for potential optimization.
Simon Crosby
0:32:47
Yeah, I think we saved them $2 million a year. Let me tell you, from this tale come two interesting things. First of all, if you show up at a customer's site with a service running on a Raspberry Pi, you can't charge them a million bucks. Okay, that's lesson one. Lesson two is that the volume of the data is not relevant, or not related, to the value of the insight. Okay. I mentioned traffic earlier. In the city of Las Vegas, we get about 15 or 16 terabytes per day from the traffic infrastructure, and every digital twin of every intersection in the city predicts two minutes into the future, okay. And those insights are sold in an API in Azure to customers like Audi and Uber and Lyft, and whatever else, okay. Now, that's a ton of data, okay; you couldn't even think of where to put it in your cloud. But the value of the insight is relatively low. That is, the total amount of money we can extract from Uber per month per intersection is low. Alright, and by the way, all this stuff is open source, you can go grab it and play and hopefully make your city better. So from that you can see it's not a high enough value for me to do anything other than say, go grab it and run. So: vast amounts of data, and a relatively important, but not commercially huge, value.
Tobias Macey
0:34:35
And another aspect of that case in particular is that despite this volume of data, it might be interesting for being able to do historical analyses, but in terms of the actual real world utility, it has a distinct expiration period, where you have no real interest in the sensor data as it existed an hour ago, because that has no particular relevance to your current state of the world and what you're trying to do with it at this point in time.
Simon Crosby
0:35:03
Yeah, you have historical interest in the sense of wanting to know if your predictions were right, or wanting to know for traffic engineering purposes, which run on a slower time scale. So some form of bucketing, or sampled, coarser recording, is useful. And sure, that's easy. But you certainly do not want to record it at the full data rate.
Tobias Macey
0:35:30
And then going back to the other question I had earlier, when we were talking about the workflow of an analyst or a data scientist pushing out their analyses live to these digital twins and potentially having some real world impact, I'm curious if the Swim platform has some concept of a dry run mode, where you can deploy this analysis and see what the output of it is, and maybe what impact it would have, without it actually manifesting in the real world, for cases where you want to ensure that you're not accidentally introducing error or potentially having a dangerous outcome, particularly in the case that you were mentioning of an oil and gas rig.
Simon Crosby
0:36:12
Yeah, so with a 1% exception, everything we've done thus far has been open loop, in the sense that we're informing another human or another application, but we're not directly controlling the infrastructure. And the value of a dry run would be enormous, you can imagine, in those scenarios, but thus far we don't have any use cases that we can report of using Swim for direct control. We do have use cases where, on a second by second basis, we are predicting whether machines are going to make an error as they build PCBs for servers, and so on. But then again, what you're doing is you're calling for somebody to come over and fix the machine; you're not, you know, trying to change the way the machine behaves.
Tobias Macey
0:37:06
And now digging a bit deeper into the actual implementation of Swim, I'm wondering if you can talk through how the actual system itself is architected, and some of the ways that it has evolved as you have worked with different partners to deploy it into real world environments and get feedback from them, and how that has affected the overall direction of the product roadmap.
Simon Crosby
0:37:29
So Swim is a couple of megabytes of Java, essentially. Okay? So it's extremely lean; we tend to deploy in containers using the GraalVM. It's very small; we can run in, you know, probably 100 megabytes or so. And so when people tend to think of edge, they tend to think of running in the gateways or things; we don't really think of edge that way. An important part of defining edge, as far as we're concerned, is simply gaining access to streaming data. We don't really care where it is, but we need to be small enough to get onto limited amounts of compute towards the physical edge. And, you know, the product has evolved in the sense that originally it was a way of building applications for the edge, and you'd sit down, write them in Java, and so on.
0:38:34
Latterly, this ability to simply
0:38:39
let the data build the app, or most of the app, came about in response
0:38:46
to customer needs.
0:38:49
But Swim is deployed typically in containers, and for that we have, in the current release, relied very heavily on the Azure IoT Edge framework. And that is magical, to be quite honest, because we can rely on Microsoft machinery to deal with all of the painful bits of deployment and lifecycle management for the code base and the application as it runs. These are not things we are really focused on; what we're trying to do is build a capability which will respond to data and do the right thing for the application developer. And so we are fully published in the Azure IoT Hub, and you can download this and get going and manage it through its lifecycle that way. And so in several use cases now, what we're doing is we are used to feed fast timescale insights at the physical edge, we are labeling data and then dropping it into Azure ADLS Gen2, and feeding insights into applications built in Power BI. Okay, so that's just for the sake of machinery, you know, using the Azure framework for management of the IoT edge. By the way, I think IoT Edge is about the worst possible name you could ever pick, because all you want is a thing to manage the lifecycle of a capability which is going to deal with fast data; whether it's at the physical edge or not is immaterial. But that's basically what we've been doing: relying on Microsoft's fabulous lifecycle management framework for that, plugged into the IoT Hub, and all the Azure services generally, for back end things which enterprises love.
Tobias Macey
0:41:00
Then another element of what we're discussing, in the use case examples that you were describing, particularly for instance with the traffic intersections, is the idea of discoverability and routing between these digital twins, as far as how they identify the cardinality of which twins are useful to communicate with and establish those links, and also, at the networking layer, how they handle network failures in terms of communication, and ensuring that if there is some sort of fault they're able to recover from it.
Simon Crosby
0:41:36
So let's talk about two layers. One is the app layer, and the other one is the infrastructure which is going to run this effectively distributed graph.
0:41:45
And so Swim is going to build this graph for us
0:41:49
from the data. What that means is that the digital twins, by the way, we technically call these web agents, these little web agents are going to be distributed over some fabric of physical instances, and they may be widely geographically
0:42:06
distributed. And
0:42:08
so there is a need, nonetheless, at the application layer, for things which are related in some way, linked physically or, you know, in some other way, to be able to link to each other. That is to say,
0:42:23
to have a sub. And so links
0:42:27
require that objects, which are the digital twins, have the ability to inspect
0:42:33
each other's data,
0:42:34
right, their members. And of course, if something is running on the other side of the planet, and you're linked to it, how on earth is that going to work? So we're all familiar with object oriented languages and objects in one address space; that's pretty easy. We know what an
0:42:50
object handle or an object
0:42:51
reference or a pointer or whatever is; we get it. But when these things distribute, that's hard. And so in Swim, if you're an application programmer, you will simply use object references, but these resolve to URIs. So in practice, at runtime, when I link to you, I link to your URI. And that link,
0:43:17
which is resolved by Swim,
0:43:19
enables a continuous stream of updates to flow from you to me. And if we happen to be on different instances, that is, running in different address spaces, then that will go over a mesh, over a direct WebSockets connection between your instance and mine. And so in any Swim deployment, all instances are interlinked; they each link to each other using a single WebSockets connection, and then these links permit the flow of information between linked digital twins. And what happens is, whenever a change in the in-memory state of a linked, you know, digital twin happens, its instance then streams, to every other linked object, an update to the state for that thing, right. So what's required is, in effect, a streaming update to JSON: because we're going to record our model in some form of JSON-like state or whatever, we need to be able to update little bits of it as things change, and we use a protocol called WARP for that. And that's a Swim capability which we've open sourced. What that really does is bring streaming to JSON, right, streaming updates to parts of a JSON model. And then every instance in Swim maintains its own view of the whole model. So as things stream in, the local view of the model changes. But the view of the world is very much one of a consistency model based on whatever happens to be executing locally and whatever it needs to view; it's an eventually consistent data model, in which every node eventually learns the entire thing.
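Here is a minimal plain-Java sketch of that consistency model: a local view of the whole model keyed by node URI and lane, updated piecemeal as changes stream in over the links, converging a link's delay behind real time. It illustrates the idea only and is not the actual WARP wire protocol or Swim's client API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an eventually consistent local view of the model: each instance
// applies small streamed updates (node URI, lane, value) as they arrive over
// its WebSocket links, updating only the bits that changed.
public class LocalModelView {
    // "nodeUri|laneName" -> latest value seen for that lane
    private final Map<String, Object> view = new ConcurrentHashMap<>();

    // Called for every update streamed in from a linked instance.
    public void onStreamedUpdate(String nodeUri, String lane, Object value) {
        view.put(nodeUri + "|" + lane, value);   // update only the part that changed
    }

    public Object current(String nodeUri, String lane) {
        return view.get(nodeUri + "|" + lane);
    }

    public static void main(String[] args) {
        LocalModelView local = new LocalModelView();
        // Updates arriving from remote digital twins:
        local.onStreamedUpdate("/intersection/7", "phase", "red");
        local.onStreamedUpdate("/intersection/7", "waitSeconds", 42.0);
        System.out.println(local.current("/intersection/7", "phase"));
    }
}
```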
Tobias Macey
0:45:22
And generally, eventually here means, you know, a link's delay away from real time. And then the other aspect of the platform is the statefulness of the computation, and as you're saying, that state is eventually consistent, dependent on the communication delay between the different nodes within the context graph. And then in terms of data durability, one thing I'm curious about is the length of state, or sort of the overall buffer, that is available, which I'm guessing is largely dependent on where it happens to be deployed and what the physical capabilities are of the particular node. And then also, as far as persisting that data for maybe historical analysis, my guess is that that relies on distributing the data to some other system for long term storage. I'm just wondering what the overall sort of pattern or paradigm is for people who want to be able to have that capability?
Simon Crosby
0:46:24
Oh, this is a great question. So in general, we move from some horrific raw data form on the wire, from the original physical thing, to, you know, something much more efficient and meaningful in memory, and generally much more concise, so we get a whole ton of data reduction on the way. And the system is focused on streaming; we don't stop you storing your original data if you want to, you might just have it stream to disk or whatever, but the key thing in Swim is we don't do that on the hot path. Okay, so things change their state in memory, and maybe compute on that, and that's what they do first and foremost, and then we lazily throw things to disk, because disk happens slowly relative to compute. And so typically, what we end up storing is the semantic state of the context graph, as you put it, not the original data.
0:47:23
That is, for example, in traffic world,
0:47:26
you know, we store things like "this light turned red at this particular time", not the voltage on all the registers in the light, and so we get massive data reduction. And that form of data is very amenable to storage in the cloud, say, or somewhere else. And it's even affordable at reasonable rates.
0:47:50
So the key thing for Swim and storage is
0:47:53
you're going to remember as much as you want, as much as you have space for, locally. And then storage in general is not on the hot path, it's not on the compute-and-stream path, and in general we're getting huge data reductions for every step up the graph we make. So for example, if I go from, you know, all the states of all the traffic sensors to predictions, then I've made a very substantial reduction in the data I'm managing anyway, right. So as you move up this computational graph, you reduce the amount of data you're going to have to store. And it's up to you, really, to pick what you want to keep.
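A minimal Java sketch of that semantic data reduction, with a simple threshold standing in for however the twin actually derives its state: the twin sees every raw sample on the hot path but records only the state transitions ("the light turned red at time t"), which is what would go off to long term storage. The event shape and threshold are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of reducing raw samples to semantic state-change events: only the
// transitions are recorded, not every raw reading.
public class SemanticRecorder {

    record PhaseChange(String lightId, String phase, long atMillis) {}

    private String lastPhase = null;
    private final List<PhaseChange> log = new ArrayList<>(); // what actually gets stored

    // Called on the hot path for every raw sample; storage stays off the hot path.
    public void onRawSample(String lightId, double redChannelVoltage, long tMillis) {
        String phase = redChannelVoltage > 2.5 ? "red" : "not-red"; // illustrative threshold
        if (!phase.equals(lastPhase)) {
            log.add(new PhaseChange(lightId, phase, tMillis));      // record only the transition
            lastPhase = phase;
        }
    }

    public List<PhaseChange> recordedEvents() {
        return log;
    }

    public static void main(String[] args) {
        SemanticRecorder rec = new SemanticRecorder();
        for (int i = 0; i < 1000; i++) {                 // 1000 raw samples...
            rec.onRawSample("light-9", i < 500 ? 0.2 : 4.8, i);
        }
        System.out.println(rec.recordedEvents());        // ...reduce to 2 semantic events
    }
}
```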
Tobias Macey
0:48:39
in terms of your overall experience, working as the CTO of this organization and shepherding the product direction and the capabilities of this system, I'm wondering what you have found to be some of the most challenging aspects, both from the technical and business sides, and some of the most useful or interesting or unexpected lessons that you've learned in the process.
Simon Crosby
0:49:03
So what's hard is that the real world is not the cloud native world. We've all seen fabulous examples of Netflix and Amazon and everybody else doing cool things with the data they have. But you know, if you're an oil company and you have a rig out at sea, you just don't know how to do this. So, you know, we can come at this with whatever skill sets we have; what we find is that the real world large enterprises of today are still ages behind the cloud native folks. And that's a challenge. Okay, so getting to be able to understand what they need, because they still have lots of assets which are generating tons of data, is very hard. Second, this notion of edge is continually confusing. And I mentioned previously that I would never have chosen IoT Edge, for example, as the Azure name, because it's not about IoT, or maybe it is, but let me give you two examples. One is traffic lights, say, physical things; it's pretty obvious there what the notion of edge is, it's the physical edge. But the other one is this: we build a real time model, in memory, for tens of millions of handsets for a large mobile carrier, and it evolves all the time, right, in response to continually received signals from these devices.
0:50:38
There is no edge;
0:50:40
that is, it's data that arrives over the internet, and we have to figure out where the digital twin for that thing is, and evolve it in real time. Okay, and there, you know, there is no concept of a network edge or a physical edge with data traveling over them. We just have to make decisions on the fly and learn and update the model.
0:51:06
So for me, edge is the following thing: edge is stateful,
0:51:13
And
0:51:15
cloud is all about REST. Okay, so what I'd say is, the fundamental difference between the notion of edge and the notion of cloud that I would like to see broadly understood is that whereas REST and databases made the cloud very successful, in order to be successful with, you know, this boundless streaming data, statefulness is fundamental, which means REST goes out the door. And we have to move to a model which is streaming based, with stateful computation.
Tobias Macey
0:51:50
And then in terms of the future direction, both from the technical and business perspective, I'm wondering what you have planned for both the enterprise product for Swim.ai, as well as the open source kernel in the form of SwimOS.
Simon Crosby
0:52:06
From an open source perspective, we,
0:52:08
you know, we don't have the advantage of having come up at LinkedIn or something, where it was built in-house at scale and then came out as a startup. But we think what we've built is something of phenomenal value, and we're seeing that grow. And our intention is to continually feed the community as much as it can take. And we're just getting more and more stuff ready for open sourcing all the time.
0:52:36
So we want to see our community
0:52:40
go and explore new use cases for using this stuff, and we are totally dedicated to empowering our community. From a commercial perspective, we are focused on our world, which is edge, and the moment you say edge, people tend to get an idea of the physical edge or something in their heads, and then, you know, very quickly you can get put in a bucket of IoT. I gave an example of, say, building a model in real time in AWS for, you know, a mobile carrier. Our intention is to continue to push the bounds of what edge means, and to enable people to build streaming pipelines for massive amounts of data easily, without complexity, and without the skill set required to invest in these traditionally fairly heavyweight pipeline components such as Beam and Flink and so on,
0:53:46
0:53:47
to enable people to get insights cheaply, and to make the problem of dealing
0:53:51
with new insights from data very easy to solve.
Tobias Macey
0:53:56
And are there any other aspects of your work on Swim, or the space of streaming data and digital twins, that we didn't discuss yet that you'd like to cover before we close out the show?
Simon Crosby
0:54:08
I think we've done a pretty good job. You know, I think there are a bunch of parallel efforts, and that's all goodness. One of the hardest things has been to get this notion of statefulness more broadly accepted. And I see the function vendors out there pushing their idea of stateful functions as a service, and really, these are stateful actors. And there are others out there too. So for me, step number one is to get people to realize that if we're going to deal with this data, REST and databases are going to kill us, okay? That is, there is so much data, and the rates are so high, that you simply cannot afford to use a stateless paradigm for processing; you have to do it statefully. Because, you know, forgetting context every time and then looking it up again is just too expensive.
Tobias Macey
0:55:08
For anybody who wants to follow along with you, get in touch, and keep track of what you're up to, I'll have you add your preferred contact information to the show notes. And as a final question, I would just like to get your perspective on what you see as being the biggest gap in the tooling or technology that's available for data management today.
Simon Crosby
0:55:26
Well, I think, I mean, there isn't much tooling, to be perfectly honest. There are a bunch of really fabulous open source code bases and experts in their use, but that's far from tooling. And then there is, I guess, an extension of Power BI downwards, which is something like the monster Excel spreadsheet world, right? So you find all these folks who are pushing that kind of, you know, end user model of data, doing great things, but leaving a huge gap between the consumer of the insight and the data itself; it assumes the data is already there in some good form and can be put into a spreadsheet or view, whatever it happens to be. So there's this huge gap in the middle, which is: how do we build the model? What does the model tell us, just off the bat? How do we do this, constructively, in large numbers of situations? And then how do we dynamically insert operators which are going to compute useful things for us on the fly into running models?
Tobias Macey
0:56:44
Well, thank you very much for taking the time today to join me and discuss the work that you've been doing on the swim platform. It's definitely a very interesting approach to data management and analytics, and I look forward to seeing the direction that you take it in the future. So I appreciate your time on that. I hope you enjoy the rest of your day.
Simon Crosby
0:57:01
Thanks very much. You've been great.
Tobias Macey
0:57:03
Thank you for listening. Don't forget to check out our other show, Podcast.__init__ at pythonpodcast.com, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site at dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it. Email hosts@dataengineeringpodcast.com with your story. And to help other people find the show, please leave a review on iTunes and tell your friends and coworkers.